Release Notes


Product: VoltDB
Version: 6.1.4
Release Date: February 3, 2017

This document provides information about known issues and limitations to the current release of VoltDB. If you encounter any problems not listed below, please be sure to report them to [email protected]. Thank you.

Special Considerations for Existing Customers

There are a few changes to existing functionality that could impact applications when upgrading from pre-V6 versions. Existing customers should take note of the following changes:

  • New default query timeout

    By default, VoltDB now sets a system-wide timeout of 10 seconds for read-only queries. Previously, there was no default timeout. As a result, applications that ran successfully in the past may encounter timeout errors for long-running queries.

    If the new default is causing problems, you can change the default timeout in the deployment file when starting the database (or by updating a running database). Alternately, you can set a query-specific timeout by using the Java method callProcedureWithTimeout when invoking problematic queries. See the section on "Query Timeout" in the VoltDB Administrator's Guide for details, and the deployment file sketch following this list.

  • New default memory resource limit

    VoltDB also now sets a default resource limit of 80% on memory usage. If memory usage exceeds this limit, the database is set to read-only mode. The resource limit is designed to prevent the server process from running out of memory and crashing. You can set a different memory resource limit in the deployment file using the <resourcemonitor> element under <systemsettings>. However, setting a higher memory limit is not recommended. See the section on resource monitoring in the VoltDB Administrator's Guide for details.

  • New guard against ambiguous column references in SQL queries

    Previously, it was possible to define queries with ambiguous column references — where columns from different tables or subqueries have the same name but do not include an explicit table prefix. The VoltDB planner no longer allows such ambiguous queries. You must include a table prefix for any columns that could come from more than one of the joined tables. This change is compliant with the SQL standard and consistent with other database vendors.

    If your application includes queries with ambiguous column references, you will need to rewrite the queries to be unambiguous before they can be executed (or stored procedures loaded) in VoltDB 6.0 and later. See the note later in the release notes for more details about correcting ambiguous column references.
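For reference, the following minimal deployment file fragment raises the system-wide query timeout described in the first item above (the 30-second value is illustrative; the timeout is specified in milliseconds):

<systemsettings>
    <query timeout="30000"/>
</systemsettings>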

Upgrading From Older Versions

The process for upgrading from a previous version of VoltDB is as follows (a consolidated command sketch appears after these steps):

  1. Place the database in admin mode (using voltadmin pause).

  2. Perform a manual snapshot of the database (using voltadmin save --blocking).

  3. Shutdown the database (using voltadmin shutdown).

  4. Upgrade the VoltDB software.

  5. Restart the database (using the voltdb create --force action).

  6. Restore the snapshot created in Step #2 (using voltadmin restore).

  7. Return the database to normal operations (using voltadmin resume).
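Taken together, the steps map to a command sequence like the following sketch (the snapshot directory, snapshot name, and deployment file name are illustrative, and cluster-specific options such as --host are omitted):

$ voltadmin pause
$ voltadmin save --blocking /tmp/voltdb/backup upgrade
$ voltadmin shutdown
# ... install the new VoltDB software kit ...
$ voltdb create --force --deployment=deployment.xml
$ voltadmin restore /tmp/voltdb/backup upgrade
$ voltadmin resume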

For customers upgrading from version 5.9 or later, it is possible to minimize the downtime required for the upgrade by performing a "hot" upgrade across clusters using database replication (DR). See the section on upgrading across clusters in the VoltDB Administrator's Guide for more information.

For customers upgrading from V4.x or earlier releases of VoltDB, please see the V4.0 Upgrade Notes.

Changes Since the Last Release

Users of previous versions of VoltDB should take note of the following changes that might impact their existing applications.

1. Release V6.1.4 (February 3, 2017)

1.1. Recent improvement

The following improvement has been made:

  • Previously, VoltDB created a file (log/volt.log) in the user's current working directory whenever you started the sqlcmd utility. However, no messages were ever written to the log, so this superfluous file is no longer created.

2. Release V6.1.3 (December 2, 2016)

2.1. Recent improvement

The following limitation has been resolved:

  • Previously, if updating a view took too long (for example, if the table contained significant amounts of data before the view was created), the statement could fail, causing the database server process to stop on either an individual node or the cluster as a whole. This issue has been resolved.

3. Release V6.1.2 (June 29, 2016)

The following issue was resolved in this release.

3.1. During database replication, a node failure on the replica cluster could cause the cluster to crash.

In VoltDB clusters, the node managing multi-part transactions is known as the multi-part initiator (MPI). If, during DR, the node acting as the MPI on the replica cluster stopped, intentionally or by accident, the cluster could fail with the fatal error message "Attempting to apply same binary log segment more than once". This issue has been resolved.

4. Release V6.1.1 (April 27, 2016)

4.1. Recent improvements

The following limitations have been resolved:

  • Previously, an UPSERT statement specifying a subset of columns would update the values of all columns in the table row. This issue has been corrected.

  • There was an edge case when using database replication (DR): if, after DR started, a K-safe cluster stopped and recovered, then one node failed and rejoined, and finally another node on the cluster stopped, the first node would also stop. This issue has been resolved.

  • There was an issue in previous releases with conflict resolution in cross datacenter replication (XDCR). Timestamp mismatches were not resolved correctly, causing the two databases to diverge. However, the conflict log reported no conflict. The conflict resolution process has now been corrected.

  • There was an issue where resetting a DR cluster (with the voltadmin DR RESET command) while a node was actively processing write transactions could cause the node to fail with a segmentation fault. This issue has been resolved.

5. Release V6.1 (March 4, 2016)

5.1. New streaming data capabilities

This release introduces a new concept in VoltDB: streams. Streams act like virtual tables. You declare them like tables using the CREATE STREAM statement and you insert data into the stream using the INSERT statement. However, data inserted into a stream is not actually stored in the database. Data inserted into a stream can be analyzed (using views) and streamed directly to other business systems (using export).

To analyze streaming data, you define a stream using the CREATE STREAM statement, then define a view on that stream using the CREATE VIEW statement. This view allows you to perform summary analysis on the data as it passes through the database without paying the penalty of actually storing the data, all in a transactionally consistent way.

Although you cannot modify the underlying data of such views — because the stream is transient — views on streams are unique in that you can update the view itself if needed. For example, you can create a daily summary of a stream by resetting the view's values to zero at midnight using a DELETE FROM {view-name} or UPDATE {view-name} statement.
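For example, the following sketch (stream, view, and column names are hypothetical) maintains a per-user login count over a stream and resets it with a DELETE:

CREATE STREAM events PARTITION ON COLUMN user_id (
    user_id BIGINT NOT NULL,
    login TIMESTAMP
);
CREATE VIEW events_by_user (user_id, total_logins)
    AS SELECT user_id, COUNT(*)
    FROM events GROUP BY user_id;

-- Reset the summary, for example at midnight:
DELETE FROM events_by_user;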

You can also export data from streams to external systems using VoltDB's existing export infrastructure. In fact, streams replace the old EXPORT TABLE concept. Instead of defining a table then declaring it as an export table, you now define a stream and assign it to an export target all in one statement. For example:

CREATE STREAM visits 
  EXPORT TO TARGET archive (
    user_id BIGINT NOT NULL,
    login TIMESTAMP
);

In the export deployment configuration, the old stream attribute is now replaced by target, to make the terminology consistent. Note that, although the EXPORT TABLE DDL statement and the deployment stream attribute are now deprecated, they will still be supported for backwards compatibility until some future major release.
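In the deployment file, the export configuration references the same target name. A minimal sketch pairing the stream above with the file connector (the connector type and properties are illustrative):

<export>
    <configuration enabled="true" target="archive" type="file">
        <property name="type">csv</property>
        <property name="nonce">archive</property>
        <property name="outdir">/tmp/export</property>
    </configuration>
</export>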

See the description of the CREATE STREAM statement in the Using VoltDB manual for more information.

5.2. Support for indexes on geospatial GEOGRAPHY columns

VoltDB now supports GEOGRAPHY columns in indexes. The index can be applied to instances of the CONTAINS() function where the indexed column is the first argument. For example, an index including the GEOGRAPHY column border could optimize the following query:

SELECT assets.id, counties.county, counties.state 
    FROM counties, assets
    WHERE CONTAINS(counties.border, assets.loc);
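The index itself is declared with ordinary DDL. For example, assuming the counties table from the query above (the index name is arbitrary):

CREATE INDEX county_border_idx ON counties (border);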

5.3. New boolean function for measuring geospatial distances

The new geospatial function DWITHIN() determines whether two geospatial values (two points, or a point and a geographical region) are within a specified distance of each other. For example, the following query returns all the restaurants within 5,000 meters of a tourist:

SELECT r.name, r.address, DISTANCE(r.loc,t.loc)
    FROM restaurants AS r, tourists AS t
    WHERE DWITHIN(r.loc, t.loc,5000) AND t.id = ?
    ORDER BY DISTANCE(r.loc,t.loc) ASC;

5.4. Ability to load geospatial values with the csvloader

The csvloader now supports loading values into geospatial columns (GEOGRAPHY or GEOGRAPHY_POINT) by including the values as well-known text (WKT) in the CSV input. See the chapter on "Creating Geospatial Applications" in the VoltDB Guide to Performance and Customization for information on creating WKT compatible with the geospatial datatypes.
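For example, a CSV file containing WKT for a GEOGRAPHY_POINT column might be loaded as follows (the table, file name, and coordinates are hypothetical):

$ cat towers.csv
101,"POINT(-74.0059 40.7127)"
$ csvloader towers -f towers.csv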

5.5. Beta release of VoltDB Deployment Manager

We are working on a new deployment process for configuring and starting VoltDB clusters. The VoltDB Deployment Manager is a daemon process that supports both a RESTful API for scripting and a fully interactive web interface. Although not ready for production use, a beta version is included in the current software kits. Customers interested in trying out the new Deployment Manager and providing feedback should contact VoltDB support for more information.

5.6. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • In previous releases, if the operator used the wrong command to rejoin a failed node to the cluster (voltdb recover instead of voltdb rejoin), the cluster would fail. Now, the invalid operation fails but the cluster is not affected.

  • Previously, if the export subsystem could not write data to the export_overflow directory (for example, if the disk was full), the VoltDB server process would generate errors in the log but not stop. However, this behavior results in lost data. So, to preserve data integrity and durability, the server process now fails in this situation. In a K-safe cluster, the other nodes will continue, keeping the database online, until operators can address the system issues with the failed node and rejoin it to the cluster.

  • There was an issue where a SQL query would generate a run-time error if the query contained both a JOIN and a {column-value} IN {list} condition, where the column being evaluated was indexed. This problem has been corrected.

  • VoltDB V6.0 corrected many situations where ambiguous column references were previously allowed. However, there were still some edge cases that were not covered. Specifically, the ORDER BY clause in a JOIN query still allowed ambiguous column references. This issue has been resolved.

  • Previously, if a WHERE clause compared an indexed VARCHAR column to a value longer than the maximum length of the column (such as WHERE TEXT_COLUMN = '12345' on a shorter column), the query would generate a runtime error stating that the value exceeded the size of the column. However, the query is a comparison, not an insertion, so no error is warranted. This condition has been corrected.

  • There was an issue in the Kafka importer where, if all records for a topic had been imported (that is, there were no outstanding messages in the queue) and the database was stopped and restarted with the voltdb recover command, the import would restart from the beginning rather than from the last imported record. This problem has been corrected.

  • In rare cases, the Kafka importer issued an error stating that it "failed to stop the import bundles" when the database schema or deployment file settings were changed. Although annoying, this error did not indicate any failure in the system itself. This misleading error message has been corrected.

  • There was an issue with SELECT queries of partitioned tables where, if the partitioning column was included in the selection list multiple times, and that column was also part of the GROUP BY clause, the query returned incorrect results. This issue has been resolved.

6. Release V6.0.1 (February 24, 2016)

6.1. Performance tuning for cross datacenter replication (XDCR)

When running in virtualized environments, it is possible for the thread that creates binary logs for database replication (DR) to compete with transactions on the local cluster, causing occasional increased latency. To mitigate this situation, the initial size of the buffer for binary logs has been changed to 512KB, which is optimized for most workloads.

However, if you observe long latencies in your workload when running cross datacenter replication (XDCR), you may need to adjust the size of the buffer. To change the default buffer size, set the Java system property DR_DEFAULT_BUFFER_SIZE (through the VOLTDB_OPTS environment variable, as shown below) before starting the database process, specifying the size in bytes:

export VOLTDB_OPTS="-DDR_DEFAULT_BUFFER_SIZE=nnnn"

6.2. Recent improvements

The following limitations in previous versions have been resolved:

  • There was an issue with database replication (DR) where, if multiple transactions generated excessively large binary logs in a short period of time (more than 2 megabytes each in under a second), DR could fail, possibly taking the database cluster with it. The symptom when this occurred was that one or more nodes would fail with a DR buffer overflow error. This issue has now been corrected.

  • Previously, using the JDBC method getFloat() to retrieve a negative value or zero (<=0) would result in the JDBC interface throwing an exception. This issue has been resolved.

7. Release V6.0 (January 26, 2016)

7.1. Updated operating system and software requirements

The operating system and software requirements for VoltDB have been updated based on changes to the supported versions of the underlying technologies. Specifically:

  • Ubuntu 10.04, Red Hat and CentOS releases prior to 6.6, and OS X 10.7 are no longer supported. The supported operating system versions are CentOS 6.6, CentOS 7.0, RHEL 6.6, RHEL 7.0, and Ubuntu 12.04 and 14.04, with support for OS X 10.8 and later as a development platform.

  • The VoltDB server process requires Java 8. The Java client library supports both Java 7 and 8.

  • The minimum required version of Python for the Python client and the VoltDB command line utilities has been raised from 2.5 to 2.6.

7.2. Memory resource monitoring is on by default

Resource monitoring is enabled by default with a memory limit of 80%. If memory usage exceeds this limit, the database is placed in read-only mode until usage drops below the limit. You can alter the resource limits in the deployment file. See the section on resource monitoring in the VoltDB Administrator's Guide for details.
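For example, the following deployment file fragment lowers the limit to 70% (a minimal sketch; the value is illustrative):

<systemsettings>
    <resourcemonitor>
        <memorylimit size="70%"/>
    </resourcemonitor>
</systemsettings>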

7.3. New geospatial datatypes and functions

VoltDB now supports two new datatypes, GEOGRAPHY and GEOGRAPHY_POINT, and several new functions optimized for geospatial data. These datatypes are fully integrated in the VoltDB durability features such as snapshots and command logging. However, tables containing geospatial columns are not currently supported for export or database replication (DR). Integration of these capabilities will be added in a future release.

7.4. Kerberos support for VoltDB Management Center and JSON interface

VoltDB now supports Kerberos security for the VoltDB Management Center (VMC) and the JSON interface. To allow access from VMC and JSON, the server JAAS login configuration must include two additional entries for the Java Generic Security Service (JGSS): one for the VoltDB service principal and one for the server's HTTP service principal.
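As a sketch, the two additional JAAS entries might look like the following, using the standard Java JGSS entry names (the keytab path and principal names are illustrative assumptions; consult the Kerberos documentation for the exact configuration):

com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true keyTab="/etc/krb5.keytab" doNotPrompt=true
    principal="service/voltdb@MYCOMPANY.LAN" storeKey=true;
};

com.sun.security.jgss.accept {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true keyTab="/etc/krb5.keytab" doNotPrompt=true
    principal="HTTP/myserver.mycompany.lan@MYCOMPANY.LAN" storeKey=true;
};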

7.5. JMX support deprecated

The VoltDB Enterprise Manager, which was deprecated in V5.0, has been removed from the kit. JMX support, which was added for the Enterprise Manager, is now deprecated. See the chapter on database monitoring in the VoltDB Administrator's Guide for alternative ways to monitor your database.

7.6. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • It was possible in an XDCR environment for the replay of binary logs from one cluster to interfere with the local transactions on the other cluster, resulting in high latency. The application of binary logs has been tuned to reduce the impact on the local client workload.

  • There was a condition where, after using database replication (DR), if one of the clusters stopped and recovered, it could fail with a ConcurrentModificationException exception. This condition was caused by the partitions used for DR changing while the cluster was down, leaving the partition mapping from one cluster to the other out of sync. This issue has been resolved.

  • Another rare condition related to database replication (DR) involved certain indexes with the columns in a particular order where, if one of the columns contained a null value and the record was updated or deleted, replication would stop. This issue has been resolved.

  • The sizing worksheet (available in the Schema tab of the VoltDB Management Center) was prone to overestimate the minimum size of large (greater than 63 bytes) VARCHAR and VARBINARY columns. This issue has been resolved.

  • Previously, the VoltDB planner would accept ambiguous column references, where a column name shared by two or more tables or aliases appeared without a prefix. This behavior has been corrected to comply with the SQL standard. References to ambiguous column names now must include a disambiguating prefix.

Known Limitations

The following are known limitations to the current release of VoltDB. Workarounds are suggested where applicable. However, it is important to note that these limitations are considered temporary and are likely to be corrected in future releases of the product.

1. Command Logging

1.1. Changing the deployment configuration when recovering command logs can result in unexpected settings.

There is an issue where, if the command log contains schema changes (performed through interactive DDL statements, voltadmin update, or @UpdateApplicationCatalog), recovery uses the previous deployment file settings, even if an alternate deployment file is specified on the voltdb recover command line. Then, after the database recovers, a subsequent schema update can cause the deployment settings specified on the command line to take effect.

Until this issue is resolved, the safest workaround is to perform the voltdb recover operation without modifying the current deployment file, then make any deployment changes with the voltadmin update command after the database has started.
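For example, a minimal sketch of the workaround (file names are hypothetical; this assumes voltadmin update accepts the deployment file, as described above):

$ voltdb recover --deployment=deployment.xml
# ... once the database is running, from another terminal:
$ voltadmin update new-deployment.xml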

1.2. Command logs can only be recovered to a cluster of the same size.

To ensure complete and accurate restoration of a database, recovery using command logs can only be performed to a cluster with the same number of unique partitions as the cluster that created the logs. If you restart and recover to the same cluster with the same deployment options, there is no problem. But if you change the deployment options for number of nodes, sites per host, or K-safety, recovery may not be possible.

For example, if a four node cluster is running with four sites per host and a K-safety value of one, the cluster has two copies of eight unique partitions (4 X 4 / 2). If one server fails, you cannot recover the command logs from the original cluster to a new cluster made up of the remaining three nodes, because the new cluster only has six unique partitions (3 X 4 / 2). You must either replace the failed server to reinstate the original hardware configuration or otherwise change the deployment options to match the number of unique partitions. (For example, increasing the sites per host to eight and K-safety to two.)

1.3. Do not use the subfolder name "segments" for the command log snapshot directory.

VoltDB reserves the subfolder "segments" under the command log directory for storing the actual command log files. Do not add, remove, or modify any files in this directory. In particular, do not set the command log snapshot directory to a subfolder "segments" of the command log directory, or else the server will hang on startup.

2. Database Replication

2.1. Some DR data may not be delivered if master database nodes fail and rejoin in rapid succession.

Because DR data is buffered on the master database and then delivered asynchronously to the replica, there is always the danger that data does not reach the replica if a master node stops. This situation is mitigated in a K-safe environment by all copies of a partition buffering on the master cluster. Then if a sending node goes down, another node on the master database can take over sending logs to the replica. However, if multiple nodes go down and rejoin in rapid succession, it is possible that some buffered DR data — from transactions when one or more nodes were down — could be lost when another node with the last copy of that buffer also goes down.

If this occurs and the replica recognizes that some binary logs are missing, DR stops and must be restarted.

To avoid this situation, especially when cycling through nodes for maintenance purposes, the key is to ensure that all buffered DR data is transmitted before stopping the next node in the cycle. You can do this using the @Statistics system procedure to make sure the last ACKed timestamp (using @Statistics DR on the master cluster) is later than the timestamp when the previous node completed its rejoin operation.
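For example, the DR statistics can be retrieved with sqlcmd on the master cluster and the ACK timestamps inspected in the results (a minimal sketch):

$ sqlcmd
1> exec @Statistics DR 0;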

2.2. Avoid bulk data operations within a single transaction when using database replication

Bulk operations, such as large deletes, inserts, or updates, are possible within a single stored procedure. However, if the binary logs generated for DR are larger than 45MB, the operation will fail. To avoid this situation, it is best to break up large bulk operations into multiple, smaller transactions. A general rule of thumb is to multiply the size of the table schema by the number of affected rows. For deletes and inserts, this value should be under 45MB to avoid exceeding the DR binary log size limit. For updates, this number should be under 22.5MB (because the binary log contains both the starting and ending row values for updates). For example, if each row is roughly 1KB, a single transaction deleting 50,000 rows generates about 50MB of binary log and will fail, whereas two transactions of 25,000 rows each stay safely under the limit.

2.3. Database replication ignores resource limits

There are a number of VoltDB features that help manage the database by constraining memory size and resource utilization. These features are extremely useful in avoiding crashes as a result of unexpected or unconstrained growth. However, these features could interfere with the normal operation of DR when passing data from one cluster to another, especially if the two clusters are different sizes. Therefore, as a general rule of thumb, DR overrides these features in favor of maintaining synchronization between the two clusters.

Specifically, DR ignores any resource monitor limits defined in the deployment file when applying binary logs on the consumer cluster. DR also ignores any partition row limits defined in the database schema when applying binary logs. This means, for example, if the replica database in passive DR has less memory or fewer unique partitions than the master, it is possible that applying binary logs of transactions that succeeded on the master could cause the replica to run out of memory. Note that these resource monitor and table row limits are applied to any original transactions local to the cluster (for example, transactions on the master database in passive DR).

3. Cross Datacenter Replication (XDCR)

3.1. Avoid replicating tables without a unique index.

Part of the replication process for XDCR is to verify that the record's starting and ending states match on both clusters, otherwise known as conflict resolution. To do that, XDCR must find the record first. Finding uniquely indexed records is efficient; finding non-unique records is not and can impact overall database performance.

To make you aware of possible performance impact, VoltDB issues a warning if you declare a table as a DR table and it does not have a unique index.

3.2. When starting XDCR for the first time, only one database can contain data.

You cannot start XDCR if both databases already have data in the DR tables. Only one of the two participating databases can have preexisting data when DR starts for the first time.

3.3. During the initial synchronization of existing data, the receiving database is paused.

When starting XDCR for the first time, where one database already contains data, a snapshot of that data is sent to the other database. While receiving and processing that snapshot, the receiving database is paused. That is, it is in read-only mode. Once the snapshot is complete and the two databases are synchronized, the receiving database is automatically unpaused, resuming normal read/write operations.

3.4. A large number of multi-partition write transactions may interfere with the ability to restart XDCR after a cluster stops and recovers.

Normally, XDCR will automatically restart where it left off after one of the clusters stops and recovers from its command logs (using the voltdb recover command). However, if the workload is predominantly multi-partition write transactions, a failed cluster may not be able to restart XDCR after it recovers. In this case, XDCR must be restarted from scratch, using the content from one of the clusters as the source for synchronizing and recreating the other cluster (using the voltdb create --force command) without any content in the DR tables.

3.5. A TRUNCATE TABLE transaction will be reported as a conflict with any other write operation to the same table.

When using XDCR, if the binary log from one cluster includes a TRUNCATE TABLE statement and the other cluster performs any write operation to the same table before the binary log is processed, the TRUNCATE TABLE operation will be reported as a conflict. Note that currently DELETE operations always supersede other actions, so the TRUNCATE TABLE will be executed on both clusters.

3.6. Exceeding a LIMIT PARTITION ROWS constraint can generate multiple conflicts

It is possible to place a limit on the number of rows that any partition can hold for a specific table using the LIMIT PARTITION ROWS clause of the CREATE TABLE statement. When close to the limit, transactions on either or both clusters can exceed the limit simultaneously, resulting in a potentially large number of delete operations that then generate conflicts when the associated binary log reaches the other cluster.

3.7. IDs generated by the VoltProcedure.getUniqueId method are unique within a cluster, not across clusters.

VoltDB provides a way to generate a deterministically unique ID within a stored procedure using the getUniqueId method. This method guarantees uniqueness within the current cluster. However, the method could generate the same ID on two distinct database clusters. Consequently, when using XDCR, you should combine the return values of VoltProcedure.getUniqueId with VoltProcedure.getClusterId, which returns the current cluster's unique DR ID, to generate IDs that are unique across all clusters in your environment.
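A minimal sketch of a stored procedure combining the two values (the table, columns, and composite string format are illustrative assumptions):

import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

public class RecordEvent extends VoltProcedure {
    public final SQLStmt insert = new SQLStmt(
        "INSERT INTO events (global_id, payload) VALUES (?, ?);");

    public VoltTable[] run(String payload) {
        // getUniqueId() is unique only within this cluster; prefixing it
        // with getClusterId() (the cluster's DR ID) makes the combined
        // value unique across all XDCR clusters.
        String globalId = getClusterId() + "-" + getUniqueId();
        voltQueueSQL(insert, globalId, payload);
        return voltExecuteSQL(true);
    }
}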

4. Export

4.1. Synchronous export in Kafka can use up all available file descriptors and crash the database.

A bug in the Apache Kafka client can result in file descriptors being allocated but not released if the producer.type attribute is set to "sync" (which is the default). The consequence is that the system eventually runs out of file descriptors and the VoltDB server process will crash.

Until this bug is fixed, use of synchronous Kafka export is not recommended. The workaround is to set the Kafka producer.type attribute to "async" using the VoltDB export properties.
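For example, in the deployment file (the target name and broker address are hypothetical):

<export>
    <configuration enabled="true" target="eventlog" type="kafka">
        <property name="metadata.broker.list">kafkahost:9092</property>
        <property name="producer.type">async</property>
    </configuration>
</export>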

4.2. Export does not currently support geospatial datatypes.

VoltDB does not currently support exporting the geospatial datatypes GEOGRAPHY and GEOGRAPHY_POINT. Do not specify a table including columns of these types in an EXPORT TABLE statement.

5. Import

5.1. Data may be lost if a Kafka broker stops during import.

If, while Kafka import is enabled, the Kafka broker that VoltDB is connected to stops (for example, if the server crashes or is taken down for maintenance), some messages may be lost between Kafka and VoltDB. To ensure no data is lost, we recommend you disable VoltDB import before taking down the associated Kafka broker. You can then re-enable import after the Kafka broker comes back online.

5.2. Kafka import may be reset, resulting in duplicate entries.

There is an issue with Kafka and the VoltDB Kafka importer where the current pointer in the Kafka queue gets reset to zero. The consequence of this event is that items in the queue get imported a second time resulting in duplicate entries. This issue will be addressed in an upcoming release. In the meantime, if you are using the Kafka importer, contact [email protected] for details.

6. SQL and Stored Procedures

6.1. Comments containing unmatched single quotes in multi-line statements can produce unexpected results.

When entering a multi-line statement at the sqlcmd prompt, if a line ends in a comment (indicated by two hyphens) and the comment contains an unmatched single quote character, the following lines of input are not interpreted correctly. Specifically, the comment is incorrectly interpreted as continuing until the next single quote character or a closing semi-colon is read. This is most likely to happen when reading in a schema file containing comments. This issue is specific to the sqlcmd utility.

A fix for this condition is planned for an upcoming point release.
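For example, the apostrophe in the following comment causes sqlcmd to misread the remainder of the statement:

CREATE TABLE users (
    id BIGINT NOT NULL,   -- the user's numeric key
    name VARCHAR(32)
);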

6.2. Do not use assertions in VoltDB stored procedures.

VoltDB currently intercepts assertions as part of its handling of stored procedures. Attempts to use assertions in stored procedures for debugging or to find programmatic errors will not work as expected.

6.3. The UPPER() and LOWER() functions currently convert ASCII characters only.

The UPPER() and LOWER() functions return a string converted to all uppercase or all lowercase letters, respectively. However, for the initial release, these functions only operate on characters in the ASCII character set. Other case-sensitive UTF-8 characters in the string are returned unchanged. Support for all case-sensitive UTF-8 characters will be included in a future release.
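For example, given a hypothetical column containing the value 'résumé':

SELECT UPPER(last_name) FROM customers WHERE id = 1;
-- Returns 'RéSUMé' in this release: the ASCII letters are folded
-- to uppercase, but the non-ASCII 'é' characters are unchanged.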

7. Client Interfaces

7.1. Avoid using decimal datatypes with the C++ client interface on 32-bit platforms.

There is a problem with how the math library used to build the C++ client library handles large decimal values on 32-bit operating systems. As a result, the C++ library cannot serialize and pass Decimal datatypes reliably on these systems.

Note that the C++ client interface can send and receive Decimal values properly on 64-bit platforms.

8. Enterprise Manager

Important

The VoltDB Enterprise Manager is deprecated. It is supported for existing customers but is not recommended for new deployments and will be removed in a future release. The VoltDB Management Center, which has improved and extended management and monitoring capabilities built directly into the VoltDB database server, is the recommended replacement for the Enterprise Manager. See the Administrator's Guide for more information on the VoltDB Management Center.

8.1. Manual snapshots not copied to the Management Server properly.

Normally, manual snapshots (those created with the Take a Snapshot button) are copied to the management server. However, if automated snapshots are also being created and copied to the management server, it is possible for an automated snapshot to overwrite the manual snapshot.

If this happens, the workaround is to turn off automated snapshots (and their copying) temporarily. To do this, uncheck the box for copying snapshots, set the frequency to zero, and click OK. Then re-open the Edit Snapshots dialog and take the manual snapshot. Once the snapshot is complete and copied to the management server (that is, the manual snapshot appears in the list on the dialog box), you can re-enable copying and automated snapshots.

8.2. Old versions of Enterprise Manager files are not deleted from the /tmp directory

When the Enterprise Manager starts, it unpacks files that the web server uses into a subfolder of the /tmp directory. It does not delete these files when it stops. Under normal operation, this is not a problem. However, if you upgrade to a new version of the Enterprise Edition, files for the new version become intermixed with the older files and can result in the Enterprise Manager starting databases using the wrong version of VoltDB. To avoid this situation, make sure these temporary files are deleted before starting a new version of VoltDB Enterprise Manager.

The /tmp directory is emptied every time the server reboots. So the simplest workaround is to reboot your management server after you upgrade VoltDB. Alternately, you can delete these temporary files manually by deleting the winstone subfolders in the /tmp directory:

$ rm -vr /tmp/winstone*

8.3. Enterprise Manager configuration files are not upwardly compatible.

When upgrading VoltDB Enterprise Edition, please note that the configuration files for the Enterprise Manager are not upwardly compatible. New product features may make existing database and/or deployment definitions unusable. It is always a good idea to delete existing configuration information before upgrading. You can delete the configuration files by deleting the ~/.voltdb directory. For example:

$ rm -vr ~/.voltdb

8.4. Enterprise Manager cannot start two databases on the same server.

In the past, it was possible to run two (or more) databases on a single physical server by defining two logical servers with the same IP address and making the ports for each database unique. However, as a result of internal optimizations introduced in VoltDB 2.7, this technique no longer works when using the Enterprise Manager.

We expect to correct this limitation in a future release. Note that it is still possible to start multiple databases on a single server manually using the VoltDB shell commands.

8.5. The Enterprise Manager cannot start or manage a replica database for database replication.

Starting with VoltDB 5.1, database replication (DR) has changed and the VoltDB Enterprise Manager can no longer correctly configure, start, or manage a replica database. The recommended method is to start the database manually and use the built-in VoltDB Management Center to manage the database by connecting to the cluster nodes directly on the HTTP port (8080 by default).

Implementation Notes

The following notes provide details concerning how certain VoltDB features operate. The behavior is not considered incorrect. However, this information can be important when using specific components of the VoltDB product.

1. VoltDB Management Center

1.1. Schema updates clear the stored procedure data table in the Management Center Monitor section

Any time the database schema or stored procedures are changed, the data table showing stored procedure statistics at the bottom of the Monitor section of the VoltDB Management Center is reset. As soon as new invocations of the stored procedures occur, the statistics table shows new values based on performance after the schema update. Until invocations occur, the procedure table is blank.

2. SQL

2.1. You cannot partition a table on a column defined as ASSUMEUNIQUE.

The ASSUMEUNIQUE attribute is designed for identifying columns in partitioned tables where the column values are known to be unique but the table is not partitioned on that column, so VoltDB cannot verify complete uniqueness across the database. Using interactive DDL, you can create a table with a column marked as ASSUMEUNIQUE, but if you try to partition the table on the ASSUMEUNIQUE column, you receive an error. The solution is to drop and add the column using the UNIQUE attribute instead of ASSUMEUNIQUE.

2.2. Adding or dropping column constraints (UNIQUE or ASSUMEUNIQUE) is not supported by the ALTER TABLE ALTER COLUMN statement.

You cannot add or remove a column constraint such as UNIQUE or ASSUMEUNIQUE using the ALTER TABLE ALTER COLUMN statement. Instead, to add or remove such constraints, you must drop and then re-add the modified column. For example:

ALTER TABLE employee DROP COLUMN empID;
ALTER TABLE employee ADD COLUMN empID INTEGER UNIQUE;

2.3. Do not use UPDATE to change the value of a partitioning column

For partitioned tables, the value of the column used to partition the table determines what partition the row belongs to. If you use UPDATE to change this value and the new value belongs in a different partition, the UPDATE request will fail and the stored procedure will be rolled back.

Updating the partition column value may or may not cause the record to be repartitioned (depending on the old and new values). However, since you cannot determine if the update will succeed or fail, you should not use UPDATE to change the value of partitioning columns.

The workaround, if you must change the value of the partitioning column, is to use both a DELETE and an INSERT statement to explicitly remove and then re-insert the desired rows.
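For example, assuming a hypothetical sessions table partitioned on user_id:

-- Move row 55 to a new partitioning value without using UPDATE:
DELETE FROM sessions WHERE user_id = 1001 AND session_id = 55;
INSERT INTO sessions (user_id, session_id, data)
    VALUES (2002, 55, 'values copied from the deleted row');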

2.4. Certain SQL syntax errors result in the error message "user lacks privilege or object not found".

If you refer to a table or column name that does not exist, VoltDB reports the error "user lacks privilege or object not found". This can happen, for example, if you misspell a table or column name.

Another situation where this occurs is if you mistakenly use double quotation marks to enclose a string literal (such as WHERE ColumnA="True"). ANSI SQL requires single quotes for string literals and reserves double quotes for object names. In the preceding example, VoltDB interprets "True" as an object name, cannot resolve it, and issues the "user lacks privilege" error.

The workaround, if you receive this error, is to look for misspelled table or column names, or for string literals delimited by double quotes, in the offending SQL statement.

2.5. Ambiguous column references no longer allowed.

Starting with VoltDB 6.0, ambiguous column references are no longer allowed. For example, if both the Customer and Placedorder tables have a column named Address, the reference to Address in the following SELECT statement is ambiguous:

SELECT OrderNumber, Address FROM Customer, Placedorder
   . . .

Previously, VoltDB would select the column from the leftmost table (Customer, in this case). Ambiguous column references are no longer allowed; you must use table prefixes to disambiguate identical column names. For example, you would specify the column in the preceding statement as Customer.Address.

A corollary to this change is that a column declared in a USING clause can now be referenced using a prefix. For example, the following statement uses the prefix Customer.Address to disambiguate the column selection from a possibly similarly named column belonging to the Supplier table:

SELECT OrderNumber, Vendor, Customer.Address
   FROM Customer JOIN Placedorder USING (Address), Supplier
    . . .

3. Runtime

3.1. File Descriptor Limits

VoltDB opens a file descriptor for every client connection to the database. In normal operation, this use of file descriptors is transparent to the user. However, if there are an inordinate number of concurrent client connections, or clients open and close many connections in rapid succession, it is possible for VoltDB to exceed the process limit on file descriptors. When this happens, new connections may be rejected or other disk-based activities (such as snapshotting) may be disrupted.

In environments where there are likely to be an extremely large number of connections, you should consider increasing the operating system's per-process limit on file descriptors.
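For example, on Linux you can raise the limit in the shell that starts the VoltDB server process (the value shown is illustrative):

$ ulimit -n 65536    # raise the per-process limit before starting VoltDB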