Release Notes



Product: VoltDB

Version: 5.2.12

Release Date: March 9, 2018

This document provides information about known issues and limitations to the current release of VoltDB. If you encounter any problems not listed below, please be sure to report them to [email protected]. Thank you.

Upgrading From Older Versions

For customers upgrading from pre-5.0 releases of VoltDB, please see the V4.0 Upgrade Notes for special considerations when upgrading from previous major versions. Otherwise, the process for upgrading from a previous version of VoltDB is as follows:

  1. Place the database in admin mode (using voltadmin pause).

  2. Perform a manual snapshot of the database (using voltadmin save).

  3. Shutdown the database (using voltadmin shutdown).

  4. Upgrade the VoltDB software.

  5. Restart the database (using the voltdb create action).

  6. Reload any Java stored procedures and the database schema (using the sqlcmd directives load classes and file).

  7. Restore the snapshot created in Step #2 (using voltadmin restore).

  8. Return the database to normal operations (using voltadmin resume).
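The steps above can be sketched as a command sequence. The host names, file names, and snapshot nickname below are hypothetical; adjust them for your installation:

```shell
# Steps 1-3: pause, snapshot, and shut down the running database
voltadmin pause
voltadmin save /tmp/voltdbsave upgradesnap
voltadmin shutdown

# Step 4: install the new VoltDB software on every node

# Step 5: restart the (empty) database with the create action
voltdb create --deployment=deployment.xml --host=voltsvr1 &

# Step 6: reload stored procedure classes and the schema
echo "load classes myprocs.jar; file myschema.sql;" | sqlcmd

# Steps 7-8: restore the snapshot and resume normal operation
voltadmin restore /tmp/voltdbsave upgradesnap
voltadmin resume
```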

Changes Since the Last Release

Users of previous versions of VoltDB should take note of the following changes that might impact their existing applications.

1. Release V5.2.12 (March 9, 2018)

1.1.

Recent improvements

The following limitations in previous versions have been resolved:

  • There was an issue where, under certain circumstances, recovering a database from command logs could stall, resulting in the recovery process never completing. When this occurred, the log might contain a warning indicating that a Java thread is "parking" waiting for a condition to clear. This problem has been fixed.

  • If a VoltDB cluster does not receive a message from a node within a certain period of time, the node is dropped from the cluster. This is called the heartbeat timeout. However, previously if the heartbeat timeout was configured to 10 seconds or less, there was no warning before the node was dropped. Now one or more warnings are issued as the timeout approaches.

  • There was an issue where, if during the initiation of database replication (DR) the initial synchronization snapshot was started but the receiving cluster failed and restarted before that snapshot completed, all future snapshots on the master cluster were blocked. This included command log snapshots, resulting in the command logs never getting truncated. This issue has been resolved.

2. Release V5.2.11 (August 28, 2016)

The following issue is fixed in this release.

2.1.

Eliminate delay when promoting a replica cluster

Previously, promoting a replica cluster to a fully operational read/write database could require a noticeable delay. In the worst case, the promote operation could take minutes to complete. This delay has been eliminated and the promote action should now complete within a second or two.

3. Release V5.2.10 (August 25, 2016)

3.1.

Recent improvements

The following limitations in previous versions have been resolved:

  • There was an edge case when using database replication (DR) where if, after DR started, a K-safe cluster stopped and recovered, then one node failed and rejoined, and finally another node on the cluster stopped, the first node would also stop. This issue has been resolved.

  • Previously, an UPSERT statement specifying a subset of columns would update the values of all columns in the table row. This issue has been corrected.
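To illustrate the corrected behavior described in the second bullet (the table and values here are hypothetical):

```sql
CREATE TABLE emp (
   empid INTEGER NOT NULL PRIMARY KEY,
   name  VARCHAR(32),
   dept  VARCHAR(16)
);
INSERT INTO emp VALUES (101, 'Jones', 'Sales');

-- This UPSERT names only a subset of the columns. Per the fix,
-- only the listed columns are affected on the existing row;
-- previously every column in the row was updated.
UPSERT INTO emp (empid, name) VALUES (101, 'Smith');
```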

4. Release V5.2.9 (March 10, 2016)

4.1.

Recent improvements

The following limitations in previous versions have been resolved:

  • There was a condition where, if one of the clusters using database replication (DR) stopped and recovered, the cluster could fail with a ConcurrentModificationException. This condition was caused by the partitions used for DR changing while the cluster was down, leaving the partition mapping from one cluster to the other out of sync. This issue has been resolved.

  • In rare cases, a master database could report a null pointer exception and stall replication if the consumer cluster encountered a change in its topology (that is, a node failed or rejoined). A common symptom before the master cluster stalled was that it reported a series of informational messages that DR was "discarding ctrl message". This issue has been resolved.

5. Release V5.2.8 (December 10, 2015)

The following issue is fixed in this release.

5.1.

COUNT(DISTINCT column) of an inline VARCHAR column could result in an error, node failure, or incorrect answer

Previously, if a query attempted to return a distinct count of an inline VARCHAR column, it could result in an error that stopped the database process or in an incorrect result. VARCHAR columns are stored inline if they are declared as less than 64 bytes, such as VARCHAR(63 BYTES) or VARCHAR(15). This problem has been corrected.
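For example, with a hypothetical table whose VARCHAR column is short enough to be stored inline:

```sql
CREATE TABLE sessions (
   userid  BIGINT NOT NULL,
   country VARCHAR(15)   -- under 64 bytes, so stored inline
);

-- Previously this query could produce an error or a wrong count;
-- it now returns the correct distinct count.
SELECT COUNT(DISTINCT country) FROM sessions;
```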

6. Release V5.2.7 (October 20, 2015)

6.1.

Rejoin failure due to clock skew causes subsequent rejoins and snapshots to fail

Previously, if a rejoin operation failed due to the difference in clock time between nodes exceeding the allowable limit (100 milliseconds), any subsequent attempt to rejoin a node or generate a snapshot would fail as well. This problem has been corrected.

7. Release V5.2.6 (August 28, 2015)

7.1.

Bug Fixes

  • Previously the VoltDB Management Center SQL Query tab would not display null values properly for floating-point columns where Null is the default value. This problem has been fixed.

  • Previously, using the INSERT INTO... FROM statement to export data from a regular table into an export table could generate an array out of bounds error during the operation if the export table contained a TIMESTAMP column, resulting in not all rows being exported. This problem has been fixed.

8. Release V5.2.5 (August 5, 2015)

8.1.

Unable to recover DR replica

There was an issue where, under certain conditions, attempting to recover a promoted replica from command logs would fail, reporting the fatal error "Haven't implemented more than one DC yet". This problem is now fixed.

9. Release V5.2.4 (June 26, 2015)

9.1.

Improved memory management for snapshot processing

Changes have been made to improve memory utilization during snapshot processing.

9.2.

Race condition in DR buffer management

There was a race condition associated with database replication (DR) buffer management that could, in rare cases, cause a segmentation fault. This problem has been fixed.

10. Release V5.2.3 (June 9, 2015)

10.1.

Change to VoltDB overflow file handling

The handling of disk I/O for overflow data, such as database replication (DR) and export, has been changed to use standard Java non-blocking file I/O. This change allows VoltDB to better catch and report unusual conditions such as when a disk becomes full or goes offline unexpectedly.

11. Release V5.2.2 (May 15 2015)

11.1.

Excessive memory use when overflowing queued export or DR data corrected

There was an issue in earlier releases where, if the target of export or database replication (DR) stalled, the sending cluster buffered queued data to disk. However, the associated memory was not properly freed, so memory usage would increase. If the service buffered data to disk for an extended period of time, the server process could run out of memory.

This issue has been resolved and memory associated with data buffered to disk is now released appropriately. Note, however, that even though excessive memory usage is no longer a problem, you should still resolve issues with stalled downstream systems when using export or DR, because buffered data could eventually exceed disk storage capacity.

11.2.

New C++ client supports SHA-256 hashing

The C++ client library has been updated to support SHA-256 hashing of passwords when authenticating to servers with security enabled. By default, the client supports past and present server versions by using SHA-1 hashing. However, when connecting to VoltDB 5.2 and later servers, you can use SHA-256 hashing by specifying the hash type in the client configuration. For example:

voltdb::ClientConfig config("myusername", 
                            "mypassword", 
                            voltdb::HASH_SHA256);
voltdb::Client client = voltdb::Client::create(config);

Now both the Java and C++ client libraries support SHA-256 hashing. The new C++ client is available from the VoltDB client downloads page.

12. Release V5.2.1 (April 30, 2015)

12.1.

Support for SHA-2 in the voltdb mask command

VoltDB 5.2 introduced use of SHA-2 hashing for authentication. This release brings the voltdb mask function up to date with the new authentication scheme. For customers using the mask function, be sure to re-hash your deployment file using the 5.2.1 voltdb mask command and use the newly hashed deployment file when starting the database to ensure all command utilities can authenticate properly.

13. Release V5.2 (April 29, 2015)

13.1.

Ability for database replication (DR) to resume across cluster outages

Previously, database replication (DR) was able to continue despite individual node failures (in a K-safe environment). However failure of either the master or replica cluster would force a restart of DR. Beginning with 5.2, DR can resume across cluster failures when either the master or replica is recovered from command logs. See the chapter on "Database Replication" in the Using VoltDB manual for details.

13.2.

Secure export to Hadoop using Kerberos

The HTTP export connector now supports the use of Kerberos authentication when exporting to a WebHDFS endpoint that is configured to use Kerberos. See the section on using the HTTP export connector in the Using VoltDB manual for details.

13.3.

Support for partial indexes

VoltDB now supports partial indexes. That is, the index definition can contain a WHERE clause limiting the rows that are included in the index. For example:

CREATE INDEX completed_tasks 
    ON tasks (task_id, startdate, enddate)
    WHERE enddate IS NOT NULL;

For the initial release of partial indexes, there are certain limitations on when and where such indexes and the tables associated with them can be modified. For now, you cannot use the ALTER TABLE statement to modify a table with a partial index. This limitation is expected to be relaxed in a future release.

13.4.

New VoltDB Management Center features

The Management Center, VoltDB's web-based management console, continues to be extended and improved. This release contains two major new features:

  • New Idle Time graph — The Monitor tab contains a new graph, the partition idle time graph, which shows the amount of work being done by each partition on the current server. The graph plots the percentage of time each partition is idle. 100% indicates the partition is doing no work (processing no stored procedures or ad hoc queries); 0% indicates the partition is constantly in use. The graph also includes lines for the local multi-partition coordinator and the minimum and maximum idle time for the cluster as a whole.

  • Ability to change configuration settings in the Admin tab — The Admin tab now lets you change deployment settings that are configurable at runtime, including export properties, automated snapshots, security, and selected system settings. Click on the pencil icon next to a property to edit it. Note that if security is enabled, only users with the ADMIN permission are allowed to view and edit the Admin tab settings.

13.5.

New bitwise functions

VoltDB now supports several new functions for performing bitwise operations on BIGINT values. The new functions support standard binary operands such as AND, OR, XOR, and NOT as well as bit shifting operations. See the reference pages for the BITAND(), BITNOT(), BITOR(), BITXOR(), BIT_SHIFT_LEFT(), and BIT_SHIFT_RIGHT() functions in the Using VoltDB manual for details.
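For example, assuming a hypothetical table settings with BIGINT columns flags and mask, the new functions can be combined in a single query:

```sql
SELECT BITAND(flags, 4)           AS masked_bit,
       BITOR(flags, 1)            AS with_low_bit,
       BITXOR(flags, mask)        AS toggled,
       BITNOT(flags)              AS inverted,
       BIT_SHIFT_LEFT(flags, 2)   AS shifted_left,
       BIT_SHIFT_RIGHT(flags, 2)  AS shifted_right
FROM settings;
```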

13.6.

New HEX() function

Another new function, HEX(), converts a BIGINT value into its hexadecimal representation as a string. See the reference page for the HEX() function in the Using VoltDB manual for details.
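For example, assuming a hypothetical events table with a BIGINT id column:

```sql
-- HEX() converts each BIGINT value to its hexadecimal
-- string representation (for instance, 255 becomes 'FF').
SELECT id, HEX(id) AS id_hex FROM events;
```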

13.7.

Support for SHA-2

VoltDB now supports SHA-2 hashing of credentials between the Java and C++ client libraries and the server. When you pass a username and password, the updated client library uses a SHA-2 hash of the credentials. On the server side, the VoltDB server accepts both SHA-1 (sent by previous versions of the client) and SHA-2, so both current and previous versions of the client libraries continue to work with the latest server release.

13.8.

New voltadmin command to stop individual servers

The VoltDB command line utility, voltadmin, now supports the stop command. The voltadmin stop command stops the VoltDB server process on the specified node. Note that the stop command can only be used on a K-safe cluster and will not intentionally shutdown the database. That is, the command will only stop a node if there are enough nodes left for the cluster to remain viable.
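For example, to stop the server process on one node of a K-safe cluster (the host name here is hypothetical, and the exact argument form is the target node as known to the cluster):

```shell
voltadmin stop voltsvr2
```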

13.9.

Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • In earlier releases, the Java client library interpreted the default call timeout value incorrectly, resulting in a much longer timeout period than the expected two minutes. This problem has been corrected. Note, however, that long-running procedures that completed in previous versions may now have their calls timed out by the client, since the default timeout period is properly enforced. (The procedure itself may or may not complete, but the client application will not know, since it no longer listens for a response after the timeout.)

  • Previously, database replication (DR) ignored the setting for the external interface on the command line (that is, the --externalinterface flag). This issue has been corrected and now the interface used for the replication port comes from, in order of priority, A.) the interface specified using the --replication command line flag, B.) the interface specified using the --externalinterface command line flag, or if neither of the preceding are specified, C.) all available interfaces.

14. Release V5.1.2 (April 16, 2015)

VoltDB 5.1.2 is a patch release that provides performance and stability improvements for Database Replication (DR).

14.1.

Improved performance for initial DR snapshot.

When database replication starts, a snapshot is sent from the master database to the replica. In this release several I/O improvements have been made to improve the performance and reliability of the initial DR snapshot.

14.2.

Improved management of DR buffers

There was an issue with how multiple buffers were grouped and managed in DR, which could result in decreased replication throughput. This issue has been resolved.

15. Release V5.1.1 (April 12, 2015)

VoltDB 5.1.1 is a patch release that fixes an issue introduced in 5.1.

15.1.

Bug fix: Excessive CPU usage on idle database

Changes in VoltDB 5.1 introduced a process that, when the database was idle, would "spin" making it appear that the database was consuming significant CPU cycles. When the database was active processing queries, the CPU usage would drop to normal levels.

Although not dangerous, this behavior was misleading and is corrected by the current update.

16. Release V5.1 (March 22, 2015)

VoltDB 5.1 introduces several significant new features and enhancements. Existing customers should pay close attention to the following notes to see what if any changes they may want to make to their applications and/or operations to take advantage of the new capabilities.

16.1.

New implementation of Database Replication (DR)

Database Replication (DR) lets you automatically copy updates to database tables from one database (the master) to another (the replica). Starting with VoltDB 5.1, DR has been rewritten to remove any single point of failure, improve performance, and allow new capabilities in the future. New features include:

  • Significantly better performance — rather than a single replication stream, DR now occurs between multiple partitions simultaneously. Also, the new DR uses binary logs of transaction results saving the replica from having to replay the transaction.

  • No single point of failure — By eliminating the DR agent the new DR not only removes a single point of failure, it simplifies DR from an operational perspective.

  • More flexibility — You can now specify which tables you want to replicate rather than having to replicate the entire database. Of course, you can always choose to replicate all of the tables if you like.

For existing DR customers, the new capabilities and the elimination of the DR agent do necessitate some operational changes. Specifically, you must now:

  • Identify the tables participating in DR using the DR TABLE statement in the schema.

  • Configure DR in the deployment files for both the master and the replica clusters.

See the chapter on Database Replication in the Using VoltDB manual for details.
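For example, the schema side of the first requirement might look like this (the table names are hypothetical); the DR connection settings themselves go in each cluster's deployment file:

```sql
-- Declare which tables participate in replication,
-- in the schema on both the master and the replica.
DR TABLE customers;
DR TABLE orders;
```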

16.2.

Ability to export data to multiple streams

VoltDB now allows you to export data to more than one target at a time. By assigning export tables to individual streams and then configuring each stream separately in the deployment file, you can export data to multiple targets simultaneously. For example, you might export deduped sensor data to Hadoop once it has been processed and export alerts regarding unusual events to HTTP for distribution via SMS, email, or other notification service. See the chapter on exporting live data in the Using VoltDB manual for details.

Use of multiple streams does require additional information in the schema and the deployment file. For example, the EXPORT TABLE statement now requires a TO STREAM clause so you can specify the stream to which each export table is directed. However, for backwards compatibility, the old syntax is still supported temporarily to allow customers time to migrate existing applications at their convenience.
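For example, directing two export tables to separately configured streams might look like this (the table and stream names are hypothetical):

```sql
EXPORT TABLE sensor_data TO STREAM hadoopstream;
EXPORT TABLE alerts      TO STREAM httpstream;
```

Each stream named here is then configured independently in the deployment file.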

16.3.

Batch processing of interactive DDL statements

VoltDB 5.0 introduced interactive DDL, eliminating the need for a precompiled application catalog. However, large schema could take a significant time to process interactively. VoltDB 5.1 solves this problem by allowing you to batch DDL statements. If you have your DDL statements in a single file, you can use the file --batch directive in sqlcmd to batch process the DDL statements. For example:

$ sqlcmd
1> file --batch myschema.sql;

If you have a mix of DDL (data definition language) statements and DML (data manipulation language) and directives you can batch process only the DDL statements by enclosing them in a file --inlinebatch directive and the specified end marker. For example:

load classes myprocs.jar;
file --inlinebatch END_OF_BATCH
CREATE PROCEDURE FROM CLASS procs.AddEmp;
CREATE PROCEDURE FROM CLASS procs.ChangeDept;
PARTITION PROCEDURE AddEmp ON TABLE emp COLUMN empid;
PARTITION PROCEDURE ChangeDept ON TABLE emp COLUMN empid;
END_OF_BATCH

Batch processing DDL statements can speed up the processing of those statements by a factor of 10 or more, depending on the number and complexity of the statements and the size of the cluster.

16.4.

New Administrative features in VoltDB Management Center

VoltDB Management Center, the web-based console for managing and monitoring a VoltDB database, now has a tab for administrative functions. On the Admin tab, you can pause and resume the database, save and restore snapshots, as well as review and update the database configuration. If security is enabled for the database, only users with the ADMIN permission can see and use the Admin tab in the Management Center.

16.5.

Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • Previously, if a table containing data was changed to an export table, the data was lost. VoltDB no longer allows you to issue an EXPORT TABLE statement on a table with existing data.

  • In release 5.0, reconfiguring export on a running database while dropping and adding export tables could cause unpredictable export results. This issue is resolved. Export tables can be modified and export streams can be reconfigured on a running database with predictable results.

  • There was an issue with the sqlcmd command line utility where it could occasionally hang during startup and never reach the input prompt. This problem has been resolved.

  • The HTTP port and JSON interface are now enabled by default. Previously, this port was enabled in the default deployment file, but not if you used a custom deployment file. If you want to disable the HTTP port or JSON capability, you must explicitly disable them in the deployment file. For example:

    <httpd enabled="false">
        <jsonapi enabled="false"/>
    </httpd>
  • There was an issue where command logs could not be recovered if the logs included a catalog update using a catalog from a previous VoltDB version. This problem is now fixed. However, recompiling your existing catalogs is recommended whenever you update your VoltDB version.

  • Previously, there was an issue with @UpdateLogging, where spurious startup messages could flood the logs beginning the day after invoking the system procedure to update the Log4J configuration. This problem is now fixed.

  • There was a planning error where a UNION statement within a subquery could result in a null pointer exception when the statement was compiled. This problem is now fixed.

  • Another planning error occurred when compiling a SQL statement, such as a SELECT, where the selection expression listed the same column twice. This problem is now fixed.

17. Release V5.0.2 (February 16, 2015)

This release contains no new features but corrects the following issues from the original 5.0 release.

17.1.

Issues related to using INSERT INTO SELECT with export tables

There was an issue in earlier releases where using an INSERT INTO SELECT statement with an export table as the target for the insert either generated a null pointer exception or did not insert the expected data into the export stream. The issue only applies to INSERT INTO SELECT as an ad hoc query or within a multi-partitioned stored procedure.

  • If the target of an INSERT INTO SELECT statement in a multi-partitioned query is an export table that is not partitioned, the planner would throw a null pointer exception (NPE).

  • If the source of the INSERT INTO SELECT statement in a multi-partitioned query (that is, the table in the SELECT subquery) is a partitioned table, then an insert into an export table may not insert all of the selected rows.

These issues have now been corrected.

17.2.

Database failure when reporting long-running queries

There was an issue in previous versions (starting with VoltDB 4.8) where, if a query ran for a significant amount of time, VoltDB attempted to log a warning. However, generating the warning produced an index out of bounds error and stopped the database.

This issue is now fixed.

17.3.

Lines starting with "file" in sqlcmd incorrectly interpreted as a file directive.

In the original 5.0 release, any sqlcmd input line beginning with "file" (regardless of upper or lowercase) was interpreted as a file directive, even in the middle of a multi-line statement. This would happen, for example, if a CREATE TABLE statement included a column name starting with "file":

CREATE TABLE archive (
   ID INTEGER,
   Directory VARCHAR(128),
   Filename VARCHAR(128)
);

This usually resulted in several errors and the intended statement not being interpreted correctly. This issue is now fixed.

18. Release V5.0 (January 28, 2015)

18.1.

Interactive DDL

The major new feature in VoltDB 5.0 is the ability to enter data definition language (DDL) statements interactively. For example, using sqlcmd on the command line or the VoltDB Management Center SQL Query interface. This makes the process of creating a database and defining the schema more flexible. As part of the support for interactive DDL, the following features have been added:

  • Support for the DROP and ALTER statements for removing and modifying existing schema objects

  • The ability to combine the CREATE PROCEDURE and PARTITION PROCEDURE statements into a single CREATE PROCEDURE statement with a PARTITION ON clause

  • A new system procedure, @UpdateClasses, for adding and removing classes

  • Two corresponding sqlcmd directives, load classes and remove classes, perform this function from the command line
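For example, the combined form of the CREATE PROCEDURE statement, using the same hypothetical class and table names that appear elsewhere in these notes, looks like this:

```sql
-- Replaces the former pair of CREATE PROCEDURE and
-- PARTITION PROCEDURE statements with a single statement.
CREATE PROCEDURE
   PARTITION ON TABLE emp COLUMN empid
   FROM CLASS procs.AddEmp;
```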

Please note that processing DDL interactively can take longer than compiling an application catalog all at once. This is most noticeable when processing a large schema, especially on a multi-node cluster (where each change must be coordinated among the servers).

If you find entering DDL interactively too slow, it is possible to revert to precompiling the schema before starting the database. You have two choices:

  • You can return to using catalogs exclusively, by setting the schema="catalog" attribute in the deployment file.

  • You can compile the initial schema as a catalog, start the database specifying the catalog on the voltdb create command, but leave the deployment file unchanged. In this case, the database starts from the catalog, but you can use interactive DDL to modify the schema and stored procedures once the database is running.

Performance improvements for processing large schemas interactively are expected in upcoming releases.

18.2.

Ability to "trim" rows using LIMIT PARTITION ROWS EXECUTE

The LIMIT PARTITION ROWS constraint now supports an EXECUTE clause that lets you specify a DELETE statement that is executed when the constraint value is exceeded. The EXECUTE clause gives you the ability to automatically "prune" older data when the constraint is reached. See the description of the CREATE TABLE statement in the Using VoltDB manual for details.
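For example, a hypothetical events table that automatically prunes its oldest rows once a partition exceeds 100,000 rows might be declared as follows:

```sql
CREATE TABLE events (
   event_id   BIGINT NOT NULL,
   event_time TIMESTAMP NOT NULL,
   -- When the row limit is exceeded, delete the 1,000 oldest rows.
   LIMIT PARTITION ROWS 100000
      EXECUTE (DELETE FROM events
               ORDER BY event_time, event_id LIMIT 1000)
);
```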

18.3.

Support for HttpFS targets in Hadoop export

The HTTP connector now supports Apache HttpFS (Hadoop HDFS over HTTP) servers as a target when exporting using the WebHDFS protocol. Set the export property httpfs.enable to "true" when exporting to HttpFS servers.
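For example, the property might be set in the deployment file's export configuration like this (the endpoint URL is hypothetical, and the exact surrounding elements depend on your export configuration):

```xml
<export enabled="true" target="http">
    <configuration>
        <property name="endpoint">http://httpfs-host:14000/webhdfs/v1/voltdb/data.csv</property>
        <property name="httpfs.enable">true</property>
    </configuration>
</export>
```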

18.4.

Addition of the ORDER BY clause to the DELETE statement

It is now possible to use the ORDER BY clause with LIMIT and/or OFFSET when performing a DELETE operation. ORDER BY allows you to more selectively remove database rows. For example, the following DELETE query removes the five oldest records, based on a timestamp column:

DELETE FROM events ORDER BY event_time ASC LIMIT 5;

Note that DELETE queries that include the ORDER BY clause must be single-partitioned and the ORDER BY clause must be deterministic. See the description of the DELETE statement in the Using VoltDB manual for details.

18.5.

Bug fixes

In addition to the new features listed above, VoltDB V5.0 includes fixes to several known issues:

  • Previously, there was an undocumented limit of 200 kilobytes to the size of the parameter list on the JSON interface. This limit has been extended to 2 megabytes.

Known Limitations

The following are known limitations to the current release of VoltDB. Workarounds are suggested where applicable. However, it is important to note that these limitations are considered temporary and are likely to be corrected in future releases of the product.

1. Command Logging

1.1.

Changing the deployment configuration when recovering command logs can result in unexpected settings.

There is an issue where, if the command log contains schema changes (performed through interactive DDL statements, voltadmin update, or @UpdateApplicationCatalog), the previous deployment file settings are used when the command logs are recovered, even if an alternate deployment file is specified on the voltdb recover command line. Then, after the database is recovered, a subsequent schema update can result in the deployment settings specified on the command line taking effect.

Until this issue is resolved, the safest workaround to ensure the desired configuration is achieved is to perform the voltdb recover operation without modifying the current deployment file, then make deployment changes with the voltadmin update command after the database has started.

1.2.

Command logs can only be recovered to a cluster of the same size.

To ensure complete and accurate restoration of a database, recovery using command logs can only be performed to a cluster with the same number of unique partitions as the cluster that created the logs. If you restart and recover to the same cluster with the same deployment options, there is no problem. But if you change the deployment options for number of nodes, sites per host, or K-safety, recovery may not be possible.

For example, if a four node cluster is running with four sites per host and a K-safety value of one, the cluster has two copies of eight unique partitions (4 X 4 / 2). If one server fails, you cannot recover the command logs from the original cluster to a new cluster made up of the remaining three nodes, because the new cluster only has six unique partitions (3 X 4 / 2). You must either replace the failed server to reinstate the original hardware configuration or otherwise change the deployment options to match the number of unique partitions. (For example, increasing the site per host to eight and K-safety to two.)

1.3.

Do not use the subfolder name "segments" for the command log snapshot directory.

VoltDB reserves the subfolder "segments" under the command log directory for storing the actual command log files. Do not add, remove, or modify any files in this directory. In particular, do not set the command log snapshot directory to a subfolder "segments" of the command log directory, or else the server will hang on startup.

2. Database Replication

2.1.

Some DR data may not be delivered if master database nodes fail and rejoin in rapid succession.

Because DR data is buffered on the master database and then delivered asynchronously to the replica, there is always the danger that data does not reach the replica if a master node stops. This situation is mitigated in a K-safe environment by all copies of a partition buffering on the master cluster. Then if a sending node goes down, another node on the master database can take over sending logs to the replica. However, if multiple nodes go down and rejoin in rapid succession, it is possible that some buffered DR data — from transactions when one or more nodes were down — could be lost when another node with the last copy of that buffer also goes down.

If this occurs and the replica recognizes that some binary logs are missing, DR stops and must be restarted.

To avoid this situation, especially when cycling through nodes for maintenance purposes, the key is to ensure that all buffered DR data is transmitted before stopping the next node in the cycle. You can do this by using the @Statistics system procedure (specifically, @Statistics DR on the master cluster) to make sure the last ACKed timestamp is later than the timestamp when the previous node completed its rejoin operation.
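For example, you can check the DR statistics interactively from sqlcmd:

```shell
$ sqlcmd
1> exec @Statistics DR 0;
```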

3. Export

3.1.

Synchronous export in Kafka can use up all available file descriptors and crash the database.

A bug in the Apache Kafka client can result in file descriptors being allocated but not released if the producer.type attribute is set to "sync" (which is the default). The consequence is that the system eventually runs out of file descriptors and the VoltDB server process will crash.

Until this bug is fixed, use of synchronous Kafka export is not recommended. The workaround is to set the Kafka producer.type attribute to "async" using the VoltDB export properties.
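For example, the workaround is a single property in the Kafka export configuration of the deployment file (the surrounding export elements, which depend on your configuration, are omitted here):

```xml
<configuration>
    <property name="producer.type">async</property>
</configuration>
```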

4. SQL and Stored Procedures

4.1.

Comments containing unmatched single quotes in multi-line statements can produce unexpected results.

When entering a multi-line statement at the sqlcmd prompt, if a line ends in a comment (indicated by two hyphens) and the comment contains an unmatched single quote character, the following lines of input are not interpreted correctly. Specifically, the comment is incorrectly interpreted as continuing until the next single quote character or a closing semicolon is read. This is most likely to happen when reading in a schema file containing comments. This issue is specific to the sqlcmd utility.

A fix for this condition is planned for an upcoming point release.
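For illustration, a schema file like the following (the table names are hypothetical) can trigger the problem when read by sqlcmd:

```sql
CREATE TABLE contractions (
    phrase VARCHAR(32)   -- stores values like don't
);
-- sqlcmd misreads the unmatched quote in the comment above,
-- so subsequent statements may be swallowed into a phantom string:
CREATE TABLE other_table (id INTEGER);
```

Rewording comments to avoid unmatched single quotes (for example, writing "do not" instead of "don't") avoids the issue.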

4.2.

Do not use assertions in VoltDB stored procedures.

VoltDB currently intercepts assertions as part of its handling of stored procedures. Attempts to use assertions in stored procedures for debugging or to find programmatic errors will not work as expected.

4.3.

The UPPER() and LOWER() functions currently convert ASCII characters only.

The UPPER() and LOWER() functions return a string converted to all uppercase or all lowercase letters, respectively. However, for the initial release, these functions only operate on characters in the ASCII character set. Other case-sensitive UTF-8 characters in the string are returned unchanged. Support for all case-sensitive UTF-8 characters will be included in a future release.
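For example (using a hypothetical table and column), only the ASCII letters are converted:

```sql
-- A value such as 'café' is returned as 'CAFé':
-- the accented character is left unchanged
SELECT UPPER(city_name) FROM cities;
```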

5. Client Interfaces

5.1.

Avoid using decimal datatypes with the C++ client interface on 32-bit platforms.

There is a problem with how the math library used to build the C++ client library handles large decimal values on 32-bit operating systems. As a result, the C++ library cannot serialize and pass Decimal datatypes reliably on these systems.

Note that the C++ client interface can send and receive Decimal values properly on 64-bit platforms.

6. Enterprise Manager

Important

The VoltDB Enterprise Manager is part of the VoltDB Enterprise Edition and continues to be supported for customers who are currently using it. However, due to limitations in its implementation, no further development work is being done on the Enterprise Manager and it is not recommended for new deployments. The Enterprise Manager's functionality will be replaced by new, more robust, deployment and management capabilities in the future.

6.1.

Manual snapshots not copied to the Management Server properly.

Normally, manual snapshots (those created with the Take a Snapshot button) are copied to the management server. However, if automated snapshots are also being created and copied to the management server, it is possible for an automated snapshot to overwrite the manual snapshot.

If this happens, the workaround is to turn off automated snapshots (and their copying) temporarily. To do this, uncheck the box for copying snapshots, set the frequency to zero, and click OK. Then re-open the Edit Snapshots dialog and take the manual snapshot. Once the snapshot is complete and copied to the management server (that is, the manual snapshot appears in the list on the dialog box), you can re-enable copying and automated snapshots.

6.2.

Old versions of Enterprise Manager files are not deleted from the /tmp directory

When the Enterprise Manager starts, it unpacks files that the web server uses into a subfolder of the /tmp directory. It does not delete these files when it stops. Under normal operation, this is not a problem. However, if you upgrade to a new version of the Enterprise Edition, files for the new version become intermixed with the older files and can result in the Enterprise Manager starting databases using the wrong version of VoltDB. To avoid this situation, make sure these temporary files are deleted before starting a new version of VoltDB Enterprise Manager.

The /tmp directory is emptied every time the server reboots. So the simplest workaround is to reboot your management server after you upgrade VoltDB. Alternately, you can delete these temporary files manually by deleting the winstone subfolders in the /tmp directory:

$ rm -vr /tmp/winstone*

6.3.

Enterprise Manager configuration files are not upwardly compatible.

When upgrading VoltDB Enterprise Edition, please note that the configuration files for the Enterprise Manager are not upwardly compatible. New product features may make existing database and/or deployment definitions unusable. It is always a good idea to delete existing configuration information before upgrading. You can delete the configuration files by deleting the ~/.voltdb directory. For example:

$ rm -vr ~/.voltdb

6.4.

Enterprise Manager cannot start two databases on the same server.

In the past, it was possible to run two (or more) databases on a single physical server by defining two logical servers with the same IP address and making the ports for each database unique. However, as a result of internal optimizations introduced in VoltDB 2.7, this technique no longer works when using the Enterprise Manager.

We expect to correct this limitation in a future release. Note that it is still possible to start multiple databases on a single server manually using the VoltDB shell commands.

6.5.

The Enterprise Manager cannot start or manage a replica database for database replication.

Starting with VoltDB 5.1, database replication (DR) has changed and the VoltDB Enterprise Manager can no longer correctly configure, start, or manage a replica database. The recommended method is to start the database manually and use the built-in VoltDB Management Center to manage the database by connecting to the cluster nodes directly on the HTTP port (8080 by default).

Implementation Notes

The following notes provide details concerning how certain VoltDB features operate. The behavior is not considered incorrect. However, this information can be important when using specific components of the VoltDB product.

1. VoltDB Management Center

1.1.

Schema updates clear the stored procedure data table in the Management Center Monitor section

Any time the database schema or stored procedures are changed, the data table showing stored procedure statistics at the bottom of the Monitor section of the VoltDB Management Center is reset. As soon as new invocations of the stored procedures occur, the statistics table shows new values based on performance after the schema update. Until invocations occur, the procedure table is blank.

2. SQL

2.1.

You cannot partition a table on a column defined as ASSUMEUNIQUE.

The ASSUMEUNIQUE attribute is designed for identifying columns in partitioned tables where the column values are known to be unique but the table is not partitioned on that column, so VoltDB cannot verify complete uniqueness across the database. Using interactive DDL, you can create a table with a column marked as ASSUMEUNIQUE, but if you try to partition the table on the ASSUMEUNIQUE column, you receive an error. The solution is to drop and add the column using the UNIQUE attribute instead of ASSUMEUNIQUE.
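As a sketch (the table and column names are hypothetical):

```sql
CREATE TABLE orders (
    order_id BIGINT NOT NULL ASSUMEUNIQUE,
    cust_id  BIGINT NOT NULL
);
-- Fails: the partitioning column cannot be ASSUMEUNIQUE
PARTITION TABLE orders ON COLUMN order_id;

-- Workaround: redefine the column as UNIQUE, then partition
ALTER TABLE orders DROP COLUMN order_id;
ALTER TABLE orders ADD COLUMN order_id BIGINT NOT NULL UNIQUE;
PARTITION TABLE orders ON COLUMN order_id;
```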

2.2.

Adding or dropping column constraints (UNIQUE or ASSUMEUNIQUE) is not supported by the ALTER TABLE ALTER COLUMN statement.

You cannot add or remove a column constraint such as UNIQUE or ASSUMEUNIQUE using the ALTER TABLE ALTER COLUMN statement. Instead, to add or remove such constraints, you must drop and then re-add the column with the desired definition. For example:

ALTER TABLE employee DROP COLUMN empID;
ALTER TABLE employee ADD COLUMN empID INTEGER UNIQUE;

2.3.

Do not use UPDATE to change the value of a partitioning column

For partitioned tables, the value of the column used to partition the table determines what partition the row belongs to. If you use UPDATE to change this value and the new value belongs in a different partition, the UPDATE request will fail and the stored procedure will be rolled back.

Whether the UPDATE succeeds depends on whether the new value hashes to the same partition as the old value. Because you cannot reliably predict this, you should not use UPDATE to change the value of partitioning columns.

The workaround, if you must change the value of the partitioning column, is to use both a DELETE and an INSERT statement to explicitly remove and then re-insert the desired rows.
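For example, to change the partitioning value of a row in a hypothetical employee table partitioned on empID:

```sql
-- Remove the row, then re-insert it with the new partitioning value
DELETE FROM employee WHERE empID = 123;
INSERT INTO employee (empID, dept, name) VALUES (456, 'sales', 'Jones');
```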

2.4.

Certain SQL syntax errors result in the error message "user lacks privilege or object not found" when compiling the runtime catalog.

If you refer to a table or column name that does not exist, the VoltDB compiler issues the error message "user lacks privilege or object not found". This can happen, for example, if you misspell a table or column name.

Another situation where this occurs is if you mistakenly use double quotation marks to enclose a string literal (such as WHERE ColumnA="True"). ANSI SQL requires single quotes for string literals and reserves double quotes for object names. In the preceding example, VoltDB interprets "True" as an object name, cannot resolve it, and issues the "user lacks privilege" error.

The workaround, if you receive this error, is to look for misspelled table or column names, or string literals delimited by double quotes, in the offending SQL statement.

3. Runtime

3.1.

File Descriptor Limits

VoltDB opens a file descriptor for every client connection to the database. In normal operation, this use of file descriptors is transparent to the user. However, if there are an inordinate number of concurrent client connections, or clients open and close many connections in rapid succession, it is possible for VoltDB to exceed the process limit on file descriptors. When this happens, new connections may be rejected or other disk-based activities (such as snapshotting) may be disrupted.

In environments where there are likely to be an extremely large number of connections, you should consider increasing the operating system's per-process limit on file descriptors.
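On Linux, you can inspect and raise the per-process limit from the shell that starts the VoltDB process. This is a sketch; the appropriate value depends on your expected connection count, and persistent changes are typically made in /etc/security/limits.conf or the service manager configuration:

```shell
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit as far as the hard limit allows,
# for processes started from this shell session
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn
```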

3.2.

Protecting VoltDB Against Port Scanners

VoltDB uses a number of different ports for interprocess communication as well as features such as HTTP access, DR, and so on. Port scanning software often interferes with normal operation of such ports by sending bogus data to them in an attempt to identify open ports.

VoltDB has hardened its port usage to ignore unexpected or irrelevant data from port scanners. However, the ports used for Database Replication (DR) cannot be protected in this way. So, in V4.6, a Java property was introduced to allow you to disable the DR ports, for situations where port scanning cannot be avoided. To disable the DR ports, set the Java property VOLTDB_DISABLE_DR to true before starting the database process. For example:

$ export VOLTDB_OPTS="-DVOLTDB_DISABLE_DR=true"
$ voltdb create myapplication.jar \
                --deployment=deployment.xml \
                --host=voltsvr1

Note that, if you disable the DR ports, you cannot use the database as a master for database replication.