Release Notes


Product: VoltDB

Version: 6.6.8

Release Date: November 22, 2017

This document provides information about known issues and limitations to the current release of VoltDB. If you encounter any problems not listed below, please be sure to report them to [email protected]. Thank you.

Upgrading From Older Versions

The process for upgrading from a previous version of VoltDB is as follows (a consolidated command sketch appears after the list):

  1. Place the database in admin mode (using voltadmin pause).

  2. Perform a manual snapshot of the database (using voltadmin save --blocking).

  3. Shutdown the database (using voltadmin shutdown).

  4. Upgrade the VoltDB software.

  5. Initialize a new database root directory (using the voltdb init --force action).

  6. Start the database in admin mode (using the voltdb start --pause action).

  7. Restore the snapshot created in Step #2 (using voltadmin restore).

  8. Return the database to normal operations (using voltadmin resume).
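
For reference, the eight steps map to commands like the following. This is a minimal sketch: the snapshot directory (/tmp/voltdb/backups), the snapshot nonce (upgrade), and the details of the software upgrade step are placeholders to adapt to your environment.

voltadmin pause
voltadmin save --blocking /tmp/voltdb/backups upgrade
voltadmin shutdown
# ... upgrade the VoltDB software ...
voltdb init --force
voltdb start --pause
voltadmin restore /tmp/voltdb/backups upgrade
voltadmin resume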

For customers upgrading from version 5.9 or later, it is possible to minimize the downtime required for the upgrade by performing a "hot" upgrade across clusters using database replication (DR). See the section on upgrading across clusters in the VoltDB Administrator's Guide for more information.

For customers upgrading from V5.x or earlier releases of VoltDB, please see the V5.0 Upgrade Notes.

For customers upgrading from V4.x or earlier releases of VoltDB, please see the V4.0 Upgrade Notes.

Changes Since the Last Release

Users of previous versions of VoltDB should take note of the following changes that might impact their existing applications.

1. Release V6.6.8 (November 22, 2017)

1.1. Recent improvements

The following limitations in previous versions have been resolved:

  • Previously, attempts to write a small (63 bytes or less) VARBINARY value into a larger (64 bytes or more) VARCHAR column could crash the database due to an access violation or segmentation fault. For example, performing an INSERT INTO SELECT where the target column is defined as VARCHAR(64 BYTES) and the source column is VARBINARY(62). This issue has been resolved.

  • There was a race condition that could, on very rare occasions, be triggered by a schema change while a bulkloader (such as csvloader, jdbcloader, etc.) was running. The symptom of the race condition is that the bulkloader would report a hash mismatch and shut down the database. This issue has been resolved.

  • Previously, if a database continually failed to write snapshots due to lack of disk space, ultimately the snapshot process would hang and no further snapshots were written, even if sufficient disk space was freed up. This issue has been resolved.

  • The VoltDB DECIMAL datatype is a 16-byte fixed scale decimal value. Previously, the csvloader utility did not properly handle strings representing values that exceeded this limit in terms of scale or precision. Instead, it threw an exception, resulting in the error "failed to drain all buffers, some tuples may not be inserted yet." This issue has been resolved.

  • There was an issue introduced in the previous release that could cause unexpected errors or, possibly, crash the database. The problem was limited to the combination of several conditions: a SELECT expression including COUNT(*), where the selection is filtered on a less-than (<) or less-than or equal-to (<=) comparison of a string value to a column, where the string value was longer than the column's maximum size, the column was indexed, and only certain index definitions applied. This issue has been resolved.

2. Release V6.6.7 (June 23, 2017)

2.1. Recent improvements

The following limitations in previous versions have been resolved:

  • Previously, updating the stored procedure classes (using the sqlcmd LOAD CLASSES directive or the @UpdateClasses system procedure) updated all of the classes in the schema. As a result, the more classes in the schema, the longer the update would take, even if the update itself was small. VoltDB now updates only those classes affected by the change, significantly reducing the time required for the update to complete.

  • VoltDB does not permit schema changes while a node is rejoining the cluster or a new node is being added elastically. However, there was a race condition where, if a rejoin and a schema change were initiated at approximately the same time, it was possible for both to occur simultaneously. The consequence was that the rejoining node would receive the old rather than the updated schema. Once the rejoin operation was complete, the different schema would result in a hash mismatch error, stopping the database. This rare condition has been corrected.

  • Previously, the JSON API returned the wrong error status for invalid URLs and lack of permission. It now returns error codes 404 and 401, respectively.

  • There was a very rare condition where a hash collision in the planner could cause it to select the wrong SQL plan on one node. Although still extremely rare, the chances of a collision were increased with frequent schema and procedure changes. If this happened, the database would detect a hash mismatch on the transaction, shutting down the database to preserve data integrity. This issue has been resolved.

  • There was an edge case where, if a table had a VARCHAR or VARBINARY column that was indexed as part of a complex index and the column was also used in a MIN() or MAX() aggregate function of a view, updating that column could result in a fatal SIGSEGV error. This issue has been resolved.

  • VoltDB now provides more thorough logging of security events. Specifically, every authorization failure is logged, each user who successfully authenticates is logged, and every configuration change is logged prior to the change being made.

  • Previously, a cluster could hang if, while a node was rejoining and nearly finished, another node failed or shut down. This condition has been corrected and the rejoin operation will stop if another node failure is detected.

  • There was a race condition associated with recovery and replaying multi-partition transactions. The symptom was a "duplicate counter collision" error during recovery. In previous versions, it was possible that a second recovery would succeed since the race condition might not be triggered the second time. This release eliminates the race condition and resolves the problem.

  • There was an issue where the BulkLoader programming interface could lose track of the correct number of processed records, resulting either in a negative value for outstanding rows, which caused the drain() operation to never complete, or an incorrect count of the successful rows inserted. Both of these issues have been resolved.

  • There was also a race condition in the BulkLoader, aggravated by frequent insert failures, where it could lose track of exactly how many failures had occurred. The consequences of this race condition were that the loader utilities (such as csvloader) could report an incorrect number of failures and for applications using the BulkLoader API, any callback associated with failures might not be invoked for all failure cases. This issue has been resolved.

  • The VoltDB Java client includes a timeout value that is user settable and breaks a hung connection after the specified time limit. However, if the server went down while the Java client was initially authenticating the connection, the timeout was not observed and the client process would hang indefinitely. This issue has been resolved and authentication requests now obey the connection timeout limit.

  • Previously, when certain multi-column indexes were present, an attempt to compare a VARCHAR column value to a string longer than the column's maximum length would result in a runtime error. For example, if PRODUCTCODE is defined as VARCHAR(2), the expression "WHERE PRODUCTCODE = 'ABC'" would return an error rather than false. This issue has been resolved. Comparisons of long strings to VARCHAR columns now return the appropriate true or false response for the conditional expression.

3. Release V6.6.6 (February 21, 2017)

3.1. Recent improvements

The following limitations in previous versions have been resolved:

  • Previously, if updating a view took too long (for example, if the table contained significant amounts of data before the view was created), the statement could fail, causing the database server process to stop on either an individual node or the cluster as a whole. This issue has been resolved.

  • There was an issue where, if multiple VoltDB client connections were created in separate threads of a Java application, a race condition could cause the clients to attempt to use shared resources before they were properly initialized. The symptom of this problem was that the connections failed reporting an illegal state exception accessing the ReverseDNSCache. This issue has been resolved.

4. Release V6.6.5 (February 3, 2017)

4.1. Recent improvements

The following limitation in previous versions has been resolved:

  • There was a race condition where occasionally, when starting XDCR, the request for a synchronizing snapshot from the consumer cluster resulted in an error on the producer cluster indicating it could not initiate the snapshot. This issue has been resolved.

5. Release V6.6.4 (December 30, 2016)

5.1. Recent improvements

The following limitations in previous versions have been resolved:

  • Deleting data frequently can trigger memory compaction. In rare cases, this compaction could coincide with both a simultaneous snapshot and an attempt to elastically add a node to the cluster. The result of this race condition was that the add operation failed with a message starting "HOST: Elastic index clear is not allowed while an index is present." This issue has been resolved.

  • Updating procedure classes (for example, using the LOAD CLASSES directive in sqlcmd) was using more memory than required to read the classes from the JAR file, resulting in excessive garbage collection in the Java heap. The more classes in the JAR file, the more noticeable the condition. Memory allocation has been improved for this operation, alleviating the issue.

  • Previously there was an edge case where, if you started a K-safe cluster as a DR replica, but the cluster never connected to a master database, then you promoted the cluster, a node failed, and you attempted to rejoin the node, the cluster would fail. This issue has been resolved.

  • There was an issue related to passive DR where, if a node attempted to rejoin the master cluster while the replica was initiating replication, the rejoin would fail and the master cluster would break replication. This issue has been resolved.

  • When starting data replication (DR), existing data is exchanged using a snapshot. If the DR process cannot start the snapshot because another snapshot is already in progress, it retries. Previously, the DR process retried the snapshot only a limited number of times. This issue has been corrected and the snapshot is now retried for up to two hours, until it starts or DR is reset.

6. Release V6.6.3 (November 28, 2016)

6.1. Recent improvements

The following limitations in previous versions have been resolved:

  • During database replication (DR), consumer and producer nodes communicate using separate threads for each partition. It is possible for the producer node to get an exception (for example, under certain scenarios when the consumer node fails). In the past, these exceptions would stop the listener thread and no new connection could be established until DR was reset. This issue has been resolved and the listener threads now catch exceptions and remain available for new connections.

  • It was possible, when using database replication (DR), for one of the threads used by DR to fail with a ConcurrentModificationException error, which caused replication to stop. This issue has been resolved.

7. Release V6.6.2 (November 21, 2016)

7.1. NoSuchElementException when running XDCR

There was a race condition in earlier releases where cross data center replication (XDCR) could be terminated, reporting the error "NoSuchElementException". The clusters remained active but no further replication occurred until DR was reset and restarted. This issue has been resolved.

8. Release V6.6.1 (November 17, 2016)

There were two issues, specific to V6.6, that are resolved in this release.

8.1. Slow DR performance and out-of-memory issues

Database replication (DR) supports different topologies on the two clusters. To achieve this, if single-partitioned procedures attempt to update records including different partition column values, the consumer cluster may have to redirect the write operations, essentially turning the single-partition procedure into a multi-partition transaction. This is most notable for actions such as batch inserts using utilities such as csvloader. However, for V6.6, this operation was not optimized to remain single-partitioned if the clusters had the same topology. As a consequence, when using DR with many multi-partition transactions or large single-partition transactions involving multiple partition column values, performance suffered and the cluster could ultimately crash with an out of memory error. This problem is resolved in 6.6.1 and later releases.

8.2. Multi-partition deadlock on node failure

In certain rare cases, a node failure in a K-safe V6.6 cluster could result in the database hanging, reporting a multi-partition deadlock. This issue is specific to V6.6 and is resolved in all later versions.

8.3. Additional improvements

In addition to the preceding V6.6-specific fixes, the following limitation has been resolved:

  • In previous releases, if a network error occurred after the initial DR snapshot was processed but before all partitions had started replicating, some partitions might never start replicating. This issue has been resolved.

9. Release V6.6 (September 8, 2016)

9.1. New VoltDB command line

Beta testing of the new command line is complete. The new, improved commands init and start are ready for production use and are the recommended method for running VoltDB, replacing the legacy commands create, recover, rejoin, and add.

The new commands are described in the reference page for the voltdb command. The example applications that come with the VoltDB software use the new commands. The remainder of the documentation will be updated to reflect the new commands over the next few releases.

9.2. Separate resource paths per server using new command line

When initializing the database root directories for a cluster with the voltdb init command, each server can specify separate paths for its resources. For example, the database root itself can be different. Also, explicit paths in the deployment file, such as the paths for command logs, export overflow, and snapshots, can be unique per server if you wish. These path attributes are not checked when comparing the deployment files between servers on cluster startup.
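
For example, two servers in the same cluster might be initialized with different root directories, along the lines of the following sketch (the --dir and --config options and the paths shown are illustrative):

voltdb init --dir=/data/voltdbroot --config=deployment.xml      # server 1
voltdb init --dir=/mnt/fast/voltdbroot --config=deployment.xml  # server 2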

9.3. Support for running different versions of VoltDB on XDCR clusters

It is now possible to use different versions of VoltDB on the two clusters used for cross data center replication (XDCR). The main use of this capability is to allow "live" software upgrades of VoltDB by upgrading each cluster in sequence. See the section on "Upgrading VoltDB Software" in the VoltDB Administrator's Guide for more information. (Note that running different versions for an extended period is not recommended, due to incompatibilities resulting from new SQL syntax introduced by later versions of the software.)

9.4. Joins in views

VoltDB now supports joining multiple tables in a view using inner joins. For best performance, having an index on the joining column(s) is strongly recommended. See the description of the CREATE VIEW statement in the Using VoltDB manual for details.

9.5. RANK() window function in the selection list

VoltDB also now supports use of the RANK() window function in the selection list of a SELECT statement. For example:

SELECT city, population,
    RANK() OVER (ORDER BY population) FROM cities...

See the description of the SELECT statement in the Using VoltDB manual for details.

9.6. Improvements to snapshots

The following improvements have been made to saving and restoring snapshots:

  • When saving a manual snapshot, if the specified target directory does not exist, VoltDB will attempt to create the directory before saving the snapshot.

  • When restoring partial snapshots (snapshots that contain only selected tables from the original database), you can restore multiple snapshots as long as the tables covered by the snapshots do not overlap. That is, each snapshot contains data from unique tables.

9.7. New trigonometric functions

New SQL functions for sine, cosine, tangent, secant, cosecant, and cotangent are available. The functions are SIN(), COS(), TAN(), SEC(), CSC(), and COT(), and take one numeric argument representing an angle measured in radians. See the appendix of SQL functions in the Using VoltDB manual for details.
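
For example, the following query exercises several of the new functions at once (readings is a hypothetical table whose angle column is measured in radians):

SELECT id, SIN(angle), COS(angle), TAN(angle)
    FROM readings
    WHERE COT(angle) > 1.0;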

9.8. More flexibility for SQL functions without input arguments

SQL functions that do not require any input arguments, such as NOW() and PI(), can now be specified either with or without the parentheses. Previously, some functions required the parentheses and some did not accept them. Both forms are now allowed. For example, the current timestamp value can be specified as either CURRENT_TIMESTAMP() or CURRENT_TIMESTAMP.

9.9. New directives in sqlcmd to echo text to the output

Two new directives, ECHO and ECHOERROR, let you write comments into the output of the sqlcmd utility. Each directive writes any following text on the line, as is, to standard output or standard error, respectively. See the sqlcmd command in the Using VoltDB manual for details.
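
For example, a sqlcmd script might interleave the directives with SQL statements like this (the messages and DDL are illustrative):

ECHO Creating the towns table...
CREATE TABLE towns (name VARCHAR(64), population BIGINT);
ECHOERROR Review any errors reported above before continuing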

9.10. Heap size recommendations in VoltDB Management Center

The web-based VoltDB Management Center now includes recommended settings for the Java heap in the sizing worksheet. The recommendations are based on the current database schema, plus the desired cluster configuration and estimated data volume. After starting the database, go to the Management Center (http://{server-address}:8080), click on the Schema tab and then Size Worksheet to use the worksheet.

The Management Center also includes the current and recommended heap settings for the current configuration in the Overview section of the Schema tab.

9.11. Support for Ubuntu 16.04

VoltDB now supports Ubuntu 16.04 as a base platform for production usage.

9.12. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • There was an issue when using the MIN() function to aggregate values from a column where, if the column contained null values and was indexed and was being evaluated in a conditional expression using the less-than or less-than-or-equal-to operators, the expression might not be evaluated properly. This issue has been resolved.

  • An issue with type conversion in C# was causing the VoltDB C# client library to produce incorrect values for DECIMAL typed values, for example when declaring a VoltDecimal type to pass as an argument to a stored procedure. The client now internally declares and passes VoltDecimal as strings, resolving the problem.

  • Previously, if the JDBC export connector could not connect to the target database for any reason (such as a misspelled connection string), the VoltDB cluster would stop. This is no longer the case. Now, if the JDBC connector fails to connect, the connector stops but the VoltDB database continues to run.

  • In database replication (DR), connections between the consumer and producer clusters get dropped and recreated for several reasons. Previously, when a connection was dropped, the producer waited for a network timeout to close the connection. This could result in harmless but annoying errors in the logs. The producer nodes are now more proactive in closing dropped connections.

  • An issue was introduced in the last release (V6.5) where if a view was created after the associated table was already populated and the view did not have a GROUP BY clause, the COUNT(*) selector could produce incorrect results, ignoring the pre-existing rows. This issue has been corrected.

10. Release V6.5 (July 28, 2016)

10.1. Simplified VoltDB command line (BETA)

This release introduces new, improved commands for starting VoltDB database servers. The new command line will ultimately replace the create, recover, rejoin, and add commands with just two commands: init and start.

The new commands are available for beta testing and are described in a separate beta release letter. Both new and old commands will be supported for the remaining 6.x releases. However, the old commands will be deprecated in the future so we encourage all interested customers to try the new commands and provide feedback to ensure they meet your needs. Thank you.

10.2. Improved performance of large schema updates

Changes have been made to how VoltDB distributes schema updates (including adding and removing stored procedures) to cluster nodes. These changes can significantly reduce the time required for the schema update to complete on clusters with a large number of sites per host.

10.3. New, improved VoltDB clients for ODBC and Go

Improved clients for ODBC and the Go programming language are being released simultaneously with this release of VoltDB. The Go client now supports client affinity, standard Go syntax for connecting to a database, and automatic reconnection of broken connections. Both clients will be available from the VoltDB web site client download page.

10.4. Support for FULL OUTER JOIN

The VoltDB SELECT statement now supports full outer joins.

10.5. New SQL functions to validate TIMESTAMP values

TIMESTAMP values are stored as eight-byte integers. It is possible to enter an eight-byte numeric value that is not actually a valid timestamp. The new functions MIN_VALID_TIMESTAMP() and MAX_VALID_TIMESTAMP() give you access to the valid minimum and maximum values. The function IS_VALID_TIMESTAMP() compares a TIMESTAMP value and returns true or false depending on whether the value falls within the valid range or not. See the appendix of SQL functions in the Using VoltDB manual for details.
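
For example, the following query flags rows whose stored value falls outside the valid range (the events table and its columns are hypothetical):

SELECT event_id FROM events
    WHERE NOT IS_VALID_TIMESTAMP(event_time);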

10.6. Ability to use boolean literals in SQL conditional expressions

It is now possible to use boolean literals — such as TRUE and FALSE or 1=1 and 1=0 — when evaluating conditional expressions in SQL statements. For example:

SELECT Silos FROM Barns WHERE TRUE;
SELECT Rooms FROM Houses WHERE 1=0;

10.7. The csvloader utility can now interpret header rows

The csvloader utility has a new argument, --header, which indicates that the first row of data is a header row containing the names of the columns. These column names must match names of columns in the database table. By using --header, the columns in the CSV file can be in a different order than the table columns. The CSV file can also contain a subset of columns, as long as the missing columns either have a default value or are not declared NOT NULL in the database schema. See the description of the csvloader utility in the Using VoltDB manual for details.
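
For example, assuming a towns table and a CSV file whose first row names its columns, the load might look like this (the file and table names are illustrative):

csvloader towns --header --file towns.csv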

10.8. Improved handling of missing servers in sqlcmd --servers list

The sqlcmd utility lets you specify multiple servers to connect to using the --servers argument. However, previously if any of the listed servers were unavailable, sqlcmd would not start. Now, sqlcmd ignores unavailable servers and will continue as long as at least one of the listed servers is accessible.
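
For example, the following command connects to whichever of the listed servers is available (the server names are placeholders):

sqlcmd --servers=voltsvr1,voltsvr2,voltsvr3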

10.9. When restoring data, constraint violations no longer stop the database

Previously, any constraint violation during a restore operation was interpreted as a fatal error and stopped the database. This behavior has changed. Constraint violations during restore are now reported as warnings and the data records logged to a CSV file. This allows you to restore as much data as possible and resolve the constraint violations manually once the restore operation completes.

10.10. Automatic reconnection added to JDBC driver

The VoltDB JDBC driver can now automatically re-establish lost connections. To enable this feature, you add an argument to the connection string, like so:

Connection c = DriverManager.getConnection(
     "jdbc:voltdb://svr1:21212,svr2:21212?autoreconnect=true");

See the section on the JDBC interface in the Using VoltDB manual for details.

10.11. New option to control whether the JDBC export connector creates tables in the target database

There is a new parameter for the JDBC export connector that lets you specify whether the connector should automatically create tables matching the export streams in the target database. The new parameter is createtable and is described in the documentation of the JDBC export connector in the Using VoltDB manual.

10.12. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • In previous releases, there was some overhead associated with creating the first row or deleting the last row from a table, which could create a performance issue for applications that filled and emptied database tables frequently. This overhead has been minimized to eliminate the performance bottleneck.

  • There was a rare issue related to database replication (DR) when switching from passive DR to XDCR that could cause the XDCR clusters to diverge without reporting any errors. For example, this could occur if a passive DR replica was promoted, stopped, recovered, then had its data restored from a snapshot after the recovery, then enabled as an XDCR cluster. This edge case has been resolved.

  • In previous releases, there was a race condition that could cause problems if a PARTITION PROCEDURE statement was executed while invocations of that stored procedure were in flight. The symptoms of the problem were that ad hoc queries could not be executed by either sqlcmd or the VoltDB Management Center. This issue has been resolved.

  • There was an issue with Kinesis export where export would not resume properly if all of the nodes of a K-safe cluster failed and rejoined, one after another. This issue has been resolved.

  • Previously, if file export encountered disk problems when rolling over the file (for example, if the disk was unmounted), export would not resume properly even if the cause of the disk error was corrected. This issue has been resolved.

  • In database replication (DR), if a producer cluster node fails, the consumer cluster selects a different node to use as a source for any partitions sent by that producer node. However, in previous versions, the consumer cluster could switch source nodes unnecessarily, resulting in thrashing and duplicate sending of logs. This condition has been corrected and the consumer cluster changes source nodes more judiciously.

  • The JDBC export connector exports rows in batches. A batch of inserts succeeds or fails as a group. The connector has been enhanced so that if a batch fails, the connector logs the specific rows that caused the failure.

  • There was an issue where, when using synchronous command logging, large snapshots would produce excessive heap usage, often causing long, intermittent delays as a result of garbage collection. This problem has been resolved.

  • There were cases, in database replication (DR), where if a consumer cluster fell behind in applying logs, its DR buffers could grow rapidly using up too much heap space. This issue has been corrected by having the consumer stop reading logs when the buffers get too full, allowing the overflow logs to be stored on the producer instead (where the logs are compressed and can therefore be buffered more efficiently).

11. Release V6.4 (June 24, 2016)

11.1. Heterogeneous XDCR clusters

It is now possible to use Cross Datacenter Replication (XDCR) to perform active replication between clusters of different sizes. That is, clusters with a different number of nodes, K-safety, or sites per host.

11.2. Additional information in the XDCR conflict logs

The conflict logs for Cross Datacenter Replication (XDCR) contain additional information. Two new columns were added to record the timestamp when the conflict occurred and the ID of the cluster reporting the conflict. These are in addition to the existing columns recording the timestamp and cluster ID of the transaction that generates the conflict. Also an extra row marked as DEL is added when the conflict is the result of a DELETE operation. See the chapter on Database Replication in the Using VoltDB manual for more information.

11.3. Ability to use SSL for VoltDB web interface

You can now enable encryption for the VoltDB httpd port, which is used for the JSON interface and access to the VoltDB Management Center. This means all access to these features will use the HTTPS protocol rather than unencrypted HTTP. You enable HTTPS in the deployment file. See the appendix on Server Configuration Options in the VoltDB Administrator's Guide for more information.

11.4. Ability to turn on admin mode from the command line when starting VoltDB

Previously, admin mode could be enabled at startup only by modifying the deployment file before starting. This is not an ideal approach for a setting that can be changed at runtime. Starting with 6.4, you can turn on admin mode at startup using a new command line flag, --pause. By adding --pause to the voltdb create or voltdb recover command, you can start the cluster in admin mode, which is handy when you want to perform administrative tasks, such as modifying the schema or restoring a snapshot, before allowing clients full read/write access. (Use of the deployment file to enable admin mode is still supported, but deprecated in favor of the new command line flag.)

11.5. New and enhanced SQL functions

Three new SQL functions have been added:

  • LOG10() returns the base-10 logarithm of a value

  • ROUND() returns a numeric value rounded to the specified decimal place

  • STR() returns a formatted string of a numeric value

In addition, the MOD() function has been enhanced to operate on either DECIMAL or INTEGER values. See the appendix on SQL Functions in the Using VoltDB manual for more information.
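
As a sketch of the new and enhanced functions (orders is a hypothetical table, and the length and decimal-place arguments shown for STR() are an assumption about its optional parameters):

SELECT LOG10(total), ROUND(total, 2),
    STR(total, 12, 2), MOD(quantity, 10)
    FROM orders;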

11.6. Kinesis import and export and new import properties

A new source for the VoltDB importer, Amazon Kinesis, is now available. Kinesis joins Kafka as a supported source for import. There is also an export connector for Amazon Kinesis available from the VoltDB public Github repository (https://github.com/VoltDB/export-kinesis). In addition, new properties have been added to control the operation of the import CSV and TSV formatters. See the chapter on Importing and Exporting Live Data in the Using VoltDB manual for more information.

11.7. Improved read level consistency during network partitions

A potential read consistency issue was identified and resolved. Read transactions can access data modified by write transactions on the local server, before all nodes confirm those write transactions. During a node failure or network partition, it is possible that the locally completed writes could be rolled back as part of the network partition resolution.

This could only happen in the off chance that the read transaction accesses data modified by an immediately preceding write that has not been committed on all copies of the partition prior to a network partition. But to ensure this cannot happen, reads now run on all copies of the partition, guaranteeing consensus among the servers and complete read consistency. However, it also incrementally increases the time required to complete a read-only transaction in a K-safe cluster. If you do not need complete read consistency, you can optionally set the cluster to produce faster read transactions using the old behavior, by setting the read level consistency to "fast" in the deployment file. See the appendix on Server Configuration Options in the VoltDB Administrator's Guide for more information.

11.8. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • There was an issue where VoltDB did not guard against invalid timestamp values, which could crash the server process with a "bad_year" exception. For this release, invalid values now generate runtime errors without failing the server process. Future releases will provide additional safeguards.

  • Previously, attempting to rejoin nodes simultaneously could result in cryptic fatal error messages. The rejoin operation has been changed to allow only one rejoin at a time. Now, attempting to rejoin two or more nodes at once results in each additional node waiting until the preceding rejoin completes before starting.

  • Previously, VoltDB failed to plan a statement if it included a CASE/WHEN clause evaluating a LIKE predicate. For example:

    SELECT CASE WHEN state LIKE 'NY' THEN 'New York' END ...

    This problem has been fixed.

  • Previously, attempting to configure two importers with different formatters would not work as expected. The formatter selection in the last configuration was used for all importers. This issue has been resolved.

  • It is possible to create a SQL statement with so many predicates (for example, more than 350 AND predicates) that the planner runs out of available stack space. Previously, if this happened while running an ad hoc query in sqlcmd, sqlcmd would become unresponsive. This issue has been resolved and sqlcmd now returns an appropriate error message to the user.

  • Previously, sqlcmd did not accept certain string values as TIMESTAMP arguments to procedure invocations that are valid in ad hoc SQL. For example, string arguments with only a date portion (2015-11-11) or a time without fractional seconds (2015-11-11 00:00:00) would return an error as an "unparseable date". This issue has been resolved.

  • There was an issue where VoltDB rejected selection expressions involving aggregate functions (such as SUM()) and columns not listed in the GROUP BY clause. The result was the error message "unsupported expression node 'simplecolumn'". Such expressions are supported and the spurious error has been removed.

  • In earlier 6.x releases, VoltDB did not handle the declaration of large VARCHAR columns consistently, and as a result could allow the creation of rows with a maximum size larger than the 2MB limit. This issue was not visible to the user until the system tried to save an over-sized row to a snapshot resulting in a fatal "buffer has no space" error. This issue has been resolved and the system now properly limits VARCHAR columns when the table is defined.

  • VoltDB does not currently support constant values (such as true, false, or 1=0) as boolean expressions in SQL statements, including in the selection list. However, the associated error reported that "VoltDB does not support WHERE clauses containing only constants" even if the boolean constant was not in the WHERE clause. This error message has been improved to more accurately reflect the condition it is reporting.

  • There were two race conditions related to K-safety and network partitions that could result in differences between the persisted data and responses to the client.

    • In the first case, if the cluster divides into two viable segments, a write transaction being processed during the partition could be reported as successful by the minor segment before it shuts down due to network partition resolution, although the transaction is never committed by the nodes of the surviving majority segment.

    • In the second case, again where the cluster divides into two viable segments, write transactions in flight during the network partition can be written separately to the command logs of the two segments. On recovery, not all of those write transactions may get replayed.

    Both cases, found in testing, only occurred under certain conditions and in specific configurations where a network partition could result in two viable cluster segments. Both cases have been resolved.

12. Release V6.3 (May 17, 2016)

12.1. Changing schema during Database Replication (DR)

It is now possible to modify DR tables. In passive replication, changing the schema of DR tables on the master database automatically pauses DR on the replica until you make matching schema changes there. In cross data center replication (XDCR), you should pause both clusters and ensure all binary logs have been processed before making the schema changes, then resume both databases. See the chapter on Database Replication in the Using VoltDB manual for more information.
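
For XDCR, the sequence is roughly as follows, run against each cluster (a sketch; the DDL shown is illustrative):

# pause each cluster and wait for outstanding binary logs to drain
voltadmin pause
# apply the same schema change on both clusters, for example in sqlcmd:
#   ALTER TABLE customer ADD COLUMN region VARCHAR(32);
# then resume each cluster
voltadmin resume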

12.2. More examples, better notes

The examples in the VoltDB kit have been reorganized and significantly expanded. Some of the new examples demonstrate call center tracking, mobile ads, and managing time windows for selectively deleting old data. See the README.md file in the /examples folder for more information and a complete list of examples.

12.3. Support for geospatial data in export and the JDBC interface

VoltDB now supports exporting geospatial columns through the export connectors, where the data is converted to well-known text (WKT) strings. This release also adds support for the JDBC setObject and getObject methods for converting geospatial data natively between the Java GeographyValue and GeographyPointValue types and the database GEOGRAPHY and GEOGRAPHY_POINT datatypes.

12.4. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • There was an issue where deleting the contents of a view on a stream would result in the view no longer updating after the delete. This issue has been resolved.

  • The new --force argument lets you create a new database even if data from a previous session exists, overwriting previous command logs and snapshots. However, in V6.2, --force did not delete any existing export overflow, which could cause errors for export during the new database session. This problem has been corrected; --force now explicitly deletes any pre-existing export overflow.

  • Earlier versions of the HTTP export connector did not work properly with Hadoop version 2.7 or later. This issue has been resolved.

  • Previously, in a K-safe environment, if one node was rejoining the cluster but had not fully completed the rejoin process, using voltadmin stop to remove another node could crash the cluster. This issue has been resolved; voltadmin stop is no longer allowed while a rejoin is in progress.

  • In previous releases, frequent schema changes while export is enabled could result in excessive thread use, ultimately causing the database to crash with the error "java.lang.OutOfMemoryError: unable to create new native thread". This problem has been corrected.

  • There were certain cases where aliases assigned to columns when joining multiple tables could return an error that the alias name was not found if the alias was used in a GROUP BY clause. This problem has been corrected.

  • Previously, if the Kafka importer encountered incorrectly formatted input, such as badly formatted CSV strings, the importer would stop. However, it did not report any error in the log file. This issue has been resolved and the importer now reports an error whenever invalid input causes the import process to stop.

  • There was an issue where, if a view included either the MIN() or MAX() function and the table associated with the view had an index, attempting to alter the table (for example, adding a column) would result in a fatal error indicating that VoltDB could not find the index and stopping the cluster. This issue has been resolved.

  • The back pressure mechanism for Kafka import has been adjusted to avoid situations where Kafka messages could be missed by the import process.

  • In the previous release (6.2) there was an issue associated with cross data center replication (XDCR) when resetting and restarting replication. If one cluster (A) failed and DR was reset on the remaining cluster (B) with the voltadmin DR RESET command, when cluster A reestablished DR, not all transactions on cluster A were properly replicated to cluster B. This problem has been resolved.

13. Release V6.2.1 (April 29, 2016)

13.1. Recent improvement

The following limitation has been resolved:

  • There was an issue where resetting a DR cluster (with the voltadmin DR RESET command) while a node is actively processing write transactions could cause the node to fail with a segmentation fault. This issue has been resolved.

14. Release V6.2 (April 12, 2016)

14.1. Support for different cluster sizes in passive Database Replication (DR)

Previously, passive Database Replication (DR) required both the master and replica clusters to have the same configuration; that is, the same number of nodes, sites per host, K factor, and so on. You can now use different size clusters for passive DR. You can even use different values for partition row limits on DR tables. However, be aware that using a smaller replica cluster could potentially lead to memory limitation issues. Be sure to configure both clusters with sufficient capacity for the expected volume of data.

14.2. VoltDB avoids overwriting existing database files in the voltdbroot directory

The behavior of the voltdb create command has changed. If you attempt to create a new database in a voltdbroot directory that contains command logs, snapshots, or other artifacts of a previous session, the voltdb create command issues an error. This behavior is to avoid you accidentally deleting data when you should be using the voltdb recover command. You can override this default behavior by adding the --force argument to the voltdb create command to explicitly overwrite files from the previous database session.
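
For example, to deliberately overwrite the artifacts of a previous session (the deployment file name is illustrative):

voltdb create --force --deployment=deployment.xml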

14.3. New function creates and validates GEOGRAPHY data in single step

The POLYGONFROMTEXT() function converts well-known text (WKT) representations of polygons to the GEOGRAPHY datatype, and the ISVALID() function verifies that the resulting polygon meets the requirements for VoltDB. The new function VALIDPOLYGONFROMTEXT() performs both steps in a single function, returning an error if the resulting polygon is not valid.
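
For example, the following INSERT converts and validates the WKT in a single step (regions is a hypothetical table with a GEOGRAPHY column named border):

INSERT INTO regions (id, border)
    VALUES (1, VALIDPOLYGONFROMTEXT('POLYGON((0 0, 5 0, 5 5, 0 5, 0 0))'));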

14.4. Support for geospatial datatypes in C++ client API

The VoltDB C++ client API now supports use of the geospatial datatypes GEOGRAPHY and GEOGRAPHY_POINT.

14.5. VoltDB command line utilities now prompt for the password

If you specify a username on the command line but not a password, the VoltDB command line utilities such as sqlcmd and csvloader will prompt you for the password. This feature is useful if you are scripting commands for a VoltDB database with security enabled. You no longer need to hardcode passwords into the script.

14.6. Availability of the VoltDB Deployment Manager

The VoltDB Deployment Manager is now available for general use. The Deployment Manager lets you configure and start VoltDB clusters using either a web-based interface or a programmable REST API. See the chapter on "Deploying Clusters Using the VoltDB Deployment Manager" in the VoltDB Administrator's Guide for details.

14.7. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • Previously, an UPSERT statement specifying a subset of columns would update the values of all columns in the table row. This issue has been corrected.

  • There was an edge case when using database replication (DR) where if, after DR started, a K-safe cluster stopped and recovered, then one node failed and rejoined, and finally another node on the cluster stopped, the first node would also stop. This issue has been resolved.

  • There was an issue in previous releases with conflict resolution in cross datacenter replication (XDCR). Timestamp mismatches were not resolved correctly, causing the two databases to diverge. However, the conflict log reported no conflict. The conflict resolution process has now been corrected.

  • Previously, if two subqueries returned columns with the same names, it was possible for the statement to return incorrect information or report an error. In the short-term, all cases that might result in incorrect results now report a meaningful error. The workaround is to define unique aliases for such columns. In the longer-term, a future release will allow valid cases of identical column names from subqueries and process them appropriately.

  • In previous releases, users defined in the deployment file with a space in the username could cause the database process to fail. Spaces are not allowed in usernames and are now rejected with a meaningful error message at startup.

  • Previously, when using the file export connector, if the CSV file location became inaccessible (due to lack of permissions or disk failure), the connector did not report an error or write export data to the export overflow and so export data was lost. This issue has been resolved.

  • In previous releases, deployment changes, made to the running database, such as the addition of new users or export connectors, were recorded in the command log as a transaction. If no snapshots were taken before the database shutdown, attempting to recover the logs could fail with the error, "Invalid catalog command", when replaying the deployment change into the new database state. This issue has been resolved.

  • The maximum allowable clock skew between nodes when the cluster starts has been extended from 100 to 200 milliseconds.

  • Previously, when attempting to rejoin a node to a running cluster where the joining node used an incompatible version of the VoltDB software, both the joining node and the running cluster would fail. This issue has been corrected and now only the rejoining node fails if there is a software version mismatch; the cluster is unaffected.

  • When using database replication (DR), if the replica was promoted, then the schema or configuration was updated (for example, using DDL statements or changing the deployment file through voltadmin update or the VoltDB Management Center Admin tab), the cluster's DR connection was re-enabled, resulting in spurious warnings in the log reporting that the cluster failed to connect to the DR producer. This issue has been resolved.

  • In previous releases, certain @Statistics results could return erroneous negative values for memory usage. The datatype for these columns has been increased (to BIGINT) to allow for appropriately sized positive values.

  • The maximum length of an ad hoc query (in sqlcmd, the VoltDB Management Center, or through the @AdHoc system procedure) has been increased from 32 kilobytes to 1 megabyte. This means it is now possible to submit and process more complex ad hoc queries than before.

  • There was an issue in earlier releases where, if the database was in admin mode, restoring a snapshot would not load the associated schema as expected. This problem has been corrected.

  • Previously, the sqlcmd --output-skip-metadata flag did not, as advertised, remove all metadata from the output. This issue has been resolved.

  • The Java client library uses backpressure to "pause" client transaction requests if there are too many procedures queued on the server. Unfortunately, the original implementation of backpressure in the client API could result in a "value out of range" error. This problem has been corrected.

15. Release V6.1 (March 4, 2016)

15.1. New streaming data capabilities

This release introduces a new concept in VoltDB: streams. Streams act like virtual tables. You declare them like tables using the CREATE STREAM statement and you insert data into the stream using the INSERT statement. However, data inserted into a stream is not actually stored in the database. Data inserted into a stream can be analyzed (using views) and streamed directly to other business systems (using export).

To analyze streaming data, you define a stream using the CREATE STREAM statement, then define a view on that stream using the CREATE VIEW statement. This view allows you to perform summary analysis on the data as it passes through the database without paying the penalty of actually storing the data, all in a transactionally consistent way.

Although you cannot modify the underlying data of such views — because the stream is transient — views on streams are unique in that you can update the view itself if needed. For example, you can create a daily summary of a stream by resetting the view's values to zero at midnight using a DELETE FROM {view-name} or UPDATE {view-name} statement.
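
For example, a per-user login summary over a stream might be declared and reset like this (the stream, view, and column names are illustrative):

CREATE STREAM logins PARTITION ON COLUMN user_id (
    user_id BIGINT NOT NULL,
    login_time TIMESTAMP
);
CREATE VIEW logins_per_user (user_id, total)
    AS SELECT user_id, COUNT(*) FROM logins GROUP BY user_id;
-- reset the accumulated counts (for example, at midnight)
DELETE FROM logins_per_user;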

You can also export data from streams to external systems using VoltDB's existing export infrastructure. In fact, streams replace the old EXPORT TABLE concept. Instead of defining a table then declaring it as an export table, you now define a stream and assign it to an export target all in one statement. For example:

CREATE STREAM visits 
  EXPORT TO TARGET archive (
    user_id BIGINT NOT NULL,
    login TIMESTAMP
);

In the export deployment configuration, the old stream attribute is now replaced by target, to make the terminology consistent. Note that, although the EXPORT TABLE DDL statement and the deployment stream attribute are now deprecated, they will still be supported for backwards compatibility until some future major release.

See the description of the CREATE STREAM statement in the Using VoltDB manual for more information.

15.2. Support for indexes on geospatial GEOGRAPHY columns

VoltDB now supports GEOGRAPHY columns in indexes. The index can be applied to instances of the CONTAINS() function where the indexed column is the first argument. For example, an index including the GEOGRAPHY column border could optimize the following query:

SELECT assets.id, counties.county, counties.state 
    FROM counties, assets
    WHERE CONTAINS(counties.border, assets.loc);
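
The index itself is declared like any other column index; for example, using the same counties table:

CREATE INDEX county_border_idx ON counties (border);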

15.3. New boolean function for measuring geospatial distances

The new geospatial function DWITHIN() determines if two geospatial values (two points or a point and a geographical region) are within a specified distance of each other. For example, the following query returns all the restaurants within 5,000 meters of a tourist:

SELECT r.name, r.address, DISTANCE(r.loc,t.loc)
    FROM restaurants AS r, tourists AS t
    WHERE DWITHIN(r.loc, t.loc,5000) AND t.id = ?
    ORDER BY DISTANCE(r.loc,t.loc) ASC;

15.4. Ability to load geospatial values with the csvloader

The csvloader now supports the ability to load values into geospatial columns (GEOGRAPHY or GEOGRAPHY_POINT) by including the values as well-known text (WKT) in the CSV input. See the chapter on "Creating Geospatial Applications" in the VoltDB Guide to Performance and Customization for information on creating WKT compatible with the geospatial datatypes.

15.5. Beta release of VoltDB Deployment Manager

We are working on a new deployment process for configuring and starting VoltDB clusters. The VoltDB Deployment Manager is a daemon process that supports both a RESTful API for scripting and a fully interactive web interface. Although not ready for production use, a beta version is included in the current software kits. Customers interested in trying out the new Deployment Manager and providing feedback should contact VoltDB support for more information.

15.6. Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • In previous releases, if the operator used the wrong command to rejoin a failed node to the cluster (using voltdb recover instead of voltdb rejoin), the cluster would fail. Now, the invalid operation fails but the cluster is not affected.

  • Previously, if the export subsystem could not write data to the export_overflow directory (for example, if the disk was full), the VoltDB server process would generate errors in the log but not stop. However, this behavior results in lost data. So, to preserve data integrity and durability, the server process now fails in this situation. In a K-safe cluster, the other nodes will continue, keeping the database online, until operators can address the system issues with the failed node and rejoin it to the cluster.

  • There was an issue where a SQL query would generate a run-time error if the query contained both a JOIN and a {column-value} IN {list} condition, where the column being evaluated was indexed. This problem has been corrected.

  • VoltDB V6.0 corrected many situations where ambiguous column references were previously allowed. However, there were still some edge cases that were not covered. Specifically, the ORDER BY clause in a JOIN query still allowed ambiguous column references. This issue has been resolved.

  • Previously, if a WHERE clause compared a VARCHAR column to a value (such as WHERE TEXT_COLUMN = '12345'), the column is indexed, and the value is longer than the maximum length of the column, then the query would generate a runtime error stating that the value exceeds the size of the column. However, the query is a comparison, not an insertion, so no error should be required. This condition has been corrected.

  • There was an issue in the Kafka importer where, if all records for a topic were imported (that is, there were no outstanding messages in the queue) and the database stopped and restarted with a voltdb recover command, the import would restart from the beginning rather than at the last imported record. This problem has been corrected.

  • In rare cases, the Kafka importer issued an error stating that it "failed to stop the import bundles" when the database schema or deployment file settings were changed. Although annoying, this error did not indicate any failure in the system itself. This misleading error message has been corrected.

  • There was an issue with SELECT queries of partitioned tables where, if the partitioning column was included in the selection list multiple times, and that column was also part of the GROUP BY clause, the query returned incorrect results. This issue has been resolved.

16. Release V6.0.1 (February 24, 2016)

16.1. Performance tuning for cross datacenter replication (XDCR)

When running in virtualized environments, it is possible for the thread that creates binary logs for database replication (DR) to compete with transactions on the local cluster, causing occasional increased latency. To mitigate this situation, the initial size of the buffer for binary logs has been changed to 512KB, which is optimized for most workloads.

However, if your workload observes long latencies when running cross datacenter replication (XDCR), you may need to adjust the size of the buffer. To change the default buffer size, set the Java system property DR_DEFAULT_BUFFER_SIZE (through the VOLTDB_OPTS environment variable) before starting the database process, specifying the size in bytes:

export VOLTDB_OPTS="-DDR_DEFAULT_BUFFER_SIZE=nnnn"

16.2. Recent improvements

The following limitations in previous versions have been resolved:

  • There was an issue with database replication (DR) where, if multiple transactions generated excessively large binary logs in a short period of time (more than 2 megabytes each in under a second) DR could fail, possibly taking the database cluster with it. The symptom when this occurred was that one or more nodes would fail with a DR buffer overflow error. This issue has now been corrected.

  • Previously, using the JDBC method getFloat() to retrieve a negative value or zero (<=0) would result in the JDBC interface throwing an exception. This issue has been resolved.

17. Release V6.0 (January 26, 2016)

17.1.

Updated operating system and software requirements

The operating system and software requirements for VoltDB have been updated based on changes to the supported versions of the underlying technologies. Specifically:

  • Ubuntu 10.04, Red Hat and CentOS releases prior to 6.6, and OS X 10.7 are no longer supported. The supported operating system versions are CentOS 6.6, CentOS 7.0, RHEL 6.6, RHEL 7.0, and Ubuntu 12.04 and 14.04, with support for OS X 10.8 and later as a development platform.

  • The VoltDB server process requires Java 8. The Java client library supports both Java 7 and 8.

  • The required Python version for the Python client and the VoltDB command line utilities has been upgraded from 2.5 to 2.6.

17.2.

Memory resource monitoring is on by default

Resource monitoring is enabled by default with a memory limit of 80%. If memory usage exceeds this limit, the database is placed in read-only mode until usage drops below the limit. You can alter the resource limits in the deployment file. See the section on resource monitoring in the VoltDB Administrator's Guide for details.

17.3.

New geospatial datatypes and functions

VoltDB now supports two new datatypes, GEOGRAPHY and GEOGRAPHY_POINT, and several new functions optimized for geospatial data. These datatypes are fully integrated in the VoltDB durability features such as snapshots and command logging. However, tables containing geospatial columns are not currently supported for export or database replication (DR). Integration of these capabilities will be added in a future release.
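
For example, the following hypothetical schema and query illustrate the new datatypes and functions (the table, column names, and coordinates are illustrative only; see the VoltDB documentation for the complete list of geospatial functions):

CREATE TABLE cities (
   name VARCHAR(64),
   location GEOGRAPHY_POINT
);

SELECT name FROM cities
   WHERE DISTANCE(location, POINTFROMTEXT('POINT(-71.06 42.36)')) < 5000;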

17.4.

Kerberos support for VoltDB Management Center and JSON interface

VoltDB now supports Kerberos security for the VoltDB Management Center (VMC) and the JSON interface. To allow access from VMC and JSON, the server JAAS login configuration must include two additional entries for the Java Generic Security Service (JGSS): one for the VoltDB service principal and one for the server's HTTP service principal.
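
For illustration only, the two additional JGSS entries in the server's JAAS login configuration might look like the following sketch, where the keytab path and principal names are placeholders to be replaced with your own (consult the VoltDB documentation for the exact requirements):

com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true keyTab="/etc/krb5.keytab"
        principal="service/[email protected]";
};

com.sun.security.jgss.accept {
    com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true keyTab="/etc/krb5.keytab"
        principal="HTTP/[email protected]";
};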

17.5.

JMX support deprecated

The VoltDB Enterprise Manager, which was deprecated in V5.0, has been removed from the kit. JMX support, which was added for the Enterprise Manager, is now deprecated. See the chapter on database monitoring in the VoltDB Administrator's Guide for alternative ways to monitor your database.

17.6.

Additional improvements

In addition to the new features and capabilities described above, the following limitations in previous versions have been resolved:

  • It was possible in an XDCR environment for the replay of binary logs from one cluster to interfere with the local transactions on the other cluster, resulting in high latency. The application of binary logs has been tuned to reduce the impact on the local client workload.

  • There was a condition with database replication (DR) where, after one of the clusters stopped and recovered, it could fail with a ConcurrentModificationException. This condition was caused by the partitions used for DR changing while the cluster was down, leaving the partition mapping between the two clusters out of sync. This issue has been resolved.

  • Another rare condition related to database replication (DR) involved certain indexes with the columns in a particular order where, if one of the columns contained a null value and the record was updated or deleted, replication would stop. This issue has been resolved.

  • The sizing worksheet (available in the Schema tab of the VoltDB Management Center) was prone to overestimate the minimum size of large (greater than 63 bytes) VARCHAR and VARBINARY columns. This issue has been resolved.

  • Previously, the VoltDB planner would accept ambiguous column references, where a column name shared by two or more tables or aliases appeared without a prefix. This behavior has been corrected to comply with the SQL standard. References to ambiguous column names now must include a disambiguating prefix.

Known Limitations

The following are known limitations to the current release of VoltDB. Workarounds are suggested where applicable. However, it is important to note that these limitations are considered temporary and are likely to be corrected in future releases of the product.

1. Command Logging

1.1.

Changing the deployment configuration when recovering command logs can result in unexpected settings.

There is an issue where, if the command log contains schema changes (performed through interactive DDL statements, voltadmin update, or @UpdateApplicationCatalog), the previous deployment file settings are used when the command logs are recovered, even if an alternate deployment file is specified on the voltdb recover command line. Then, after the database has recovered, a subsequent schema update can result in the deployment settings specified on the command line taking effect.

Until this issue is resolved, the safest workaround to ensure the desired configuration is achieved is to perform the voltdb recover operation without modifying the current deployment file, then make deployment changes with the voltadmin update command after the database has started.

1.2.

Command logs can only be recovered to a cluster of the same size.

To ensure complete and accurate restoration of a database, recovery using command logs can only be performed to a cluster with the same number of unique partitions as the cluster that created the logs. If you restart and recover to the same cluster with the same deployment options, there is no problem. But if you change the deployment options for number of nodes, sites per host, or K-safety, recovery may not be possible.

For example, if a four node cluster is running with four sites per host and a K-safety value of one, the cluster has two copies of eight unique partitions (4 X 4 / 2). If one server fails, you cannot recover the command logs from the original cluster to a new cluster made up of the remaining three nodes, because the new cluster only has six unique partitions (3 X 4 / 2). You must either replace the failed server to reinstate the original hardware configuration or otherwise change the deployment options to match the number of unique partitions. (For example, increasing the sites per host to eight and K-safety to two.)
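
In general, the number of unique partitions is:

   unique partitions = (number of nodes * sites per host) / (K + 1)

Command logs can only be recovered to a cluster for which this calculation yields the same result as on the original cluster.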

1.3.

Do not use the subfolder name "segments" for the command log snapshot directory.

VoltDB reserves the subfolder "segments" under the command log directory for storing the actual command log files. Do not add, remove, or modify any files in this directory. In particular, do not set the command log snapshot directory to a subfolder "segments" of the command log directory, or else the server will hang on startup.

2. Database Replication

2.1.

Some DR data may not be delivered if master database nodes fail and rejoin in rapid succession.

Because DR data is buffered on the master database and then delivered asynchronously to the replica, there is always the danger that data does not reach the replica if a master node stops. This situation is mitigated in a K-safe environment by having all copies of a partition buffer DR data on the master cluster. Then, if a sending node goes down, another node on the master database can take over sending logs to the replica. However, if multiple nodes go down and rejoin in rapid succession, it is possible that some buffered DR data (from transactions when one or more nodes were down) could be lost when another node with the last copy of that buffer also goes down.

If this occurs and the replica recognizes that some binary logs are missing, DR stops and must be restarted.

To avoid this situation, especially when cycling through nodes for maintenance purposes, the key is to ensure that all buffered DR data is transmitted before stopping the next node in the cycle. You can do this using the @Statistics system procedure to make sure the last ACKed timestamp (using @Statistics DR on the master cluster) is later than the timestamp when the previous node completed its rejoin operation.
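
For example, you can check the DR statistics from the sqlcmd prompt on the master cluster (the final argument 0 requests cumulative rather than incremental statistics):

$ sqlcmd
1> exec @Statistics DR 0;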

2.2.

Avoid bulk data operations within a single transaction when using database replication

Bulk operations, such as large deletes, inserts, or updates, are possible within a single stored procedure. However, if the binary logs generated for DR are larger than 45MB, the operation will fail. To avoid this situation, it is best to break up large bulk operations into multiple, smaller transactions. To estimate the size of the binary log, multiply the size of a table row (based on the schema) by the number of affected rows. For deletes and inserts, this value should be under 45MB to avoid exceeding the DR binary log size limit. For updates, this number should be under 22.5MB (because the binary log contains both the starting and ending row values for updates).
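
For example, applying this rule of thumb to a table whose rows occupy roughly 1KB each (an illustrative figure), a single DELETE affecting 50,000 rows would generate an estimated 50MB binary log, exceeding the 45MB limit. Splitting the operation into two transactions of 25,000 rows each keeps each binary log at an estimated 25MB, safely under the limit.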

2.3.

Database replication ignores resource limits

There are a number of VoltDB features that help manage the database by constraining memory size and resource utilization. These features are extremely useful in avoiding crashes as a result of unexpected or unconstrained growth. However, these features could interfere with the normal operation of DR when passing data from one cluster to another, especially if the two clusters are different sizes. Therefore, as a general rule of thumb, DR overrides these features in favor of maintaining synchronization between the two clusters.

Specifically, DR ignores any resource monitor limits defined in the deployment file when applying binary logs on the consumer cluster. DR also ignores any partition row limits defined in the database schema when applying binary logs. This means, for example, if the replica database in passive DR has less memory or fewer unique partitions than the master, it is possible that applying binary logs of transactions that succeeded on the master could cause the replica to run out of memory. Note that these resource monitor and table row limits are still applied to any transactions that originate locally on the cluster (for example, transactions on the master database in passive DR).

2.4.

Different cluster sizes can require additional Java heap

Database Replication (DR) now supports replication across clusters of different sizes. However, if the replica cluster is smaller than the master cluster, it may require a significantly larger Java heap setting. Specifically, if the replica has fewer unique partitions than the master, each partition on the replica must manage the incoming binary logs from more partitions on the master, which places additional pressure on the Java heap.

A simple rule of thumb is that the worst case scenario could require an additional P * R * 20MB of space in the Java heap, where P is the number of sites per host on the replica server and R is the ratio of unique partitions on the master to partitions on the replica. For example, if the master cluster is 5 nodes with 10 sites per host and a K factor of 1 (i.e. 25 unique partitions) and the replica cluster is 3 nodes with 8 sites per host and a K factor of 1 (12 unique partitions), the Java heap on the replica cluster may require approximately 320MB of additional space:

Sites-per-host * master/replica ratio * 20MB
8 * 25/12 * 20 = ~ 320MB

An alternative is to reduce the size of the DR buffers on the master cluster by setting the DR_MEM_LIMIT Java property. For example, you can reduce the DR buffer size from the default 10MB to 5MB using the VOLTDB_OPTS environment variable before starting the master cluster.

$ export VOLTDB_OPTS="-DDR_MEM_LIMIT=5"

$ voltdb create

Changing the DR buffer limit on the master from 10MB to 5MB proportionally reduces the additional heap size needed. So in the previous example, the additional heap on the replica is reduced from 320MB to 160MB.

3. Cross Datacenter Replication (XDCR)

3.1.

Avoid replicating tables without a unique index.

Part of the replication process for XDCR is to verify that the record's starting and ending states match on both clusters, otherwise known as conflict resolution. To do that, XDCR must find the record first. Finding uniquely indexed records is efficient; finding non-unique records is not and can impact overall database performance.

To make you aware of possible performance impact, VoltDB issues a warning if you declare a table as a DR table and it does not have a unique index.

3.2.

When starting XDCR for the first time, only one database can contain data.

You cannot start XDCR if both databases already have data in the DR tables. Only one of the two participating databases can have preexisting data when DR starts for the first time.

3.3.

During the initial synchronization of existing data, the receiving database is paused.

When starting XDCR for the first time, where one database already contains data, a snapshot of that data is sent to the other database. While receiving and processing that snapshot, the receiving database is paused. That is, it is in read-only mode. Once the snapshot is complete and the two databases are synchronized, the receiving database is automatically unpaused, resuming normal read/write operations.

3.4.

A large number of multi-partition write transactions may interfere with the ability to restart XDCR after a cluster stops and recovers.

Normally, XDCR will automatically restart where it left off after one of the clusters stops and recovers from its command logs (using the voltdb recover command). However, if the workload is predominantly multi-partition write transactions, a failed cluster may not be able to restart XDCR after it recovers. In this case, XDCR must be restarted from scratch, using the content of one cluster as the source and recreating the other cluster (using the voltdb create --force command) with no content in its DR tables.

3.5.

A TRUNCATE TABLE transaction will be reported as a conflict with any other write operation to the same table.

When using XDCR, if the binary log from one cluster includes a TRUNCATE TABLE statement and the other cluster performs any write operation to the same table before the binary log is processed, the TRUNCATE TABLE operation will be reported as a conflict. Note that currently DELETE operations always supersede other actions, so the TRUNCATE TABLE will be executed on both clusters.

3.6.

Exceeding a LIMIT PARTITION ROWS constraint can generate multiple conflicts

It is possible to place a limit on the number of rows that any partition can hold for a specific table using the LIMIT PARTITION ROWS clause of the CREATE TABLE statement. When close to the limit, transactions on either or both clusters can exceed the limit simultaneously, resulting in a potentially large number of delete operations that then generate conflicts when the associated binary log reaches the other cluster.

3.7.

The VoltProcedure.getUniqueId method generates IDs unique within a cluster, not across clusters.

VoltDB provides a way to generate a deterministically unique ID within a stored procedure using the getUniqueId method. This method guarantees uniqueness within the current cluster. However, the method could generate the same ID on two distinct database clusters. Consequently, when using XDCR, you should combine the return values of VoltProcedure.getUniqueId with VoltProcedure.getClusterId, which returns the current cluster's unique DR ID, to generate IDs that are unique across all clusters in your environment.
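
For example, the following minimal sketch of a stored procedure follows this advice. The EVENTS table and its columns are hypothetical, invented for illustration:

import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;

public class InsertEvent extends VoltProcedure {

    // Assumes a hypothetical table:
    //   CREATE TABLE events (clusterid INTEGER NOT NULL,
    //                        eventid BIGINT NOT NULL,
    //                        payload VARCHAR(256));
    public final SQLStmt insertEvent = new SQLStmt(
        "INSERT INTO events (clusterid, eventid, payload) VALUES (?, ?, ?);");

    public long run(String payload) {
        // getUniqueId() is unique only within the current cluster;
        // pairing it with getClusterId() makes the two-column key
        // (clusterid, eventid) unique across all XDCR clusters.
        voltQueueSQL(insertEvent, getClusterId(), getUniqueId(), payload);
        voltExecuteSQL(true);
        return 0;
    }
}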

4. Export

4.1.

Synchronous export in Kafka can use up all available file descriptors and crash the database.

A bug in the Apache Kafka client can result in file descriptors being allocated but not released if the producer.type attribute is set to "sync" (which is the default). The consequence is that the system eventually runs out of file descriptors and the VoltDB server process will crash.

Until this bug is fixed, use of synchronous Kafka export is not recommended. The workaround is to set the Kafka producer.type attribute to "async" using the VoltDB export properties.

5. Import

5.1.

Data may be lost if a Kafka broker stops during import.

If, while Kafka import is enabled, the Kafka broker that VoltDB is connected to stops (for example, if the server crashes or is taken down for maintenance), some messages may be lost between Kafka and VoltDB. To ensure no data is lost, we recommend you disable VoltDB import before taking down the associated Kafka broker. You can then re-enable import after the Kafka broker comes back online.

5.2.

Kafka import may be reset, resulting in duplicate entries.

There is an issue with Kafka and the VoltDB Kafka importer where the current pointer in the Kafka queue gets reset to zero. The consequence of this event is that items in the queue get imported a second time resulting in duplicate entries. This issue will be addressed in an upcoming release. In the meantime, if you are using the Kafka importer, contact [email protected] for details.

6. SQL and Stored Procedures

6.1.

Comments containing unmatched single quotes in multi-line statements can produce unexpected results.

When entering a multi-line statement at the sqlcmd prompt, if a line ends in a comment (indicated by two hyphens) and the comment contains an unmatched single quote character, the following lines of input are not interpreted correctly. Specifically, the comment is incorrectly interpreted as continuing until the next single quote character or a closing semi-colon is read. This is most likely to happen when reading in a schema file containing comments. This issue is specific to the sqlcmd utility.

A fix for this condition is planned for an upcoming point release.
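
For example, in the following illustrative schema fragment, the apostrophe in the comment is read by sqlcmd as an unmatched single quote, causing the lines that follow to be misinterpreted:

CREATE TABLE contestant (
   name VARCHAR(50)   -- the contestant's name
);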

6.2.

Do not use assertions in VoltDB stored procedures.

VoltDB currently intercepts assertions as part of its handling of stored procedures. Attempts to use assertions in stored procedures for debugging or to find programmatic errors will not work as expected.

6.3.

The UPPER() and LOWER() functions currently convert ASCII characters only.

The UPPER() and LOWER() functions return a string converted to all uppercase or all lowercase letters, respectively. However, for the initial release, these functions only operate on characters in the ASCII character set. Other case-sensitive UTF-8 characters in the string are returned unchanged. Support for all case-sensitive UTF-8 characters will be included in a future release.
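
For example, applying UPPER() to a string containing a non-ASCII character leaves that character unchanged (the example value is illustrative):

UPPER('voilà')   -- returns 'VOILà', not 'VOILÀ'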

7. Client Interfaces

7.1.

Avoid using decimal datatypes with the C++ client interface on 32-bit platforms.

There is a problem with how the math library used to build the C++ client library handles large decimal values on 32-bit operating systems. As a result, the C++ library cannot serialize and pass Decimal datatypes reliably on these systems.

Note that the C++ client interface can send and receive Decimal values properly on 64-bit platforms.

Implementation Notes

The following notes provide details concerning how certain VoltDB features operate. The behavior is not considered incorrect. However, this information can be important when using specific components of the VoltDB product.

1. VoltDB Management Center

1.1.

Schema updates clear the stored procedure data table in the Management Center Monitor section

Any time the database schema or stored procedures are changed, the data table showing stored procedure statistics at the bottom of the Monitor section of the VoltDB Management Center is reset. As soon as new invocations of the stored procedures occur, the statistics table shows new values based on performance after the schema update. Until invocations occur, the procedure table is blank.

2. SQL

2.1.

You cannot partition a table on a column defined as ASSUMEUNIQUE.

The ASSUMEUNIQUE attribute is designed for identifying columns in partitioned tables where the column values are known to be unique but the table is not partitioned on that column, so VoltDB cannot verify complete uniqueness across the database. Using interactive DDL, you can create a table with a column marked as ASSUMEUNIQUE, but if you try to partition the table on the ASSUMEUNIQUE column, you receive an error. The solution is to drop and add the column using the UNIQUE attribute instead of ASSUMEUNIQUE.

2.2.

Adding or dropping column constraints (UNIQUE or ASSUMEUNIQUE) is not supported by the ALTER TABLE ALTER COLUMN statement.

You cannot add or remove a column constraint such as UNIQUE or ASSUMEUNIQUE using the ALTER TABLE ALTER COLUMN statement. Instead, to add or remove such constraints, you must first drop and then re-add the modified column. For example:

ALTER TABLE employee DROP COLUMN empID;
ALTER TABLE employee ADD COLUMN empID INTEGER UNIQUE;

2.3.

Do not use UPDATE to change the value of a partitioning column

For partitioned tables, the value of the column used to partition the table determines what partition the row belongs to. If you use UPDATE to change this value and the new value belongs in a different partition, the UPDATE request will fail and the stored procedure will be rolled back.

Updating the partition column value may or may not cause the record to be repartitioned (depending on the old and new values). However, since you cannot determine if the update will succeed or fail, you should not use UPDATE to change the value of partitioning columns.

The workaround, if you must change the value of the partitioning column, is to use both a DELETE and an INSERT statement to explicitly remove and then re-insert the desired rows.
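
For example, reusing the employee table from the earlier example and assuming it is partitioned on a hypothetical deptID column, the following statements move a row to a new partition value:

DELETE FROM employee WHERE empID = 1001;
INSERT INTO employee (empID, deptID) VALUES (1001, 5);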

2.4.

Certain SQL syntax errors result in the error message "user lacks privilege or object not found".

If you refer to a table or column name that does not exist, VoltDB reports that the "user lacks privilege or object not found". This can happen, for example, if you misspell a table or column name.

Another situation where this occurs is if you mistakenly use double quotation marks to enclose a string literal (such as WHERE ColumnA="True"). ANSI SQL requires single quotes for string literals and reserves double quotes for object names. In the preceding example, VoltDB interprets "True" as an object name, cannot resolve it, and issues the "user lacks privilege" error.

The workaround is, if you receive this error, to look for misspelled table or columns names or string literals delimited by double quotes in the offending SQL statement.
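
For example, rewriting the earlier clause with single quotes resolves the error:

WHERE ColumnA = 'True'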

2.5.

Ambiguous column references no longer allowed.

Starting with VoltDB 6.0, ambiguous column references are no longer allowed. For example, if both the Customer and Placedorder tables have a column named Address, the reference to Address in the following SELECT statement is ambiguous:

SELECT OrderNumber, Address FROM Customer, Placedorder
   . . .

Previously, VoltDB would select the column from the leftmost table (Customer, in this case). Ambiguous column references are no longer allowed; you must use table prefixes to disambiguate identical column names. For example, the column in the preceding statement could be specified as Customer.Address.

A corollary to this change is that a column declared in a USING clause can now be referenced using a prefix. For example, the following statement uses the prefix Customer.Address to disambiguate the column selection from a possibly similarly named column belonging to the Supplier table:

SELECT OrderNumber, Vendor, Customer.Address
   FROM Customer JOIN Placedorder USING (Address), Supplier
    . . .

3. Runtime

3.1.

File Descriptor Limits

VoltDB opens a file descriptor for every client connection to the database. In normal operation, this use of file descriptors is transparent to the user. However, if there are an inordinate number of concurrent client connections, or clients open and close many connections in rapid succession, it is possible for VoltDB to exceed the process limit on file descriptors. When this happens, new connections may be rejected or other disk-based activities (such as snapshotting) may be disrupted.

In environments where there are likely to be an extremely large number of connections, you should consider increasing the operating system's per-process limit on file descriptors.
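
For example, on Linux you can raise the limit in the shell before starting the database process (the value 40000 is illustrative; choose a limit appropriate for your expected number of connections):

$ ulimit -n 40000
$ voltdb start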