HSQL Guide
Table of Contents
Preface ........................................................................................................................................ xiii
Available formats for this document ......................................................................................... xiii
1. Running and Using HyperSQL ....................................................................................................... 1
Introduction ............................................................................................................................. 1
The HSQLDB Jar .................................................................................................................... 1
Running Database Access Tools ................................................................................................. 2
A HyperSQL Database .............................................................................................................. 2
In-Process Access to Database Catalogs ....................................................................................... 3
Server Modes .......................................................................................................................... 4
HyperSQL HSQL Server ................................................................................................... 4
HyperSQL HTTP Server ................................................................................................... 5
HyperSQL HTTP Servlet ................................................................................................... 5
Connecting to a Database Server ......................................................................................... 5
Security Considerations ..................................................................................................... 6
Using Multiple Databases .................................................................................................. 6
Accessing the Data ................................................................................................................... 6
Closing the Database ................................................................................................................ 7
Creating a New Database .......................................................................................................... 7
2. SQL Language ............................................................................................................................. 9
Standards Support .................................................................................................................... 9
SQL Data and Tables ............................................................................................................... 9
Temporary Tables ........................................................................................................... 10
Persistent Tables ............................................................................................................ 10
Short Guide to Data Types ....................................................................................................... 11
Data Types and Operations ...................................................................................................... 12
Numeric Types .............................................................................................................. 12
Boolean Type ................................................................................................................ 14
Character String Types .................................................................................................... 14
Binary String Types ........................................................................................................ 15
Bit String Types ............................................................................................................. 16
Lob Data ...................................................................................................................... 17
Storage and Handling of Java Objects ................................................................................ 17
Type Length, Precision and Scale ...................................................................................... 18
Datetime types ....................................................................................................................... 19
Interval Types ........................................................................................................................ 22
Arrays .................................................................................................................................. 25
Array Definition ............................................................................................................. 25
Array Reference ............................................................................................................. 27
Array Operations ............................................................................................................ 27
Indexes and Query Speed ......................................................................................................... 29
Query Processing and Optimisation ........................................................................................... 30
Indexes and Conditions ................................................................................................... 30
Indexes and Operations ................................................................................................... 31
Indexes and ORDER BY, OFFSET and LIMIT .................................................................... 31
3. Sessions and Transactions ............................................................................................................ 33
Overview .............................................................................................................................. 33
Session Attributes and Variables ............................................................................................... 33
Session Attributes ........................................................................................................... 34
Session Variables ........................................................................................................... 34
Session Tables ............................................................................................................... 34
Transactions and Concurrency Control ....................................................................................... 35
List of Tables
1. Available formats of this document
10.1. TO_CHAR, TO_DATE and TO_TIMESTAMP format elements
13.1. Memory Database URL
13.2. File Database URL
13.3. Resource Database URL
13.4. Server Database URL
13.5. User and Password
13.6. Closing old ResultSet when Statement is reused
13.7. Column Names in JDBC ResultSet
13.8. Empty batch in JDBC PreparedStatement
13.9. Creating New Database
13.10. Automatic Shutdown
13.11. Validity Check Property
13.12. SQL Keyword Use as Identifier
13.13. SQL Keyword Starting with the Underscore or Containing Dollar Characters
13.14. Reference to Columns Names
13.15. String Size Declaration
13.16. Type Enforcement in Comparison and Assignment
13.17. Foreign Key Triggered Data Change
13.18. Use of LOB for LONGVAR Types
13.19. Type of string literals in CASE WHEN
13.20. Concatenation with NULL
13.21. NULL in Multi-Column UNIQUE Constraints
13.22. Truncation or Rounding in Type Conversion
13.23. Decimal Scale of Division and AVG Values
13.24. Support for NaN values
13.25. Sort order of NULL values
13.26. Sort order of NULL values with DESC
13.27. String comparison with padding
13.28. Case Insensitive Varchar columns
13.29. Storage of Live Java Objects
13.30. DB2 Style Syntax
13.31. MSSQL Style Syntax
13.32. MySQL Style Syntax
13.33. Oracle Style Syntax
13.34. PostgreSQL Style Syntax
13.35. Default Table Type
13.36. Transaction Control Mode
13.37. Default Isolation Level for Sessions
13.38. Transaction Rollback in Deadlock
13.39. Time Zone and Interval Types
13.40. Opening Database as Read Only
13.41. Opening Database Without Modifying the Files
13.42. Huge database files and tables
13.43. Event Logging
13.44. SQL Logging
13.45. Temporary Result Rows in Memory
13.46. Rows Cached In Memory
13.47. Rows Cached In Memory
13.48. Size of Rows Cached in Memory
13.49. Size Scale of Disk Table Storage
List of Examples
1.1. Java code to connect to the local hsql Server .................................................................................. 5
1.2. Java code to connect to the local http Server ................................................................................... 5
1.3. Java code to connect to the local secure SSL hsql and http Servers ...................................................... 6
1.4. specifying a connection property to shutdown the database when the last connection is closed ................... 7
1.5. specifying a connection property to disallow creating a new database ................................................... 8
3.1. User-defined Session Variables ................................................................................................... 34
3.2. User-defined Temporary Session Tables ....................................................................................... 34
3.3. Setting Transaction Characteristics .............................................................................................. 40
3.4. Locking Tables ........................................................................................................................ 41
3.5. Rollback ................................................................................................................................. 42
3.6. Setting Session Characteristics .................................................................................................... 42
3.7. Setting Session Authorization ..................................................................................................... 43
3.8. Setting Session Time Zone ......................................................................................................... 43
4.1. inserting the next sequence value into a table row .......................................................................... 49
4.2. numbering returned rows of a SELECT in sequential order .............................................................. 50
4.3. using the last value of a sequence ............................................................................................... 50
4.4. Column values which satisfy a 2-column UNIQUE constraint ........................................................... 53
11.1. Using CACHED tables for the LOB schema .............................................................................. 212
11.2. Displaying DbBackup Syntax .................................................................................................. 214
11.3. Offline Backup Example ........................................................................................................ 215
11.4. Listing a Backup with DbBackup ............................................................................................. 215
11.5. Restoring a Backup with DbBackup ......................................................................................... 215
11.6. SQL Log Example ................................................................................................................ 222
11.7. Finding foreign key rows with no parents after a bulk import ........................................................ 232
14.1. Exporting certificate from the server's keystore ........................................................................... 266
14.2. Adding a certificate to the client keystore .................................................................................. 266
14.3. Specifying your own trust store to a JDBC client ........................................................................ 266
14.4. Getting a pem-style private key into a JKS keystore .................................................................... 267
14.5. Validating and Testing an ACL file .......................................................................................... 269
15.1. example sqltool.rc stanza ........................................................................................................ 279
16.1. Using CACHED tables for the LOB schema .............................................................................. 286
16.2. MainInvoker Example ............................................................................................................ 288
16.3. HyperSQL Snapshot Repository Definition ................................................................................ 294
16.4. Sample Snapshot Ivy Dependency ............................................................................................ 294
16.5. Sample Snapshot Maven Dependency ....................................................................................... 294
16.6. Sample Snapshot Gradle Dependency ....................................................................................... 294
16.7. Sample Snapshot ivy.xml loaded by Ivyxml plugin ...................................................................... 295
16.8. Sample Snapshot Groovy Dependency, using Grape .................................................................... 295
16.9. Sample Range Ivy Dependency ............................................................................................... 295
16.10. Sample Range Maven Dependency ......................................................................................... 295
16.11. Sample Range Gradle Dependency ......................................................................................... 295
16.12. Sample Range ivy.xml loaded by Ivyxml plugin ........................................................................ 296
16.13. Sample Range Groovy Dependency, using Grape ...................................................................... 296
B.1. Building the standard Hsqldb jar file with Ant ............................................................. 307
B.2. Example source code before CodeSwitcher is run ......................................................................... 308
B.3. CodeSwitcher command line invocation ..................................................................................... 308
B.4. Source code after CodeSwitcher processing ................................................................................. 308
Preface
HyperSQL DataBase (HSQLDB) is a modern relational database manager that conforms closely to the SQL:2011
Standard and JDBC 4 specifications. It supports all core features and many of the optional features of SQL:2008.
The first versions of HSQLDB were released in 2001. Version 2, first released in 2010, includes a complete rewrite
of most parts of the database engine.
This documentation covers HyperSQL version 2.3.4. This documentation is regularly improved and updated. The
latest, updated version can be found at http://hsqldb.org/doc/2.0/
If you notice any mistakes in this document, or if you have problems with the procedures themselves, please use the
HSQLDB support facilities which are listed at http://hsqldb.org/support
Table 1. Available formats of this document

  Format             your distro   at http://hsqldb.org/doc/2.0
  Chunked HTML       index.html    http://hsqldb.org/doc/2.0/guide/
  All-in-one HTML    guide.html    http://hsqldb.org/doc/2.0/guide/guide.html
  PDF                guide.pdf     http://hsqldb.org/doc/2.0/guide/guide.pdf
If you are reading this document now with a standalone PDF reader, the your distro links may not work.
Introduction
HyperSQL Database (HSQLDB) is a modern relational database system. Version 2.3 is the latest release of the all-new
version 2 code. Written from the ground up to follow the international ISO SQL:2011 standard, it supports the complete
set of the classic features, together with optional features such as stored procedures and triggers.
HyperSQL is used for development, testing and deployment of database applications.
Standards compliance is the most distinctive characteristic of HyperSQL. There are several other notable features.
HyperSQL can provide database access within the user's application process, within an application server, or as a
separate server process. HyperSQL can run entirely in memory, using dedicated fast memory structures as opposed
to a RAM disk. HyperSQL can use disk persistence in a flexible way, with reliable crash recovery. HyperSQL is the
only open-source relational database management system with a high-performance dedicated lob storage system,
suitable for gigabytes of lob data. It is also the only relational database that can create and access large comma-delimited
files as SQL tables. HyperSQL supports three live-switchable transaction control models, including fully
multithreaded MVCC, and is suitable for high-performance transaction processing applications. HyperSQL is also
suitable for business intelligence, ETL and other applications that process large data sets. HyperSQL has a wide range
of enterprise deployment options, such as XA transactions, connection pooling data sources and remote authentication.
New SQL syntax compatibility modes have been added to HyperSQL. These modes allow a high degree of
compatibility with several other database systems which use non-standard SQL syntax.
HyperSQL is written in the Java programming language and runs in a Java virtual machine (JVM). It supports the
JDBC interface for database access.
An ODBC driver is also available as a separate download.
This guide covers the database engine features, SQL syntax and different modes of operation. The Server,
JDBC interfaces, pooling and XA components are documented in the JavaDoc. Utilities such as SqlTool and
DatabaseManager are covered in a separate Utilities Guide.
Database Manager (GUI database access tool, with Swing and AWT versions)
The HyperSQL RDBMS and JDBC Driver provide the core functionality. DatabaseManagers are general-purpose
database access tools that can be used with any database engine that has a JDBC driver.
An additional jar, sqltool.jar, contains SqlTool, a general-purpose command-line database access tool that can also
be used with other database engines.
A HyperSQL Database
Each HyperSQL database is called a catalog. There are three types of catalog depending on how the data is stored.
test.properties
test.script
test.log
test.data
test.backup
test.lobs
The properties file contains a few settings about the database. The script file contains the definition of tables and other
database objects, plus the data for non-cached tables. The log file contains recent changes to the database. The data
file contains the data for cached tables and the backup file is a compressed backup of the last known consistent state
of the data file. All these files are essential and should never be deleted. For some catalogs, the test.data and
test.backup files will not be present. In addition to those files, a HyperSQL database may link to any formatted
text files, such as CSV lists, anywhere on the disk.
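The link between such a formatted text file and an SQL table is made with a TEXT table. A minimal sketch, in which the table name, columns and the file name mydata.csv are illustrative assumptions:

```sql
-- Sketch: exposing a comma-separated file as an SQL table.
CREATE TEXT TABLE customers (id INT PRIMARY KEY, name VARCHAR(40));

-- Attach the file; it is read and written using the default
-- comma field separator.
SET TABLE customers SOURCE 'mydata.csv';
```

Once the source is set, the file can be queried and updated like an ordinary table.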
While the "test" catalog is open, a test.log file is used to write the changes made to data. This file is removed at
a normal SHUTDOWN. Otherwise (with abnormal shutdown) this file is used at the next startup to redo the changes.
A test.lck file is also used to record the fact that the database is open. This is deleted at a normal SHUTDOWN.
Note
When the engine closes the database at a shutdown, it creates temporary files with the extension .new
which it then renames to those listed above. These files should not be deleted by the user. At the time of
the next startup, all such files will be renamed or deleted by the database engine. In some circumstances,
a test.data.xxx.old is created and deleted afterwards by the database engine. The user can delete
these test.data.xxx.old files.
A res: catalog consists of the files for a small, read-only database that can be stored inside a Java resource such as a
ZIP or JAR archive and distributed as part of a Java application program.
The database file path format can be specified using forward slashes in Windows hosts as well as Linux hosts. So
relative paths or paths that refer to the same directory on the same drive can be identical. For example if your database
directory in Linux is /opt/db/ containing a database testdb (with files named testdb.*),
then the database file path is /opt/db/testdb. If you create an identical directory structure on
the C: drive of a Windows host, you can use the same URL in both Windows and Linux:
Connection c = DriverManager.getConnection("jdbc:hsqldb:file:/opt/db/testdb", "SA", "");
When using relative paths, these paths will be taken relative to the directory in which the shell command to start the
Java Virtual Machine was executed. Refer to the Javadoc for JDBCConnection for more details.
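The working-directory rule above can be checked directly from Java; a minimal sketch, in which the database name testdb is an illustrative assumption:

```java
import java.nio.file.Paths;

public class RelativeDbPath {
    public static void main(String[] args) {
        // A relative path in a URL such as jdbc:hsqldb:file:testdb resolves
        // against the JVM's working directory (the user.dir system property).
        String relative = "testdb";
        System.out.println(System.getProperty("user.dir"));
        System.out.println(Paths.get(relative).toAbsolutePath());
    }
}
```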
Paths and database names for file databases are treated as case-sensitive when the database is created or the first
connection is made to the database. But if a second connection is made to an open database, using a path and name
that differs only in case, then the connection is made to the existing open database. This measure is necessary because
in Windows the two paths are equivalent.
A mem: database is specified by the mem: protocol. For mem: databases, the path is simply a name. Several mem:
databases can exist at the same time, distinguished by their names. In the example below, the database is called
"mymemdb":
Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:mymemdb", "SA", "");
A res: database is specified by the res: protocol. As it is a Java resource, the database path is a Java URL (similar to the
path to a class). In the example below, "resdb" is the root name of the database files, which exists in the directory "org/
my/path" within the classpath (probably in a Jar). A Java resource is stored in a compressed format and is decompressed
in memory when it is used. For this reason, a res: database should not contain large amounts of data and is always
read-only.
Connection c = DriverManager.getConnection("jdbc:hsqldb:res:org.my.path.resdb", "SA", "");
The first time an in-process connection is made to a database, some general data structures are initialised and a few helper
threads are started. After this, creation of connections and calls to JDBC methods of the connections execute as if they
are part of the Java application that is making the calls. When the SQL command "SHUTDOWN" is executed, the
global structures and helper threads for the database are destroyed.
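The shutdown itself is a plain SQL statement; both of the following forms are standard HyperSQL:

```sql
-- Close the catalog, writing out a consistent state of the data.
SHUTDOWN;

-- Alternative: also rewrite the .data file to its minimal size.
SHUTDOWN COMPACT;
```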
Note that only one Java process at a time can make in-process connections to a given file: database. However, if the
file: database has been made read-only, or if connections are made to a res: database, then it is possible to make in-process connections from multiple Java processes.
Server Modes
For most applications, in-process access is faster, as the data is not converted and sent over the network. The main
drawback is that it is not possible by default to connect to the database from outside your application. As a result
you cannot check the contents of the database with external tools such as Database Manager while your application
is running.
Server modes provide the maximum accessibility. The database engine runs in a JVM and opens one or more in-process catalogs. It listens for connections from programs on the same computer or other computers on the network.
It translates these connections into in-process connections to the databases.
Several different programs can connect to the server and retrieve or update information. Application programs (clients)
connect to the server using the HyperSQL JDBC driver. In most server modes, the server can serve an unlimited number
of databases that are specified at the time of running the server, or optionally, as a connection request is received.
A Server mode is also the preferred mode of running the database during development. It allows you to query the
database from a separate database access utility while your application is running.
There are three server modes, based on the protocol used for communications between the client and server. They are
briefly discussed below. More details on servers are provided in the HyperSQL Network Listeners (Servers) chapter.
The command line argument --help can be used to get a list of available arguments.
If the HyperSQL HTTP server is used, the protocol is http: and the URL will be different:
Note in the above connection URL, there is no mention of the database file, as this was specified when running the
server. Instead, the public name defined for dbname.0 is used. Also, see the HyperSQL Network Listeners (Servers)
chapter for the connection URL when there is more than one database per server instance.
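The two server protocols differ only in the URL prefix; a minimal sketch of composing the URLs from a host and the public alias defined via dbname.0 (the host "localhost" and alias "xdb" are illustrative assumptions):

```java
public class ServerUrls {
    // URL for the hsql: protocol server.
    static String hsqlUrl(String host, String alias) {
        return "jdbc:hsqldb:hsql://" + host + "/" + alias;
    }

    // URL for the http: protocol server.
    static String httpUrl(String host, String alias) {
        return "jdbc:hsqldb:http://" + host + "/" + alias;
    }

    public static void main(String[] args) {
        System.out.println(hsqlUrl("localhost", "xdb"));
        System.out.println(httpUrl("localhost", "xdb"));
        // With the HSQLDB jar on the classpath, a connection is then opened as:
        // Connection c = DriverManager.getConnection(hsqlUrl("localhost", "xdb"), "SA", "");
    }
}
```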
Security Considerations
When a HyperSQL server is run, network access should be adequately protected. Source IP addresses may be restricted
by use of our Access Control List feature, network filtering software, firewall software, or standalone firewalls. Only
secure passwords should be used; most importantly, the password for the default system user should be changed
from the default empty string. If you are purposefully providing data to the public, then the wide-open public network
connection should be used exclusively to access the public data via read-only accounts. (i.e., neither secure data nor
privileged accounts should use this connection). These considerations also apply to HyperSQL servers run with the
HTTP protocol.
HyperSQL provides two optional security mechanisms: the encrypted SSL protocol and Access Control Lists.
Both mechanisms can be specified when running the Server or WebServer. On the client, the URL to connect to an
SSL server is slightly different:
Example 1.3. Java code to connect to the local secure SSL hsql and http Servers
Connection c = DriverManager.getConnection("jdbc:hsqldb:hsqls://localhost/xdb", "SA", "");
Connection c = DriverManager.getConnection("jdbc:hsqldb:https://localhost/xdb", "SA", "");
The security features are discussed in detail in the HyperSQL Network Listeners (Servers) chapter.
Example 1.4. Specifying a connection property to shut down the database when the last connection is closed
Connection c = DriverManager.getConnection(
"jdbc:hsqldb:file:/opt/db/testdb;shutdown=true", "SA", "");
This feature is useful for running tests, where it may not be practical to shut down the database after each test. But it
is not recommended for application programs.
You can specify a connection property ifexists=true to allow connection to an existing database only and avoid creating
a new database. In this case, if the database does not exist, the getConnection() method will throw an exception.
A database has many optional properties, described in the System Management chapter. You can specify most of
these properties on the URL or in the connection properties for the first connection that creates the database. See the
Properties chapter.
Standards Support
HyperSQL 2.x supports the dialect of SQL defined by SQL standards 92, 1999, 2003, 2008 and 2011. This means
where a feature of the standard is supported, e.g. left outer join, the syntax is that specified by the standard text. Almost
all syntactic features of SQL-92 up to Advanced Level are supported, as well as SQL:2011 core and many optional
features of this standard. Work is in progress for a formal declaration of conformance.
At the time of this release, HyperSQL supports the widest range of SQL standard features among all open source
RDBMS.
Various chapters of this guide list the supported syntax. When writing or converting existing SQL DDL (Data
Definition Language), DML (Data Manipulation Language) or DQL (Data Query Language) statements for HSQLDB,
you should consult the supported syntax and modify the statements accordingly. Some statements written for older
versions may have to be modified.
Over 300 words are reserved by the standard and should not be used as table or column names. For example, the
word POSITION is reserved as it is a function defined by the Standard with a similar role to String.indexOf()
in Java. HyperSQL does not currently prevent you from using a reserved word if it does not support its use or can
distinguish it. For example, CUBE is a reserved word that is not currently supported by HyperSQL and is allowed as
a table or column name. You should avoid using such names as future versions of HyperSQL are likely to support the
reserved words and may reject your table definitions or queries. The full list of SQL reserved words is in the appendix
Lists of Keywords .
There are several user-defined properties to control the strict application of the SQL Standard in different areas.
If you have to use a reserved keyword as the name of a database object, you can enclose it in double quotes.
HyperSQL also supports enhancements with keywords and expressions that are not part of the SQL standard.
Expressions such as SELECT TOP 5 FROM .., SELECT LIMIT 0 10 FROM ... or DROP TABLE mytable
IF EXISTS are among such constructs.
Many print books cover SQL Standard syntax and can be consulted.
In HyperSQL version 2, all features of JDBC4 that apply to the capabilities of HSQLDB are fully supported. The
relevant JDBC classes are thoroughly documented with additional clarifications and HyperSQL specific comments.
See the JavaDoc for the org.hsqldb.jdbc.* classes.
SQL Language
Temporary Tables
Data in TEMPORARY tables is not saved and lasts only for the lifetime of the session. The contents of each TEMP
table are visible only from the session that is used to populate it.
HyperSQL supports two types of temporary tables.
The GLOBAL TEMPORARY type is a schema object. It is created with the CREATE GLOBAL TEMPORARY TABLE
statement. The definition of the table persists, and each session has access to the table. But each session sees its own
copy of the table, which is empty at the beginning of the session.
The LOCAL TEMPORARY type is not a schema object. It is created with the DECLARE LOCAL TEMPORARY TABLE
statement. The table definition lasts only for the duration of the session and is not persisted in the database. The table
can be declared in the middle of a transaction without committing the transaction. If a schema name is needed to
reference these tables in a given SQL statement, the pseudo schema names MODULE or SESSION can be used.
When the session commits, the contents of all temporary tables are cleared by default. If the table definition statement
includes ON COMMIT PRESERVE ROWS, then the contents are kept when a commit takes place.
The rows in temporary tables are stored in memory by default. If the hsqldb.result_max_memory_rows
property has been set or the SET SESSION RESULT MEMORY ROWS <row count> statement has been used, tables
with row counts above the setting are stored on disk.
Persistent Tables
HyperSQL supports the Standard definition of persistent base table, but defines three types according to the way the
data is stored. These are MEMORY tables, CACHED tables and TEXT tables.
Memory tables are the default type when the CREATE TABLE command is used. Their data is held entirely in memory
but any change to their structure or contents is written to the *.log and *.script files. The *.script file and
the *.log file are read the next time the database is opened, and the MEMORY tables are recreated with all their
contents. So unlike TEMPORARY tables, MEMORY tables are persistent. When the database is opened, all the data
for the memory tables is read and inserted. This process may take a long time if the database is larger than tens of
megabytes. When the database is shutdown, all the data is saved. This can also take a long time.
CACHED tables are created with the CREATE CACHED TABLE command. Only part of their data or indexes is
held in memory, allowing large tables that would otherwise take up to several hundred megabytes of memory. Another
advantage of cached tables is that the database engine takes less time to start up when a cached table is used for large
amounts of data. The disadvantage of cached tables is a reduction in speed. Do not use cached tables if your data
set is relatively small. In an application with some small tables and some large ones, it is better to use the default,
MEMORY mode for the small tables.
TEXT tables use a CSV (Comma Separated Value) or other delimited text file as the source of their data. You can
specify an existing CSV file, such as a dump from another database or program, as the source of a TEXT table.
Alternatively, you can specify an empty file to be filled with data by the database engine. TEXT tables are efficient in
memory usage as they cache only part of the text data and all of the indexes. The Text table data source can always
be reassigned to a different file if necessary. The commands needed to set up a TEXT table are detailed in the Text
Tables chapter.
With all-in-memory databases, both MEMORY table and CACHED table declarations are treated as declarations for
non-persistent memory tables. In the latest versions of HyperSQL, TEXT table declarations are allowed in all-in-memory databases.
The default type of tables resulting from future CREATE TABLE statements can be specified with the SQL command:
SET DATABASE DEFAULT TABLE TYPE { CACHED | MEMORY };
The type of an existing table can be changed with the SQL command:
SET TABLE <table name> TYPE { CACHED | MEMORY };
SQL statements access different types of tables uniformly. No change to statements is needed to access different types
of table.
in normal operations with disk-based databases. For specialised applications, use ARRAY with as many elements
as your memory allocation can support.
HyperSQL 2.3 has several compatibility modes which allow the type names that are used by other RDBMS to be
accepted and translated into the closest SQL Standard type. For example, the type TEXT, supported by MySQL and
PostgreSQL, is translated in these compatibility modes.
Numeric Types
TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point) are the supported
integral types. They correspond respectively to byte, short, int, long, BigDecimal and BigDecimal Java
types in the range of values that they can represent (NUMERIC and DECIMAL are equivalent). The type TINYINT
is an HSQLDB extension to the SQL Standard, while the others conform to the Standard definition. The SQL type
dictates the maximum and minimum values that can be held in a field of each type. For example the value range for
TINYINT is -128 to +127. The bit precision of TINYINT, SMALLINT, INTEGER and BIGINT is respectively 8, 16,
32 and 64. For NUMERIC and DECIMAL, decimal precision is used.
DECIMAL and NUMERIC with decimal fractions are mapped to java.math.BigDecimal and can have very
large numbers of digits. In HyperSQL the two types are equivalent. These types, together with integral types, are called
exact numeric types.
In HyperSQL, REAL, FLOAT, DOUBLE are equivalent and all mapped to double in Java. These types are defined
by the SQL Standard as approximate numeric types. The bit-precision of all these types is 64 bits.
The decimal precision and scale of NUMERIC and DECIMAL types can be optionally defined. For example,
DECIMAL(10,2) means maximum total number of digits is 10 and there are always 2 digits after the decimal point,
while DECIMAL(10) means 10 digits without a decimal point. The bit-precision of FLOAT can be defined but it is
ignored and the default bit-precision of 64 is used. The default precision of NUMERIC and DECIMAL (when not
defined) is 100.
Note: If a database has been set to ignore type precision limits with the SET DATABASE SQL SIZE FALSE command,
then a type definition of DECIMAL with no precision and scale is treated as DECIMAL(100,10). In normal operation,
it is treated as DECIMAL(100).
Integral Types
In expressions, TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point)
can be freely combined and no data narrowing takes place. The resulting value is of a type that can support all possible
values.
If the SELECT statement refers to a simple column or function, then the return type is the type corresponding to the
column or the return type of the function. For example:
CREATE TABLE t(a INTEGER, b BIGINT);
SELECT MAX(a), MAX(b) FROM t;
will return a ResultSet where the type of the first column is java.lang.Integer and the second column is
java.lang.Long. However,
SELECT MAX(a) + 1, MAX(b) + 1 FROM t;
will return java.lang.Long and BigDecimal values, generated as a result of uniform type promotion for all the
return values. Note that type promotion to BigDecimal ensures the correct value is returned if MAX(b) evaluates
to Long.MAX_VALUE.
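The value of this promotion can be seen in plain Java. The following sketch (illustrative Java, not HSQLDB code) shows why long arithmetic alone would return a wrong result for MAX(b) + 1 when MAX(b) is Long.MAX_VALUE:

```java
import java.math.BigDecimal;

public class Promotion {
    public static void main(String[] args) {
        // long arithmetic silently wraps around on overflow
        long overflowed = Long.MAX_VALUE + 1;  // wraps to Long.MIN_VALUE
        // BigDecimal keeps the exact mathematical value
        BigDecimal exact = BigDecimal.valueOf(Long.MAX_VALUE).add(BigDecimal.ONE);
        System.out.println(overflowed);  // -9223372036854775808
        System.out.println(exact);       // 9223372036854775808
    }
}
```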
There is no built-in limit on the size of intermediate integral values in expressions. As a result, you should check for
the type of the ResultSet column and choose an appropriate getXXXX() method to retrieve it. Alternatively, you
can use the getObject() method, then cast the result to java.lang.Number and use the intValue() or
longValue() methods on the result.
When the result of an expression is stored in a column of a database table, it has to fit in the target column, otherwise an
error is returned. For example, when 1234567890123456789012 / 12345678901234567890 is evaluated,
the result can be stored in any integral type column, even a TINYINT column, as it is a small value.
In SQL Statements, an integer literal is treated as INTEGER, unless its value does not fit. In this case it is treated as
BIGINT or DECIMAL, depending on the value.
Depending on the types of the operands, the result of the operation is returned in a JDBC ResultSet in any of
the related Java types: Integer, Long or BigDecimal. The ResultSet.getXXXX() methods can be used to
retrieve the values so long as the returned value can be represented by the resulting type. This type is deterministically
based on the query, not on the actual rows returned.
Other Numeric Types
In SQL statements, number literals with a decimal point are treated as DECIMAL unless they are written with an
exponent. Thus 0.2 is considered a DECIMAL value but 0.2E0 is considered a DOUBLE value.
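The difference between the two literal forms can be mirrored with java.math.BigDecimal (an illustrative sketch, not HSQLDB code): constructing from the string "0.2" keeps the exact decimal value, while constructing from the double 0.2 exposes its binary approximation:

```java
import java.math.BigDecimal;

public class Literals {
    public static void main(String[] args) {
        BigDecimal decimalLiteral = new BigDecimal("0.2");  // exact, like the SQL literal 0.2
        BigDecimal fromDouble     = new BigDecimal(0.2);    // like 0.2E0: the nearest binary double
        System.out.println(decimalLiteral);  // 0.2
        System.out.println(fromDouble);      // a long binary approximation near 0.2
    }
}
```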
When an approximate numeric type, REAL, FLOAT or DOUBLE (all synonymous) is part of an expression involving
different numeric types, the type of the result is DOUBLE. DECIMAL values can be converted to DOUBLE unless
they are beyond the Double.MIN_VALUE - Double.MAX_VALUE range. For example, A * B, A / B, A + B,
etc. will return a DOUBLE value if either A or B is a DOUBLE.
Otherwise, when no DOUBLE value exists, if a DECIMAL or NUMERIC value is part of an expression, the type of the
result is DECIMAL or NUMERIC. Similar to integral values, when the result of an expression is assigned to a table
column, the value has to fit in the target column, otherwise an error is returned. This means a small, 4-digit value of
DECIMAL type can be assigned to a column of SMALLINT or INTEGER, but a value with 15 digits cannot.
When a DECIMAL value is multiplied by a DECIMAL or integral type, the resulting scale is the sum of the scales of the
two terms. When they are divided, the result is a value with a scale (number of digits to the right of the decimal point)
equal to the larger of the scales of the two terms. The precision for both operations is calculated (usually increased)
to allow all possible results.
The distinction between DOUBLE and DECIMAL is important when a division takes place. For example, 10.0/8.0
(DECIMAL) equals 1.2 but 10.0E0/8.0E0 (DOUBLE) equals 1.25. Without division operations, DECIMAL
values represent exact arithmetic.
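The scale rules above can be reproduced with java.math.BigDecimal (an illustrative sketch; RoundingMode.DOWN is an assumption chosen to match the 1.2 result quoted in the text, not a documented HSQLDB setting):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Scales {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("10.0");  // scale 1
        BigDecimal b = new BigDecimal("8.0");   // scale 1

        // multiplication: the result scale is the sum of the operand scales
        System.out.println(a.multiply(b));        // 80.00 (scale 2)

        // division at the larger of the two scales, truncating extra digits
        System.out.println(a.divide(b, 1, RoundingMode.DOWN));  // 1.2

        // DOUBLE division keeps the full result
        System.out.println(10.0 / 8.0);           // 1.25
    }
}
```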
REAL, FLOAT and DOUBLE values are all stored in the database as java.lang.Double objects. Special
values such as NaN and +-Infinity are also stored and supported. These values can be submitted to the database
via JDBC PreparedStatement methods and are returned in ResultSet objects. In order to allow division
by zero of DOUBLE values in SQL statements (which returns NaN or +-Infinity) you should set the property
hsqldb.double_nan as false (SET DATABASE SQL DOUBLE NAN FALSE). The double values can be retrieved
from a ResultSet in the required type so long as they can be represented. For setting the values, when
PreparedStatement.setDouble() or setFloat() is used, the value is treated as a DOUBLE automatically.
In short,
<numeric type> ::= <exact numeric type> | <approximate numeric type>
<exact numeric type> ::= NUMERIC [ <left paren> <precision> [ <comma> <scale> ]
<right paren> ] | { DECIMAL | DEC } [ <left paren> <precision> [ <comma> <scale> ]
<right paren> ] | SMALLINT | INTEGER | INT | BIGINT
<approximate numeric type> ::= FLOAT [ <left paren> <precision> <right paren> ]
| REAL | DOUBLE PRECISION
<precision> ::= <unsigned integer>
<scale> ::= <unsigned integer>
Boolean Type
The BOOLEAN type conforms to the SQL Standard and represents the values TRUE, FALSE and UNKNOWN. This
type of column can be initialised with Java boolean values, or with NULL for the UNKNOWN value.
The three-valued logic is sometimes misunderstood. For example, x IN (1, 2, NULL) does not return TRUE if x is NULL.
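The rule can be sketched in plain Java, using a nullable Boolean to stand for UNKNOWN (illustrative code only; the in() helper is hypothetical, not part of HSQLDB):

```java
import java.util.Arrays;
import java.util.List;

public class ThreeValued {
    // Sketch of SQL's  x IN (list)  under three-valued logic.
    // A Java null result stands for the SQL UNKNOWN truth value.
    static Boolean in(Integer x, List<Integer> list) {
        if (x == null) return null;                     // NULL IN (...) is UNKNOWN
        boolean sawNull = false;
        for (Integer e : list) {
            if (e == null) sawNull = true;
            else if (e.equals(x)) return Boolean.TRUE;  // a definite match
        }
        return sawNull ? null : Boolean.FALSE;          // no match plus a NULL element: UNKNOWN
    }

    public static void main(String[] args) {
        System.out.println(in(1, Arrays.asList(1, 2, null)));     // true
        System.out.println(in(3, Arrays.asList(1, 2, null)));     // null (UNKNOWN)
        System.out.println(in(null, Arrays.asList(1, 2, null)));  // null (UNKNOWN)
    }
}
```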
In previous versions of HyperSQL, BIT was simply an alias for BOOLEAN. In version 2, BIT is a single-bit bit map.
<boolean type> ::= BOOLEAN
The SQL Standard does not support type conversion to BOOLEAN apart from character strings that consist of boolean
literals. Because the BOOLEAN type is relatively new to the Standard, several database products used other types to
represent boolean values. For improved compatibility, HyperSQL allows some type conversions to boolean.
Values of BIT and BIT VARYING types with length 1 can be converted to BOOLEAN. If the bit is set, the result of
conversion is the TRUE value, otherwise it is FALSE.
Values of TINYINT, SMALLINT, INTEGER and BIGINT types can be converted to BOOLEAN. If the value is zero,
the result is the FALSE value, otherwise it is TRUE.
The SQL Standard behaviour of the CHARACTER type is a remnant of legacy systems in which character strings are
padded with spaces to fill a fixed width. These spaces are sometimes significant while in other cases they are silently
discarded. It would be best to avoid the CHARACTER type altogether. With the rest of the types, the strings are not
padded when assigned to columns or variables of the given type. The trailing spaces are still considered discardable
for all character types. Therefore if a string with trailing spaces is too long to assign to a column or variable of a
given length, the spaces beyond the type length are discarded and the assignment succeeds (provided all the characters
beyond the type length are spaces).
The VARCHAR and CLOB types have length limits, but the strings are not padded by the system. Note that if you
use a large length for a VARCHAR or CLOB type, no extra space is used in the database. The space used for each
stored item is proportional to its actual length.
If CHARACTER is used without specifying the length, the length defaults to 1. For the CLOB type, the length limit
can be defined in units of kilobyte (K, 1024), megabyte (M, 1024 * 1024) or gigabyte (G, 1024 * 1024 * 1024), using
the <multiplier>. If CLOB is used without specifying the length, the length defaults to 1GB.
<character string type> ::= { CHARACTER | CHAR } [ <left paren> <character
length> <right paren> ] | { CHARACTER VARYING | CHAR VARYING | VARCHAR } <left
paren> <character length> <right paren> | LONGVARCHAR [ <left paren> <character
length> <right paren> ] | <character large object type>
<character large object type> ::= { CHARACTER LARGE OBJECT | CHAR LARGE OBJECT
| CLOB } [ <left paren> <character large object length> <right paren> ]
<character length> ::= <unsigned integer> [ <char length units> ]
<large object length> ::= <length> [ <multiplier> ] | <large object length token>
<character large object length> ::= <large object length> [ <char length units> ]
<large object length token> ::= <digit>... <multiplier>
<multiplier> ::= K | M | G
<char length units> ::= CHARACTERS | OCTETS
Each character type has a collation. This is either a default collation or stated explicitly with the COLLATE clause.
Collations are discussed in the Schemas and Database Objects chapter.
CHAR(10)
CHARACTER(10)
VARCHAR(2)
CHAR VARYING(2)
CLOB(1000)
CLOB(30K)
CHARACTER LARGE OBJECT(1M)
LONGVARCHAR
The BINARY type represents a fixed-width string. Each shorter string is padded with zeros to fill the fixed width.
Similar to the CHARACTER type, the trailing zeros in the BINARY string are simply discarded in some operations.
For the same reason, it is best to avoid this particular type and use VARBINARY instead.
When two binary values are compared, if one is of BINARY type, then zero padding is performed to extend the length
of the shorter string to the longer one before comparison. No padding is performed with other binary types. If the bytes
compare equal to the end of the shorter value, then the longer string is considered larger than the shorter string.
If BINARY is used without specifying the length, the length defaults to 1. For the BLOB type, the length limit can
be defined in units of kilobyte (K, 1024), megabyte (M, 1024 * 1024) or gigabyte (G, 1024 * 1024 * 1024), using the
<multiplier>. If BLOB is used without specifying the length, the length defaults to 1GB.
The UUID type represents a UUID string. The type is similar to BINARY(16) but with the extra
enforcement that disallows assigning, casting or comparing with shorter or longer strings. Strings such as
'24ff1824-01e8-4dac-8eb3-3fee32ad2b9c' or '24ff182401e84dac8eb33fee32ad2b9c' are allowed. When a value of the
UUID type is converted to a CHARACTER type, the hyphens are inserted in the required positions.
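A sketch of the hyphen insertion in plain Java (the hyphenate helper is hypothetical, shown only to illustrate the canonical 8-4-4-4-12 layout):

```java
import java.util.UUID;

public class UuidHyphens {
    // Insert hyphens into a 32-digit hex string at the canonical 8-4-4-4-12 positions
    static String hyphenate(String hex) {
        return hex.substring(0, 8) + "-" + hex.substring(8, 12) + "-"
             + hex.substring(12, 16) + "-" + hex.substring(16, 20) + "-"
             + hex.substring(20);
    }

    public static void main(String[] args) {
        String compact = "24ff182401e84dac8eb33fee32ad2b9c";
        String canonical = hyphenate(compact);
        System.out.println(canonical);  // 24ff1824-01e8-4dac-8eb3-3fee32ad2b9c
        // java.util.UUID accepts only the hyphenated form
        UUID u = UUID.fromString(canonical);
        System.out.println(u);          // 24ff1824-01e8-4dac-8eb3-3fee32ad2b9c
    }
}
```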
<binary string type> ::= BINARY [ <left paren> <length> <right paren> ] | { BINARY
VARYING | VARBINARY } <left paren> <length> <right paren> | LONGVARBINARY [ <left
paren> <length> <right paren> ] | UUID | <binary large object string type>
<binary large object string type> ::= { BINARY LARGE OBJECT | BLOB } [ <left
paren> <large object length> <right paren> ]
<length> ::= <unsigned integer>
BINARY(10)
VARBINARY(2)
BINARY VARYING(2)
BLOB(1000)
BLOB(30G)
BINARY LARGE OBJECT(1M)
LONGVARBINARY
BIT
BIT(10)
BIT VARYING(2)
Lob Data
BLOB and CLOB are lob types. These types are used for very long strings that do not necessarily fit in memory. Small
lobs that fit in memory can be accessed just like BINARY or VARCHAR column data. But lobs are usually much
larger and therefore accessed with special JDBC methods.
To insert a lob into a table, or to update a column of lob type with a new lob, you can use the setBinaryStream()
and setCharacterStream() methods of JDBC java.sql.PreparedStatement. These are very efficient
methods for long lobs. Other methods are also supported. If the data for the BLOB or CLOB is already a memory object,
you can use the setBytes() or setString() methods, which are efficient for memory data. Another method
is to obtain a lob with the getBlob() and getClob() methods of java.sql.Connection, populate its data,
then use the setBlob() or setClob() methods of PreparedStatement. Yet another method is to create
instances of org.hsqldb.jdbc.JDBCBlobFile and org.hsqldb.jdbc.JDBCClobFile and construct a
large lob for use with setBlob() and setClob() methods.
A lob is retrieved from a ResultSet with the getBlob() or getClob() method. The streaming methods of the lob
objects are then used to access the data. HyperSQL also allows efficient access to chunks of lobs with getBytes()
or getString() methods. Furthermore, parts of a BLOB or CLOB already stored in a table can be modified.
An updatable ResultSet is used to select the row from the table. The getBlob() or getClob() methods of
ResultSet are used to access the lob as a java.sql.Blob or java.sql.Clob object. The setBytes()
and setString() methods of these objects can be used to modify the lob. Finally the updateRow() method of
the ResultSet is used to update the lob in the row. Note these modifications are not allowed with compressed or
encrypted lobs.
Lobs are logically stored in columns of tables. Their physical storage is a separate *.lobs file. This file is created as
soon as a BLOB or CLOB is inserted into the database. The file will grow as new lobs are inserted into the database.
In version 2, the *.lobs file is never deleted even if all lobs are deleted from the database. In this case you can delete
the *.lobs file after a SHUTDOWN. When a CHECKPOINT happens, the space used for deleted lobs is freed and
is reused for future lobs. By default, clobs are stored without compression. You can use a database setting to enable
compression of clobs. This can significantly reduce the storage size of clobs.
Java Objects can simply be stored internally and no operations can be performed on them other than assignment
between columns of type OTHER or checking for NULL. Tests such as WHERE object1 = object2 do not
mean what you might expect, as any non-null object would satisfy such a test. But WHERE object1 IS NOT
NULL is perfectly acceptable.
The engine does not allow normal column values to be assigned to Java Object columns (for example, assigning an
INTEGER or STRING to such a column with an SQL statement such as UPDATE mytable SET objectcol
= intcol WHERE ...).
<java object type> ::= OTHER
The default method of storage is used when the objects and their state need to be saved and retrieved in the future.
This method is also used when memory resources are limited and collections of objects are stored and retrieved only
when needed.
The Live Object option uses the database table as a collection of objects. This allows storing some attributes of the
objects in the same table alongside the object itself and fast search and retrieval of objects on their attributes. For
example, when many thousands of live objects contain details of films, the film title and the director can be stored in
the table and searches can be performed for films on these attributes:
CREATE TABLE movies (director VARCHAR(30), title VARCHAR(40), obj OTHER)
SELECT obj FROM movies WHERE director LIKE 'Luc%'
In any case, at least one attribute of the object should be stored to allow efficient retrieval of the objects from both
Live Object and Serialized storage. Often an id number is used as the attribute.
Explicit CAST of a value to a CHARACTER or VARCHAR type will result in forced truncation or padding. So a test
such as CAST (mycol AS VARCHAR(2)) = 'xy' will find the values beginning with 'xy'. This is the equivalent
of SUBSTRING(mycol FROM 1 FOR 2)= 'xy'.
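The equivalence can be checked with a one-line Java analogue (illustrative only; SQL CAST truncation is modeled here as a string prefix):

```java
public class Truncation {
    public static void main(String[] args) {
        String mycol = "xyz";
        // CAST (mycol AS VARCHAR(2)) keeps at most the first 2 characters
        String truncated = mycol.length() > 2 ? mycol.substring(0, 2) : mycol;
        System.out.println(truncated.equals("xy"));  // true, like SUBSTRING(mycol FROM 1 FOR 2) = 'xy'
    }
}
```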
For all numeric types, the rules of explicit cast and implicit conversion are the same. If cast or conversion causes any
digits to be lost from the fractional part, it can take place. If the non-fractional part of the value cannot be represented
in the new type, cast or conversion cannot take place and will result in a data exception.
There are special rules for DATE, TIME, TIMESTAMP and INTERVAL casts and conversions.
Datetime types
HSQLDB fully supports datetime and interval types and operations, including all relevant optional features, as
specified by the SQL Standard since SQL-92. The two groups of types are complementary.
The DATE type represents a calendar date with YEAR, MONTH and DAY fields.
The TIME type represents time of day with HOUR, MINUTE and SECOND fields, plus an optional SECOND
FRACTION field.
The TIMESTAMP type represents the combination of DATE and TIME types.
TIME and TIMESTAMP types can include WITH TIME ZONE or WITHOUT TIME ZONE (the default) qualifiers.
They can have fractional second parts. For example, TIME(6) has six fractional digits for the second field.
If fractional second precision is not specified, it defaults to 0 for TIME and to 6 for TIMESTAMP.
<datetime type> ::= DATE | TIME [ <left paren> <time precision> <right paren> ]
[ <with or without time zone> ] | TIMESTAMP [ <left paren> <timestamp precision>
<right paren> ] [ <with or without time zone> ]
<with or without time zone> ::= WITH TIME ZONE | WITHOUT TIME ZONE
<time precision> ::= <time fractional seconds precision>
<timestamp precision> ::= <time fractional seconds precision>
<time fractional seconds precision> ::= <unsigned integer>
DATE
TIME(6)
TIMESTAMP(2) WITH TIME ZONE
Examples of the string literals used to represent date time values, some with time zone, some without, are below:
DATE '2008-08-22'
TIMESTAMP '2008-08-08 20:08:08'
TIMESTAMP '2008-08-08 20:08:08+8:00' /* Beijing */
TIME '20:08:08.034900'
TIME '20:08:08.034900-8:00' /* US Pacific */
Time Zone
DATE values do not take time zones. For example, the United Nations designates 5 June as World Environment Day,
which was observed on DATE '2008-06-05' in different time zones.
TIME and TIMESTAMP values without time zone usually have a context that indicates some local time zone. For
example, a database for college course timetables usually stores class dates and times without time zones. This works
because the location of the college is fixed and the time zone displacement is the same for all the values. Even when the
events take place in different time zones, for example international flight times, it is possible to store all the datetime
information as references to a single time zone, usually GMT. For some databases it may be useful to store the time
zone displacement together with each datetime value. SQL's TIME WITH TIME ZONE and TIMESTAMP WITH
TIME ZONE values include a time zone displacement value.
The time zone displacement is of the type INTERVAL HOUR TO MINUTE. This data type is described in the next
section. The legal values are between '-14:00' and '+14:00'.
Operations on Datetime Types
The expression <datetime expression> AT TIME ZONE <time displacement> evaluates to a datetime
value representing exactly the same point of time in the specified <time displacement>. The expression AT
LOCAL is equivalent to AT TIME ZONE <local time displacement>. If AT TIME ZONE is used with
a datetime operand of type WITHOUT TIME ZONE, the operand is first converted to a value of type WITH TIME
ZONE at the session's time displacement, then the specified time zone displacement is set for the value. Therefore, in
these cases, the final value depends on the time zone of the session in which the statement was used.
AT TIME ZONE modifies the field values of the datetime operand. This is done by the following procedure:
1. determine the corresponding datetime at UTC.
2. find the datetime value at the given time zone that corresponds with the UTC value from step 1.
Example a:
TIME '12:00:00' AT TIME ZONE INTERVAL '1:00' HOUR TO MINUTE
If the session's time zone displacement is '-8:00', then in step 1, TIME '12:00:00' is converted to UTC, which is TIME
'20:00:00+0:00'. In step 2, this value is expressed as TIME '21:00:00+1:00'.
Example b:
TIME '12:00:00-5:00' AT TIME ZONE INTERVAL '1:00' HOUR TO MINUTE
Because the operand has a time zone, the result is independent of the session time zone displacement. Step 1 results
in TIME '17:00:00+0:00', and step 2 results in TIME '18:00:00+1:00'.
Note that the operand is not limited to datetime literals used in these examples. Any valid expression that evaluates
to a datetime value can be the operand.
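The two steps can be mirrored with java.time.OffsetTime (an illustrative sketch, not HSQLDB internals), using example b from the text:

```java
import java.time.OffsetTime;
import java.time.ZoneOffset;

public class AtTimeZone {
    public static void main(String[] args) {
        // TIME '12:00:00-5:00' AT TIME ZONE INTERVAL '1:00' HOUR TO MINUTE
        OffsetTime operand = OffsetTime.of(12, 0, 0, 0, ZoneOffset.ofHours(-5));
        // step 1: the same instant expressed at UTC
        OffsetTime utc = operand.withOffsetSameInstant(ZoneOffset.UTC);
        // step 2: the same instant expressed at the target displacement
        OffsetTime result = operand.withOffsetSameInstant(ZoneOffset.ofHours(1));
        System.out.println(utc);     // 17:00Z
        System.out.println(result);  // 18:00+01:00
    }
}
```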
Type Conversion
CAST is used for all other conversions. Examples:
CAST (<value> AS TIME WITHOUT TIME ZONE)
CAST (<value> AS TIME WITH TIME ZONE)
In the first example, if <value> has a time zone component, it is simply dropped. For example, TIME '12:00:00-5:00'
is converted to TIME '12:00:00'.
In the second example, if <value> has no time zone component, the current time zone displacement of the session is
added. For example TIME '12:00:00' is converted to TIME '12:00:00-8:00' when the session time zone displacement
is '-8:00'.
Conversion between DATE and TIMESTAMP is performed by removing the TIME component of a TIMESTAMP
value or by setting the hour, minute and second fields to zero. TIMESTAMP '2008-08-08 20:08:08+8:00' becomes
DATE '2008-08-08', while DATE '2008-08-22' becomes TIMESTAMP '2008-08-22 00:00:00'.
Conversion between TIME and TIMESTAMP is performed by removing the DATE field values of a TIMESTAMP
value or by appending the fields of the TIME value to the fields of the current session date value.
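Both conversions can be mirrored with java.time (illustrative Java, not HSQLDB code):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class DateTimestamp {
    public static void main(String[] args) {
        // TIMESTAMP -> DATE: the time component is removed
        LocalDateTime ts = LocalDateTime.parse("2008-08-08T20:08:08");
        System.out.println(ts.toLocalDate());  // 2008-08-08

        // DATE -> TIMESTAMP: hour, minute and second fields are set to zero
        LocalDate d = LocalDate.parse("2008-08-22");
        System.out.println(d.atStartOfDay());  // 2008-08-22T00:00
    }
}
```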
Assignment
When a value is assigned to a datetime target, e.g., a value is used to update a row of a table, the type of the value must
be the same as the target, but the WITH TIME ZONE or WITHOUT TIME ZONE characteristics can be different. If
the types are not the same, an explicit CAST must be used to convert the value into the target type.
Comparison
When values WITH TIME ZONE are compared, they are converted to UTC values before comparison. If a value
WITH TIME ZONE is compared to another WITHOUT TIME ZONE, then the WITH TIME ZONE value is converted
to AT LOCAL, then converted to WITHOUT TIME ZONE before comparison.
It is not recommended to design applications that rely on comparisons and conversions between TIME values WITH
TIME ZONE. The conversions may involve normalisation of the time value, resulting in unexpected results. For
example, the expression: BETWEEN(TIME '12:00:00-8:00', TIME '22:00:00-8:00') is converted to BETWEEN(TIME
'20:00:00+0:00', TIME '06:00:00+0:00') when it is evaluated in the UTC zone, which is always FALSE.
Functions
Several functions return the current session timestamp in different datetime types:
CURRENT_DATE returns a DATE value
CURRENT_TIME returns a TIME WITH TIME ZONE value
CURRENT_TIMESTAMP returns a TIMESTAMP WITH TIME ZONE value
LOCALTIME returns a TIME value
LOCALTIMESTAMP returns a TIMESTAMP value
HyperSQL supports a very extensive range of functions for conversion, extraction and manipulation of DATE and
TIMESTAMP values. See the Built In Functions chapter.
Session Time Zone Displacement
When an SQL session is started (with a JDBC connection) the local time zone of the client JVM (including any seasonal
time adjustments such as daylight saving time) is used as the session time zone displacement. Note that the SQL session
time displacement is not changed when a seasonal time adjustment takes place while the session is open. To change
the SQL session time zone displacement use the following commands:
SET TIME ZONE <time displacement>
SET TIME ZONE LOCAL
The first command sets the displacement to the given value. The second command restores the original, real time zone
displacement of the session.
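For example (the displacement value shown is illustrative):
SET TIME ZONE INTERVAL '-8:00' HOUR TO MINUTE
SET TIME ZONE LOCAL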
Datetime Values and Java
When datetime values are sent to the database using the PreparedStatement or CallableStatement
interfaces, the Java object is converted to the type of the prepared or callable statement parameter. This type may
be DATE, TIME, or TIMESTAMP (with or without time zone). The time zone displacement is the time zone of the
JDBC session.
When datetime values are retrieved from the database using the ResultSet interface, there are two representations.
The getString() methods of the ResultSet interface, return an exact representation of the value in the SQL
type as it is stored in the database. This includes the correct number of digits for the fractional second field, and for
values with time zone displacement, the time zone displacement. Therefore if TIME '12:00:00' is stored in the database,
all users in different time zones will get '12:00:00' when they retrieve the value as a string. The getTime() and
getTimestamp() methods of the ResultSet interface return Java objects that are corrected for the session
time zone. The UTC millisecond value contained in the java.sql.Time or java.sql.Timestamp objects is
adjusted to the time zone of the session; therefore the toString() method of these objects returns the same value
in different time zones.
If you want to store and retrieve UTC values that are independent of any session's time zone, you can use a
TIMESTAMP WITH TIME ZONE column. The setTime(...) and setTimestamp(...) methods of the PreparedStatement
interface which have a Calendar parameter can be used to assign the values. The time zone of the given Calendar
argument is used as the time zone. Conversely, the getTime(...) and getTimestamp(...) methods of the ResultSet
interface which have a Calendar parameter can be used with a Calendar argument to retrieve the values.
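A sketch of this technique follows; the table and column names are illustrative, and connection is assumed to be an open java.sql.Connection:
// Use a Calendar fixed to UTC so the stored and retrieved values are
// interpreted in UTC, independent of the session time zone.
Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
PreparedStatement ps = connection.prepareStatement(
    "UPDATE events SET created = ? WHERE id = ?");
ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()), utc);
ps.setInt(2, 1000);
ps.executeUpdate();

ResultSet rs = connection.createStatement().executeQuery(
    "SELECT created FROM events WHERE id = 1000");
rs.next();
Timestamp created = rs.getTimestamp(1, utc); // value interpreted in UTC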
JDBC has an unfortunate limitation and does not include type codes for SQL datetime types that have a TIME
ZONE property. Therefore, for compatibility with database tools that are limited to the JDBC type codes,
HyperSQL reports these types by default as datetime types without TIME ZONE. You can use the URL property
hsqldb.translate_dti_types=false to override the default behaviour.
Non-Standard Extensions
HyperSQL version 2.3.0 supports some extensions to the SQL standard treatment of datetime and interval types. For
example, the Standard expression to add a number of days to a date has an explicit INTERVAL value, but HSQLDB
also allows an integer to be used without specifying DAY. Examples of some Standard expressions and their non-standard alternatives are given below:
-- standard forms
CURRENT_DATE + '2' DAY
SELECT (LOCALTIMESTAMP - atimestampcolumn) DAY TO SECOND FROM atable
-- non-standard forms
CURRENT_DATE + 2
SELECT LOCALTIMESTAMP - atimestampcolumn FROM atable
It is recommended to use the SQL Standard syntax as it is more precise and avoids ambiguity.
Interval Types
Interval types are used to represent differences between date time values. The difference between two date time values
can be measured in seconds or in months. For measurements in months, the units YEAR and MONTH are available,
while for measurements in seconds, the units DAY, HOUR, MINUTE, SECOND are available. The units can be used
individually, or as a range. An interval type can specify the precision of the most significant field and the second fraction
digits of the SECOND field (if it has a SECOND field). The default precision is 2. The default second precision is 0.
<interval type> ::= INTERVAL <interval qualifier>
<interval qualifier> ::= <start field> TO <end field> | <single datetime field>
<start field> ::= <non-second primary datetime field> [ <left paren> <interval
leading field precision> <right paren> ]
<end field> ::= <non-second primary datetime field> | SECOND [ <left paren>
<interval fractional seconds precision> <right paren> ]
<single datetime field> ::= <non-second primary datetime field> [ <left paren>
<interval leading field precision> <right paren> ] | SECOND [ <left paren>
<interval leading field precision> [ <comma> <interval fractional seconds
precision> ] <right paren> ]
Examples of interval type definitions are given below:
YEAR TO MONTH
YEAR(3)
DAY(4) TO HOUR
MINUTE(4) TO SECOND(6)
SECOND(4,6)
The word INTERVAL indicates the general type name. The rest of the definition is called an <interval
qualifier>. This designation is important, as in most expressions <interval qualifier> is used without
the word INTERVAL.
Interval Values
An interval value can be negative, positive or zero. An interval type has all the datetime fields in the specified range.
These fields are similar to those in the TIMESTAMP type. The differences are as follows:
The first field of an interval value can hold any numeric value up to the specified precision. For example, the hour
field in HOUR(2) TO SECOND can hold values above 23 (up to 99). The year and month fields can hold zero (unlike
a TIMESTAMP value), and the maximum value of a month field that is not the most significant field is 11.
The standard function ABS(<interval value expression>) can be used to convert a negative interval value
to a positive one.
The literal representation of interval values consists of the type definition, with a string representing the interval value
inserted after the word INTERVAL. Some examples of interval literals are given below (the first two represent the same length of time):
INTERVAL '145 23:12:19' DAY TO SECOND
INTERVAL '3503:12:19' HOUR(4) TO SECOND
INTERVAL '19' SECOND
INTERVAL '-23-10' YEAR TO MONTH /* a negative interval value */
Interval values of the types that are based on seconds can be cast into one another. Similarly those that are based on
months can be cast into one another. It is not possible to cast or convert a value based on seconds to one based on
months, or vice versa.
When a cast is performed to a type whose least-significant field is smaller (finer) than that of the source value, nothing
is lost from the interval value; the new fields are set to zero. Otherwise, the values for the extra least-significant fields of the source are discarded. Examples:
CAST ( INTERVAL '145 23:12:19' DAY TO SECOND AS INTERVAL DAY TO HOUR ) = INTERVAL '145 23' DAY
TO HOUR
CAST(INTERVAL '145 23' DAY TO HOUR AS INTERVAL DAY TO SECOND) = INTERVAL '145 23:00:00' DAY TO
SECOND
A numeric value can be cast to an interval type. In this case the numeric value is first converted to a single-field
INTERVAL type with the same field as the least-significant field of the target interval type. This value is then converted
to the target interval type. For example, CAST( 22 AS INTERVAL YEAR TO MONTH) evaluates to INTERVAL '22'
MONTH, and then to INTERVAL '1-10' YEAR TO MONTH. Note that the SQL Standard only supports casts to single-field
INTERVAL types, while HyperSQL allows casting to multi-field types as well.
An interval value can be cast to a numeric type. In this case the interval value is first converted to a single-field
INTERVAL type with the same field as the least-significant field of the interval value. The value is then converted
to the target type. For example, CAST (INTERVAL '1-11' YEAR TO MONTH AS INT) evaluates to INTERVAL
'23' MONTH, and then 23.
An interval value can be cast into a character type, which results in an INTERVAL literal. A character value can be
cast into an INTERVAL type so long as it is a string with a format compatible with an INTERVAL literal.
Two interval values can be added or subtracted so long as the types of both are based on the same field, i.e., both are
based on MONTH or SECOND. The values are both converted to a single-field interval type with the same field as the
least-significant field between the two types. After addition or subtraction, the result is converted to an interval type
that contains all the fields of the two original types.
An interval value can be multiplied or divided by a numeric value. Again, the value is converted to a numeric, which
is then multiplied or divided, before converting back to the original interval type.
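A few illustrative expressions following these rules:
INTERVAL '1' YEAR + INTERVAL '1' MONTH /* evaluates to INTERVAL '1-1' YEAR TO MONTH */
INTERVAL '10' DAY * 3 /* evaluates to INTERVAL '30' DAY */
INTERVAL '12' HOUR / 2 /* evaluates to INTERVAL '6' HOUR */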
An interval value is negated by simply prefixing with the minus sign.
Interval values used in expressions are either typed values, including interval literals, or are interval casts. The
expression: <expression> <interval qualifier> is a cast of the result of the <expression> into the
INTERVAL type specified by the <interval qualifier>. The cast can be formed by adding the
keywords and parentheses as follows: CAST ( <expression> AS INTERVAL <interval
qualifier> ).
The examples below feature different forms of expression that represent an
interval value, which is then added to the given date literal.
DATE '2000-01-01' + INTERVAL '1-2' YEAR TO MONTH /* an interval literal */
DATE '2000-01-01' + INTERVAL '14' MONTH /* equal to the previous value, as 14 months is 1 year and 2 months */
DATE '2000-01-01' + 14 MONTH /* the numeric literal is cast into INTERVAL MONTH */
DATE '2000-01-01' + (10 + 4) MONTH /* the expression in parentheses is cast into INTERVAL MONTH */
DATE '2000-01-01' + CAST (14 AS INTERVAL MONTH) /* the same cast in explicit form */
The difference between two datetime values can be expressed as an interval: the two datetime expressions are enclosed in parentheses, followed by the <interval qualifier> fields. In
the first example below, COL1 and COL2 are of the same datetime type, and the result is evaluated in INTERVAL
YEAR TO MONTH type.
(COL1 - COL2) YEAR TO MONTH /* the difference between two DATE or two TIMESTAMP values in years
and months */
(CURRENT_DATE - COL3) DAY /* the number of days between the value of COL3 and the current date
*/
(CURRENT_DATE - DATE '2000-01-01') YEAR TO MONTH /* the number of years and months since the
beginning of this century */
CURRENT_DATE - 2 DAY /* the date of the day before yesterday */
(CURRENT_TIMESTAMP - TIMESTAMP '2009-01-01 00:00:00') DAY(4) TO SECOND(2) /* days to seconds
since the given date */
The individual fields of both datetime and interval values can be extracted using the EXTRACT function. The same
function can also be used to extract the time zone displacement fields of a datetime value.
EXTRACT ({YEAR | MONTH | DAY | HOUR | MINUTE | SECOND | TIMEZONE_HOUR |
TIMEZONE_MINUTE | DAY_OF_WEEK | WEEK_OF_YEAR } FROM {<datetime value> | <interval
value>})
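For example:
EXTRACT (MONTH FROM DATE '2008-08-08') /* 8 */
EXTRACT (HOUR FROM INTERVAL '145 23:12:19' DAY TO SECOND) /* 23 */
EXTRACT (TIMEZONE_HOUR FROM TIME '12:00:00-8:00') /* -8 */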
The dichotomy between interval types based on seconds and those based on months stems from the fact that
different calendar months have different numbers of days. For example, the expression "nine months and nine days
since an event" is not exact when the date of the event is unknown: it can represent a period of around 284 days, give
or take one. SQL interval values are independent of any start or end dates or times. However, when they are added to
or subtracted from certain date or timestamp values, the result may be invalid and cause an exception (e.g. adding one
month to January 30 results in February 30, which is invalid).
JDBC has an unfortunate limitation and does not include type codes for SQL INTERVAL types. Therefore, for
compatibility with database tools that are limited to the JDBC type codes, HyperSQL reports these types by default as
VARCHAR. You can use the URL property hsqldb.translate_dti_types=false to override the default
behaviour.
Arrays
Arrays are a powerful feature of SQL:2008 and can help solve many common problems. Arrays should not be used
as a substitute for tables.
HyperSQL supports arrays of values according to the SQL:2008 Standard.
Elements of the array are either NULL, or of the same data type. It is possible to define arrays of all supported types,
including the types covered in this chapter and user-defined types, except LOB types. An SQL array is one-dimensional
and is addressed from position 1. An empty array, which has no elements, can also be used.
Arrays can be stored in the database, as well as being used as temporary containers of values for simplifying SQL
statements. They facilitate data exchange between the SQL engine and the user's application.
The full range of supported syntax allows arrays to be created, used in SELECT or other statements, combined with
rows of tables, and used in routine calls.
Array Definition
The type of a table column, a routine parameter, a variable, or the return value of a function can be defined as an array.
<array type> ::= <data type> ARRAY [ <left bracket or trigraph> <maximum
cardinality> <right bracket or trigraph> ]
The word ARRAY is added to any valid type definition except BLOB and CLOB type definitions. If the optional
<maximum cardinality> is not used, the default value is 1024. The size of the array cannot be extended beyond
maximum cardinality.
In the example below, the table contains a column of integer arrays and a column of varchar arrays. The VARCHAR
array has an explicit maximum size of 10, which means each array can have between 0 and 10 elements. The INTEGER
array has the default maximum size of 1024. The scores column has a default clause with an empty array. The default
clause can be defined only as DEFAULT NULL or DEFAULT ARRAY[] and does not allow arrays containing
elements.
CREATE TABLE t (id INT PRIMARY KEY, scores INT ARRAY DEFAULT ARRAY[], names VARCHAR(20)
ARRAY[10])
An array can be constructed by enumerating its elements, using the keyword ARRAY followed by a list of value
expressions in square brackets, or from the result of a single-column query expression. Examples of array constructors
are given below:
ARRAY [ 1, 2, 3 ]
ARRAY [ 'HOT', 'COLD' ]
ARRAY [ var1, var2, CURRENT_DATE ]
ARRAY (SELECT lastname FROM namestable ORDER BY id)
Inserting and updating a table with an ARRAY column can use array constructors, not only for updated column values,
but also in equality search conditions:
INSERT INTO t VALUES 10, ARRAY[1,2,3], ARRAY['HOT', 'COLD']
UPDATE t SET names = ARRAY['LARGE', 'SMALL'] WHERE id = 12
UPDATE t SET names = ARRAY['LARGE', 'SMALL'] WHERE id < 12 AND scores = ARRAY[3,4]
When using a PreparedStatement with an ARRAY parameter, an object of the type java.sql.Array must be used to set
the parameter. The org.hsqldb.jdbc.JDBCArrayBasic class can be used for constructing a java.sql.Array
object in the user's application. Code fragment below:
String sql = "UPDATE t SET names = ? WHERE id = ?";
PreparedStatement ps = connection.prepareStatement(sql);
Object[] data = new Object[]{"one", "two"};
// default types defined in org.hsqldb.types.Type can be used
org.hsqldb.types.Type type = org.hsqldb.types.Type.SQL_VARCHAR_DEFAULT;
JDBCArrayBasic array = new JDBCArrayBasic(data, type);
ps.setArray(1, array);
ps.setInt(2, 1000);
ps.executeUpdate();
Trigraph
A trigraph is a substitute for <left bracket> and <right bracket>.
<left bracket trigraph> ::= ??(
<right bracket trigraph> ::= ??)
Array Reference
The most common operations on an array are element reference and assignment, which are used when reading or
writing an element of the array. Unlike Java and many other languages, arrays are extended if an element is assigned
to an index beyond the current length. This can result in gaps containing NULL elements. Array length cannot exceed
the maximum cardinality.
Elements of all arrays, including those that are the result of function calls or other operations can be referenced for
reading.
<array element reference> ::= <array value expression> <left bracket> <numeric
value expression> <right bracket>
Elements of arrays that are table columns or routine variables can be referenced for writing. This is done in a SET
statement, either inside an UPDATE statement, or as a separate statement in the case of routine variables, OUT and
INOUT parameters.
<target array element specification> ::= <target array reference> <left bracket
or trigraph> <simple value specification> <right bracket or trigraph>
<target array reference> ::= <SQL parameter reference> | <column reference>
Note that only simple values or variables are allowed for the array index when an assignment is performed. The
examples below demonstrate how elements of the array are referenced in SELECT and UPDATE statements.
SELECT scores[ranking], names[ranking] FROM t JOIN t1 on (t.id = t1.tid)
UPDATE t SET scores[2] = 123, names[2] = 'Reds' WHERE id = 10
Array Operations
Several SQL operations and functions can be used with arrays.
CONCATENATION
Array concatenation is performed similarly to string concatenation. All elements of the array on the right are appended
to the array on the left.
<array concatenation> ::= <array value expression 1> <concatenation operator>
<array value expression 2>
<concatenation operator> ::= ||
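For example:
ARRAY[1, 2] || ARRAY[3, 4] /* evaluates to ARRAY[1,2,3,4] */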
FUNCTIONS
Seven functions operate on arrays. Details are described in the Built In Functions chapter.
ARRAY_AGG is an aggregate function and produces an array containing values from different rows of a SELECT
statement. Details are described in the Data Access and Change chapter.
SEQUENCE_ARRAY creates an array with sequential elements.
CARDINALITY <left paren> <array value expression> <right paren>
MAX_CARDINALITY <left paren> <array value expression> <right paren>
Array cardinality and max cardinality are functions that return an integer. CARDINALITY returns the element count,
while MAX_CARDINALITY returns the maximum declared cardinality of an array.
POSITION_ARRAY <left paren> <value expression> IN <array value expression> [FROM
<numeric value expression>] <right paren>
The POSITION_ARRAY function returns the position of the first match for the <value expression>, searching from
the start of the array or from the given start position when <numeric value expression> is used.
TRIM_ARRAY <left paren> <array value expression> <comma> <numeric value
expression> <right paren>
The TRIM_ARRAY function returns a copy of an array with the specified number of elements removed from the end
of the array. The <array value expression> can be any expression that evaluates to an array.
SORT_ARRAY <left paren> <array value expression> [ { ASC | DESC } ] [ NULLS
{ FIRST | LAST } ] <right paren>
The SORT_ARRAY function returns a sorted copy of an array. By default, elements are sorted in ascending order and
NULL elements appear at the beginning of the new array. You can change the sort direction or the position of NULL
elements with the optional keywords.
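Illustrative calls to these functions (the return values are shown as comments):
CARDINALITY(ARRAY[5, 7, 9]) /* 3 */
POSITION_ARRAY(7 IN ARRAY[5, 7, 9]) /* 2 */
TRIM_ARRAY(ARRAY[5, 7, 9], 1) /* ARRAY[5,7] */
SORT_ARRAY(ARRAY[9, 5, 7] DESC) /* ARRAY[9,7,5] */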
CAST
An array can be cast into an array of a different type. Each element of the array is cast into the element type of the
target array type.
UNNEST
Arrays can be converted into table references with the UNNEST keyword.
UNNEST(<array value expression>) [ WITH ORDINALITY ]
The <array value expression> can be any expression that evaluates to an array. A table is returned that
contains one column when WITH ORDINALITY is not used, or two columns when WITH ORDINALITY is used.
The first column contains the elements of the array (including all the nulls). When the table has two columns, the
second column contains the ordinal position of the element in the array. When UNNEST is used in the FROM clause
of a query, it implies the LATERAL keyword, which means the array that is converted to a table can belong to any table
that precedes the UNNEST in the FROM clause. This is explained in the Data Access and Change chapter.
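For example (the column aliases are illustrative):
SELECT elem, pos FROM UNNEST(ARRAY['HOT', 'COLD']) WITH ORDINALITY AS t(elem, pos)
/* returns the rows ('HOT', 1) and ('COLD', 2) */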
INLINE CONSTRUCTOR
Array constructors can be used in SELECT and other statements. For example, an array constructor with a subquery
can return the values from several rows as one array.
The example below shows an ARRAY constructor with a correlated subquery to return the list of order values for each
customer. The CUSTOMER table that is included for tests in the DatabaseManager GUI app is the source of the data.
SELECT FIRSTNAME, LASTNAME, ARRAY(SELECT INVOICE.TOTAL FROM INVOICE WHERE CUSTOMERID =
CUSTOMER.ID) AS ORDERS FROM CUSTOMER
FIRSTNAME  LASTNAME  ORDERS
---------  --------  ----------------------
Laura      Steel     ARRAY[2700.90,4235.70]
Robert     King      ARRAY[4761.60]
Robert     Sommer    ARRAY[]
Michael    Smith     ARRAY[3420.30]
COMPARISON
Arrays can be compared for equality, but they cannot be compared for ordering values or range comparison. Array
expressions are therefore not allowed in an ORDER BY clause, or in a comparison expression such as GREATER
THAN. It is possible to define a UNIQUE constraint on a column of ARRAY type. Two arrays are equal if they have
the same length and the values at each index position are either equal or both NULL.
USER DEFINED FUNCTIONS and PROCEDURES
Array parameters, variables and return values can be specified in user defined functions and procedures, including
aggregate functions. An aggregate function can return an array that contains all the scalar values that have been
aggregated. These capabilities allow a wider range of applications to be covered by user defined functions and easier
data exchange between the engine and the user's application.
t2.c2, this is reduced to 10,000 row checks and index lookups. With the additional index on t2.c2, only about 4 rows
are checked to get the first result row.
Note that in HSQLDB an index on multiple columns can be used internally as a non-unique index on the first column
in the list. For example: CONSTRAINT name1 UNIQUE (c1, c2, c3); means there is the equivalent of
CREATE INDEX name2 ON atable(c1);. So you do not need to specify an extra index if you require one
on the first column of the list.
In HyperSQL 2, a multi-column index will speed up queries that contain joins or values on the first n columns of the
index. You need NOT declare additional individual indexes on those columns unless you use queries that search only
on a subset of the columns. For example, rows of a table that has a PRIMARY KEY or UNIQUE constraint on three
columns or simply an ordinary index on those columns can be found efficiently when values for all three columns, or
the first two columns, or the first column, are specified in the WHERE clause. For example, SELECT ... FROM
t1 WHERE t1.c1 = 4 AND t1.c2 = 6 AND t1.c3 = 8 will use an index on t1(c1,c2,c3) if it exists.
A multi-column index will not speed up queries on the second or third column only. The first column must be specified
in the JOIN .. ON or WHERE conditions.
Sometimes query speed depends on the order of the tables in the JOIN .. ON or FROM clauses. For example the second
query below should be faster with large tables (provided there is an index on TB.COL3). The reason is that TB.COL3
can be evaluated very quickly if it applies to the first table (and there is an index on TB.COL3):
-- TB is a very large table with only a few rows where TB.COL3 = 4
SELECT * FROM TA JOIN TB ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;
SELECT * FROM TB JOIN TA ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;
The general rule is to put first the table that has a narrowing condition on one of its columns. In certain cases, HyperSQL
2.2.x reorders the joined tables if it is obvious that this will introduce a narrowing condition.
HyperSQL features automatic, on-the-fly indexes for views and subselects that are used in a query.
Indexes are used when a LIKE condition searches from the start of the string.
Indexes are used for ORDER BY clauses if the same index is used for selection and ordering of rows. It is possible
to force the use of index for ORDER BY.
first name and last name, and both columns are indexed, a query such as the following example will use the indexes
and complete in a short time:
-- TC is a very large table
SELECT * FROM TC WHERE TC.FIRSTNAME = 'John' OR TC.LASTNAME = 'Smith' OR TC.LASTNAME =
'Williams'
Each subquery is considered a separate SELECT statement and uses indexes when they are available.
In each SELECT statement, at least one index per table can be used if there is a query condition that can use the
index. When conditions on a table are combined with the OR operator, and each condition can use an index, multiple
indexes per table are used.
HyperSQL can use an index for simple queries containing DISTINCT or GROUP BY to avoid checking all the rows
of the table. Note that indexes are always used if the query has a condition, regardless of the use of DISTINCT or
GROUP BY. This particular optimisation applies to cases in which all the columns in the SELECT list are from the
same table and are covered by a single index, and any join or query condition uses this index.
For example, with the large table below, a DISTINCT or GROUP BY query to return all the last names can use the
index on the TC.LASTNAME column. Similarly, a GROUP BY query on two columns can use an index that covers
the two columns.
-- TC is a very large table
SELECT DISTINCT LASTNAME FROM TC WHERE TC.LASTNAME > 'F'
SELECT STATE, LASTNAME FROM TC GROUP BY STATE, LASTNAME
SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL3 > 40000 ORDER BY TA.COL3 LIMIT 1000;
SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL3 > 40000 AND TA.COL3 < 100000 ORDER
BY TA.COL3 DESC LIMIT 1000;
But if the query contains an equality condition on another indexed column in the table, this may take precedence and
no index may be used for ORDER BY. In this case, USING INDEX can be added to the end of the query to force the
use of the index for the LIMIT operation. In the example below there is an index on TA.COL1 as well as the index on
TA.COL3. Normally the index on TA.COL1 is used, but the USING INDEX hint results in the index on TB.COL3
being used for selecting the first 1000 rows.
-- TA is a very large table with an index on TA.COL3 and a separate index on TA.COL1
SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL1 = 'SENT' AND TB.COL3 > 40000 ORDER
BY TB.COL3 LIMIT 1000 USING INDEX;
Overview
All SQL statements are executed in sessions. When a connection is established to the database, a session is started.
The authorization of the session is the name of the user that started the session. A session has several properties. These
properties are set by default at the start according to database settings.
SQL Statements are generally transactional statements. When a transactional statement is executed, it starts a
transaction if no transaction is in progress. If SQL Data (data stored in tables) is modified during a transaction, the
change can be undone with a ROLLBACK statement. When a COMMIT or ROLLBACK statement is executed, the
transaction is ended. Each SQL statement works atomically: it either succeeds or fails without changing any data. If
a single statement fails, an error is raised but the transaction is not normally terminated. However, some failures are
caused by execution of statements that are in conflict with statements executed in other concurrent sessions. Such
failures result in an implicit ROLLBACK, in addition to the exception that is raised.
Schema definition and manipulation statements are also transactional according to the SQL Standard. HyperSQL 2.3
performs automatic commits before and after the execution of such statements. Therefore, schema-related statements
cannot be rolled back. This is likely to change in future versions.
Some statements are not transactional. Most of these statements are used to change the properties of the session. These
statements begin with the SET keyword.
If the AUTOCOMMIT property of a session is TRUE, then each transactional statement is followed by an implicit
COMMIT.
The default isolation level for a session is READ COMMITTED. This can be changed using the JDBC
java.sql.Connection object and its setTransactionIsolation(int level) method. The session
can be put in read-only mode using the setReadOnly(boolean readOnly) method. Both methods can be
invoked only after a commit or a rollback, but not during a transaction.
The isolation level and / or the readonly mode of a transaction can also be modified using an SQL statement. You can
use the statement to change only the isolation mode, only the read-only mode, or both at the same time. This statement
can be issued only before a transaction starts or after a commit or rollback.
SET TRANSACTION <transaction characteristic> [ <comma> <transaction
characteristic> ]
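For example, following the Standard syntax, the next transaction can be started in SERIALIZABLE isolation and read-only mode with:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY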
Session Attributes
The system attributes reflect the current mode of operation for the session. These attributes can be accessed with
function calls and can be referenced in queries. For example, they can be returned using the VALUES <attribute
function>, ... statement.
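For example, a VALUES statement can return several attributes at once:
VALUES (CURRENT_USER, CURRENT_SCHEMA)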
The named attributes such as CURRENT_USER, CURRENT_SCHEMA, etc. are SQL Standard functions. Other
attributes of the session, such as auto-commit or read-only modes can be read using other built-in functions. All these
functions are listed in the Built In Functions chapter.
Session Variables
Session variables are user-defined variables created the same way as the variables for stored procedures and functions.
Currently, these variables cannot be used in general SQL statements. They can be assigned to IN, INOUT and OUT
parameters of stored procedures. This allows calling stored procedures which have INOUT or OUT arguments and
is useful for development and debugging. See the example in the SQL-Invoked Routines chapter, under Formal
Parameters.
Session Tables
With the necessary access privileges, sessions can access all tables, including GLOBAL TEMPORARY tables, that are
defined in schemas. Although GLOBAL TEMPORARY tables have a single name and definition which applies to all
sessions that use them, the contents of the tables are different for each session. The contents are cleared either at the
end of each transaction or when the session is closed.
Session tables are different because their definition is visible only within the session that defines a table. The definition
is dropped when the session is closed. Session tables do not belong to schemas.
<temporary table declaration> ::= DECLARE LOCAL TEMPORARY TABLE <table name>
<table element list> [ ON COMMIT { PRESERVE | DELETE } ROWS ]
The syntax for declaration is based on the SQL Standard. A session table cannot have FOREIGN KEY constraints,
but it can have PRIMARY KEY, UNIQUE or CHECK constraints. A session table definition cannot be modified by
adding or removing columns, indexes, etc.
It is possible to refer to a session table using its name, which takes precedence over a schema table of the same name.
To distinguish a session table from schema tables, the pseudo-schema names MODULE or SESSION can be used.
An example is given below:
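A minimal sketch of declaring and using a session table (the table and column names are illustrative):
DECLARE LOCAL TEMPORARY TABLE buffer (id INTEGER PRIMARY KEY, textdata VARCHAR(100)) ON COMMIT PRESERVE ROWS
INSERT INTO session.buffer SELECT id, firstname || ' ' || lastname FROM customers
-- the declared table takes precedence over any schema table of the same name
DROP TABLE session.buffer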
Session tables can be created inside a transaction. Automatic indexes are created and used on session tables when
necessary for a query or other statement. By default, session table data is held in memory. This can be changed with
the SET SESSION RESULT MEMORY ROWS statement.
MVCC
In the MVCC model, there are no shared, read locks. Exclusive locks are used on individual rows, but their use
is different. Transactions can read and modify the same table simultaneously, generally without waiting for other
transactions. The SQL Standard isolation levels are used by the user's application, but these isolation levels are
translated to the MVCC isolation levels READ CONSISTENCY or SNAPSHOT ISOLATION.
When transactions are running at READ COMMITTED level, no conflict will normally occur. If a transaction that
runs at this level wants to modify a row that has been modified by another uncommitted transaction, then the engine
puts the transaction in wait, until the other transaction has committed. The transaction then continues automatically.
This isolation level is called READ CONSISTENCY.
Deadlock is completely avoided by the engine. The database setting, SET DATABASE TRANSACTION
ROLLBACK ON CONFLICT, determines what happens in case of deadlock. In theory, conflict (deadlock) is possible
if each transaction is waiting for a different row modified by the other transaction. In this case, one of the transactions
is immediately terminated by rolling back all the previous statements in the transaction in order to allow the other
transaction to continue. If the setting has been changed to FALSE with the <set database transaction
rollback on conflict statement>, the session that avoided executing the deadlock-causing statement
returns an error, but without rolling back the previous statements in the current transaction. This session should perform
an alternative statement to continue and commit or roll back the transaction. Once the session has committed or rolled
back, the other session can continue. This allows maximum flexibility and compatibility with other database engines
which do not roll back the transaction upon deadlock.
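The two behaviours can be selected with the database setting named above, for example:

```sql
-- default: the losing transaction is rolled back automatically on deadlock
SET DATABASE TRANSACTION ROLLBACK ON CONFLICT TRUE

-- alternative: the losing session only receives an error; the application
-- must then retry the statement, or commit or roll back the transaction itself
SET DATABASE TRANSACTION ROLLBACK ON CONFLICT FALSE
```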
When transactions are running in REPEATABLE READ or SERIALIZABLE isolation levels, conflict is more likely
to happen. There is no difference in operation between these two isolation levels. This isolation level is called
SNAPSHOT ISOLATION.
In this mode, when the duration of two transactions overlaps, if one of the transactions has modified a row and the
second transaction wants to modify the same row, the action of the second transaction will fail. This happens even
if the first transaction has already committed. The engine will invalidate the second transaction and roll back all its
changes. If the setting is changed to false with the <set database transaction rollback on conflict
statement>, then the second transaction will just return an error without rolling back. The application must perform
an alternative statement to continue or roll back the transaction.
In the MVCC model, READ UNCOMMITTED is promoted to READ COMMITTED, as the new architecture is based
on multi-version rows for uncommitted data and more than one version may exist for some rows.
With MVCC, when a transaction only reads data, then it will go ahead and complete regardless of what other
transactions may do. This does not depend on the transaction being read-only or the isolation modes.
In the MVCC model, HyperSQL treats a REPEATABLE READ or SERIALIZABLE setting for a transaction as SNAPSHOT
ISOLATION.
All modes can be used with as many simultaneous connections as required. The default 2PL model is fine for
applications with a single connection, or applications that do not access the same tables heavily for writes. With
multiple simultaneous connections, MVCC can be used for most applications. Both READ CONSISTENCY and
SNAPSHOT ISOLATION levels are stronger than the corresponding READ COMMITTED level in the 2PL mode.
Some applications require SERIALIZABLE transactions for at least some of their operations. For these applications,
one of the 2PL modes can be used. It is possible to switch the concurrency model while the database is operational.
Therefore, the model can be changed for the duration of some special operations, such as synchronization with another
data source or performing bulk changes to table contents.
All concurrency models are very fast in operation. When data change operations are mainly on the same tables, the
MVCC model may be faster, especially with multi-core processors.
Viewing Sessions
As HyperSQL is multithreaded, you can view the current sessions and their state from any admin session. The
INFORMATION_SCHEMA.SYSTEM_SESSIONS table contains the list of open sessions, their unique ids and the
statement currently executed or waiting to be executed by each session. For each session, it displays the list of sessions
that are waiting for it to commit, or the session that this session is waiting for.
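From an admin session, the view can be queried like any other table, for example:

```sql
-- run from an admin session: one row per open session, including the
-- statement each session is currently executing or waiting to execute
SELECT * FROM INFORMATION_SCHEMA.SYSTEM_SESSIONS
```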
START TRANSACTION
start transaction statement
<start transaction statement> ::= START TRANSACTION <transaction characteristics>
Start an SQL transaction and set its characteristics. All transactional SQL statements start a transaction automatically,
therefore using this statement is not necessary. If the statement is called in the middle of a transaction, an exception
is thrown.
SET TRANSACTION
set next transaction characteristics
<set transaction statement> ::= SET [ LOCAL ] TRANSACTION <transaction
characteristics>
Set the characteristics of the next transaction in the current session. This statement has an effect only on the next
transaction and has no effect on subsequent transactions.
transaction characteristics
<transaction characteristics> ::= [ <transaction mode> [ { <comma> <transaction
mode> }... ] ]
<transaction mode> ::= <isolation level> | <transaction access mode> |
<diagnostics size>
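For example, an isolation level and an access mode can be combined to set the characteristics of the next transaction:

```sql
-- applies to the next transaction only
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY
```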
SET CONSTRAINTS
set constraints mode statement
<set constraints mode statement> ::= SET CONSTRAINTS <constraint name list>
{ DEFERRED | IMMEDIATE }
<constraint name list> ::= ALL | <constraint name> [ { <comma> <constraint
name> }... ]
If the statement is issued during a transaction, it applies to the rest of the current transaction. If the statement is issued
when a transaction is not active then it applies only to the next transaction in the current session. HyperSQL does not
yet support this feature.
LOCK TABLE
lock table statement
<lock table statement> ::= LOCK TABLE <table name> { READ | WRITE } [ , <table
name> { READ | WRITE } ... ]
In some circumstances, where multiple simultaneous transactions are in progress, it may be necessary to ensure a
transaction consisting of several statements is completed, without being terminated due to possible deadlock. When
this statement is executed, it waits until it can obtain all the listed locks, then returns. If obtaining the locks would
result in a deadlock, an error is raised. The SQL statements following this statement use the locks already obtained
(and obtain new locks if necessary) and can proceed without waiting. All the locks are released when a COMMIT or
ROLLBACK statement is issued.
When the isolation level of a session is READ COMMITTED, read locks are released immediately after the execution
of the statement, therefore you should use only WRITE locks in this mode. Alternatively, you can switch to the
SERIALIZABLE isolation mode before locking the tables for the specific transaction that needs to finish consistently
and without a deadlock. It is best to execute this statement at the beginning of the transaction with the complete list
of required read and write locks.
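A sketch of typical use (the table names invoice and customer are hypothetical):

```sql
-- obtain all the needed locks at the start of the transaction
LOCK TABLE invoice WRITE, customer READ
-- ... the statements of the transaction proceed without waiting ...
COMMIT  -- releases the locks
```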
Currently, this command does not have any effect when the database transaction control model is MVCC.
SAVEPOINT
savepoint statement
<savepoint statement> ::= SAVEPOINT <savepoint specifier>
<savepoint specifier> ::= <savepoint name>
Establish a savepoint. This command is used during an SQL transaction. It establishes a milestone for the current
transaction. The SAVEPOINT can be used at a later point in the transaction to rollback the transaction to the milestone.
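A sketch of typical use (table and savepoint names hypothetical):

```sql
INSERT INTO accounts VALUES (1, 100)
SAVEPOINT sp1
INSERT INTO accounts VALUES (2, 200)
-- undo only the work done after the savepoint, keeping the transaction open
ROLLBACK TO SAVEPOINT sp1
COMMIT
```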
RELEASE SAVEPOINT
release savepoint statement
<release savepoint statement> ::= RELEASE SAVEPOINT <savepoint specifier>
Destroy a savepoint. This command is rarely used as it is not very useful. It removes a SAVEPOINT that has already
been defined.
COMMIT
commit statement
<commit statement> ::= COMMIT [ WORK ] [ AND [ NO ] CHAIN ]
Terminate the current SQL transaction with commit. This makes all the changes to the database permanent.
ROLLBACK
rollback statement
<rollback statement> ::= ROLLBACK [ WORK ] [ AND [ NO ] CHAIN ]
Rollback the current SQL transaction and terminate it. The statement rolls back all the actions performed during the
transaction. If AND CHAIN is specified, a new SQL transaction is started just after the rollback. The new transaction
inherits the properties of the old transaction.
ROLLBACK TO SAVEPOINT
rollback statement
<rollback statement> ::= ROLLBACK [ WORK ] TO SAVEPOINT <savepoint specifier>
Rollback the current SQL transaction to the specified SAVEPOINT. The actions performed after the savepoint was
established are rolled back, while the transaction and the savepoint itself remain active.
DISCONNECT
disconnect statement
<disconnect statement> ::= DISCONNECT
Terminate the current SQL session. Closing a JDBC connection has the same effect as this command.
SET SESSION CHARACTERISTICS
set session characteristics statement
<set session characteristics statement> ::= SET SESSION CHARACTERISTICS AS
<session characteristic list>
<session characteristic list> ::= <session characteristic> [ { <comma> <session
characteristic> }... ]
<session characteristic> ::= <session transaction characteristics>
<session transaction characteristics> ::= TRANSACTION <transaction mode>
[ { <comma> <transaction mode> }... ]
Set one or more characteristics for the current SQL-session. This command is used to set the transaction mode for the
session. The setting applies to all transactions until the session is closed or the command is used again. The current
read-only mode can be accessed with the ISREADONLY() function.
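For example:

```sql
-- all subsequent transactions in this session are read-only
SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
-- check the current read-only mode
CALL ISREADONLY()
```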
SET SESSION AUTHORIZATION
set session user identifier statement
<set session user identifier statement> ::= SET SESSION AUTHORIZATION <value
specification>
Set the user identifier of the current session. The subsequent statements of the session
are executed with the privileges of the new user. The current authorisation can be accessed with the CURRENT_USER
and SESSION_USER functions.
SET ROLE
set role statement
<set role statement> ::= SET ROLE <role specification>
<role specification> ::= <value specification> | NONE
Set the SQL-session role name and the current role name for the current SQL-session context. The user that executes
this command must have the specified role. If NONE is specified, then the previous CURRENT_ROLE is eliminated.
The effect of this lasts for the lifetime of the session. The current role can be accessed with the CURRENT_ROLE
function.
SET TIME ZONE
set local time zone statement
<set local time zone statement> ::= SET TIME ZONE <set time zone value>
<set time zone value> ::= <interval value expression> | LOCAL
Set the current default time zone displacement for the current SQL-session. When the session starts, the time zone
displacement is set to the time zone of the client. This command changes the time zone displacement. The effect of
this lasts for the lifetime of the session. If LOCAL is specified, the time zone displacement reverts to the local time
zone of the session.
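For example (the exact interval value is illustrative):

```sql
-- set the session time zone displacement to UTC+5
SET TIME ZONE INTERVAL '5:00' HOUR TO MINUTE
-- revert to the local time zone of the session
SET TIME ZONE LOCAL
```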
SET CATALOG
set catalog statement
<set catalog statement> ::= SET <catalog name characteristic>
<catalog name characteristic> ::= CATALOG <value specification>
Set the default catalog name for unqualified schema names used in SQL statements that are prepared or executed
directly in the current session. As there is only one catalog in the database, only the name of this catalog can be used. The current
catalog can be accessed with the CURRENT_CATALOG function.
SET SCHEMA
set schema statement
<set schema statement> ::= SET <schema name characteristic>
<schema name characteristic> ::= SCHEMA <value specification>
Set the default schema name for unqualified names used in SQL statements that are prepared or executed directly in
the current session. The current schema can be accessed with the CURRENT_SCHEMA function.
Overview
The persistent elements of an SQL environment are database objects. The database consists of catalogs plus
authorizations.
A catalog contains schemas, and schemas contain the objects that contain data or govern the data.
Each catalog contains a special schema called INFORMATION_SCHEMA. This schema is read-only and contains
some views and other schema objects. The views contain lists of all the database objects that exist within the catalog,
plus all authorizations.
Each database object has a name. A name is an identifier and is unique within its name-space.
A new HyperSQL catalog contains an empty schema called PUBLIC. By default, this schema is the initial schema
when a new session is started. New schemas and schema objects can be defined and used in the PUBLIC schema, as
well as any new schema that is created by the user. You can rename the PUBLIC schema.
HyperSQL allows all schemas to be dropped, except the schema that is the default initial schema for new sessions
(by default, the PUBLIC schema). For this schema, a DROP SCHEMA ... CASCADE statement will succeed but will
result in an empty schema, rather than no schema.
The statements for setting the initial schema for users are described in the Statements for Authorization and Access
Control chapter.
Character Sets
A CHARACTER SET is the whole or a subset of the UNICODE character set.
A character set name can only be a <regular identifier>. There is a separate name-space for character sets.
There are several predefined character sets. These character sets belong to INFORMATION_SCHEMA. However,
when they are referenced in a statement, no schema prefix is necessary.
The following character sets, together with some others, have been specified by the SQL Standard:
SQL_TEXT, SQL_IDENTIFIER, SQL_CHARACTER
The SQL_CHARACTER consists of ASCII letters, digits and the symbols used in the SQL language. SQL_TEXT and
SQL_IDENTIFIER are implementation defined. HyperSQL defines SQL_TEXT as the UNICODE character set and
SQL_IDENTIFIER as the UNICODE character set minus the SQL language special characters.
SQL_TEXT consists of the full set of Unicode characters. These characters can be used in strings and clobs stored in the
database. The character repertoire of HyperSQL is the UTF16 character set, which covers all possible character sets.
If a predefined character set is specified for a table column, then any string stored in the column must contain only
characters from the specified character set. HyperSQL does not enforce the CHARACTER SET that is specified for
a column and may accept any character string supported by SQL_TEXT.
Collations
A COLLATION is the method used for ordering character strings in ordered sets and to determine equivalence of
two character strings.
The system collation is called SQL_TEXT. This collation sorts according to the Unicode code of the characters,
UNICODE_SIMPLE. The system collation is always used for INFORMATION_SCHEMA tables.
The default database collation is the same as the system collation. You can change this default, either to a language
collation, or to the SQL_TEXT_UCC collation, which is a case-insensitive form of the UNICODE_SIMPLE collation.
Collations for a large number of languages are supported by HyperSQL. These collations belong to
INFORMATION_SCHEMA. However, when they are referenced in a statement, there is no need for a schema prefix.
A different collation than the default collation can be specified for each table column that is defined as CHAR or
VARCHAR.
A collation can also be used in an ORDER BY clause.
A collation can be used in the GROUP BY clause.
CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(20) COLLATE "English")
SELECT * FROM t ORDER BY name COLLATE "French"
SELECT COUNT(*), name FROM t GROUP BY name COLLATE "English 0"
In the examples above, the collation for the column is already specified when it is defined. In the first SELECT
statement, the column is sorted using the French collation. In the second SELECT, the "English 0" collation is
used in the GROUP BY clause. This collation is case insensitive, so the same name with different uses of upper and
lower case letters is considered the same and counted together.
The supported collations are named according to the language. You can see the list in the
INFORMATION_SCHEMA.COLLATIONS view. You can use just the name in double quotes for the default form
of the collation. If you add a strength between 0, 1, 2, 3, the case sensitivity and accent sensitivity changes. The value
0 indicates least sensitivity to differences. At this strength the collation is case-insensitive and ignores differences
between accented letters. At strength 1, differences between accented letters are taken into account. At strength 2, both
case and accent are significant. Finally 3 indicates additional sensitivity to different punctuation. A second parameter
can also be used with values 0 or 1, to indicate how decomposition of accented characters for comparison is handled
for languages that support such characters. See the Java and ICU (International Components for Unicode) collation
documentation for more details on these values. For example, possible forms of the French collation are "French",
"French 0", "French 1", etc. and "French 2 1", etc. When the collation is specified without strength, the
default is strength 2, which is case and accent sensitive.
When a collation is not explicitly used in the CREATE TABLE statement for a column, then the database default
collation is used for this column. If you change the database default collation afterwards, the new collation will be used.
With the older versions of HyperSQL the special type VARCHAR_IGNORECASE was used as the column type for
case-insensitive comparison. Any column already defined as VARCHAR_IGNORECASE will be compared exactly
as before. In version 2.3.0 and later, this form is represented by the addition of UCC after the collation name, for
example "French UCC". You can still use the SET IGNORECASE TRUE statement in your session to force the UCC
to be applied to the collation for the VARCHAR columns of new tables. UCC stands for Upper Case Comparison.
Before comparing two strings, both are converted to uppercase using the current collation. This is exactly how
VARCHAR_IGNORECASE worked.
It is recommended to use the default SQL_TEXT collation for your general CHAR or VARCHAR columns. For
columns where a language collation is desirable, the choice should be made very carefully, because names that are
very similar but only differ in the accents may be considered equal in searches.
When comparing two strings, HyperSQL 2 pads the shorter string with spaces in order to compare two strings of
equal length. You can change the default database collation with one that does not pad the string with spaces before
comparison. This method of comparison was used in versions older than 2.
User defined collations can be created based on existing collations to control the space padding. These collations are
part of the current schema.
See the COLLATE keyword and SET DATABASE COLLATION statement in the System Management chapter.
The PAD SPACE or NO PAD clause is used to control padding.
Important
If you change the default collation of a database when there are tables containing data with CHAR or
VARCHAR columns that are part of an index, a primary key or a unique constraint, you must execute
SHUTDOWN COMPACT or SHUTDOWN SCRIPT after the change. If you do not do this, your queries
and other statements will show erratic behaviour and may result in unrecoverable errors.
Distinct Types
A distinct, user-defined TYPE is simply based on a built-in type. A distinct TYPE is used in table definitions and in
CAST statements.
Distinct types share a name-space with domains.
Domains
A DOMAIN is a user-defined type, simply based on a built-in type. A DOMAIN can have constraints that limit the
values that the DOMAIN can represent. A DOMAIN can be used in table definitions and in CAST statements.
Distinct types share a name-space with domains.
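A minimal sketch of a DOMAIN with a constraint (the names price and items are hypothetical):

```sql
-- a domain based on a built-in type, restricted by a CHECK constraint
CREATE DOMAIN price AS DECIMAL(10,2) CHECK (VALUE >= 0)

-- the domain can then be used as a column type
CREATE TABLE items (id INTEGER PRIMARY KEY, cost price)
```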
Number Sequences
A SEQUENCE object produces INTEGER values in sequence. The SEQUENCE can be referenced in special contexts
only within certain SQL statements. For each row where the object is referenced, its value is incremented.
There is a separate name-space for SEQUENCE objects.
IDENTITY columns are columns of tables which have an internal, unnamed SEQUENCE object. HyperSQL also
supports IDENTITY columns that use a named SEQUENCE object.
SEQUENCE objects and IDENTITY columns are supported fully according to the latest SQL 2008 Standard syntax.
Sequences
The SQL:2008 syntax and usage is different from what is supported by many existing database engines. Sequences
are created with the CREATE SEQUENCE command and their current value can be modified at any time with ALTER
SEQUENCE. The next value for a sequence is retrieved with the NEXT VALUE FOR <name> expression. This
expression can be used for inserting and updating table rows.
Example 4.1. inserting the next sequence value into a table row
INSERT INTO mytable VALUES 2, 'John', NEXT VALUE FOR mysequence
You can also use it in select statements. For example, if you want to number the returned rows of a SELECT in
sequential order, you can use:
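A sketch of such a query, reusing the mysequence and mytable names from the earlier example (the column names are hypothetical):

```sql
-- the sequence expression numbers the returned rows in order
SELECT NEXT VALUE FOR mysequence, firstname, lastname FROM mytable
```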
In version 2.0, the semantics of sequences are exactly as defined by SQL:2008. If you use the same sequence twice in
the same row in an INSERT statement, you will get the same value as required by the Standard.
The correct way to use a sequence value is the NEXT VALUE FOR expression.
HyperSQL adds an extension to Standard SQL to return the last value returned by the NEXT VALUE FOR expression
in the current session. After a statement containing NEXT VALUE FOR is executed, the value that was returned for
NEXT VALUE FOR is available using the CURRENT VALUE FOR expression. In the example below, the NEXT
VALUE FOR expression is used to insert a new row. The value that was returned by NEXT VALUE FOR is retrieved
with the CURRENT VALUE FOR in the next insert statements to populate two new rows in a different table that has
a parent child relationship with the first table. For example if the value 15 was returned by the sequence, the same
value 15 is inserted in the three rows.
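A sketch of this parent-child pattern (the table names are hypothetical; mysequence is from the earlier example):

```sql
-- the sequence value is generated once for the parent row
INSERT INTO parent (id, name) VALUES (NEXT VALUE FOR mysequence, 'Demo')
-- the same value is then reused for the two child rows
INSERT INTO child (parent_id, data) VALUES (CURRENT VALUE FOR mysequence, 'a')
INSERT INTO child (parent_id, data) VALUES (CURRENT VALUE FOR mysequence, 'b')
```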
The INFORMATION_SCHEMA.SEQUENCES table contains the next value that will be returned from any of the
defined sequences. The SEQUENCE_NAME column contains the name and the NEXT_VALUE column contains the
next value to be returned. Note that this is only for getting information and you should not use it for accessing the next
sequence value. When multiple sessions access the same sequence, the value returned from this table by one session
could also be used by a different session, causing a sequence value to be used twice unintentionally.
Identity Auto-Increment Columns
Each table can contain a single auto-increment column, known as the IDENTITY column. An IDENTITY column is a
SMALLINT, INTEGER, BIGINT, DECIMAL or NUMERIC column with its value generated by a sequence generator.
In HyperSQL 2.0, an IDENTITY column is not by default treated as the primary key for the table (as a result, multi-column
primary keys are possible with an IDENTITY column present). Use the SQL standard syntax for declaration
of the IDENTITY column.
The SQL standard syntax is used, which allows the initial value and other options to be specified.
<colname> [ INTEGER | BIGINT | DECIMAL | NUMERIC ] GENERATED { BY DEFAULT |
ALWAYS} AS IDENTITY [( <options> )]
/* this table has no primary key */
CREATE TABLE vals (id INTEGER GENERATED BY DEFAULT AS IDENTITY, data VARBINARY(2000))
/* in this table id becomes primary key because the old syntax is used - avoid this syntax */
CREATE TABLE vals (id INTEGER IDENTITY, data VARBINARY(2000))
/* use the standard syntax and explicitly declare a primary key identity column */
CREATE TABLE vals (id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, data
VARBINARY(2000))
When you add a new row to such a table using an INSERT INTO <tablename> ... statement, you can use
the DEFAULT keyword for the IDENTITY column, which results in an auto-generated value for the column.
The IDENTITY() function returns the last value inserted into any IDENTITY column by this session. Each session
manages this function call separately and is not affected by inserts in other sessions. Use CALL IDENTITY() as
an SQL statement to retrieve this value. If you want to use the value for a field in a child table, you can use INSERT
INTO <childtable> VALUES (...,IDENTITY(),...);. Both types of call to IDENTITY() must be
made before any additional update or insert statements are issued by the session.
In triggers and routines, the value returned by the IDENTITY() function is correct for the given context. For example,
if a call to a stored procedure inserts a row into a table, causing a new identity value to be generated, a call to
IDENTITY() inside the procedure will return the new identity, but a call outside the procedure will return the last
identity value that was generated before a call was made to the procedure.
The last inserted IDENTITY value can also be retrieved via JDBC, by specifying the Statement or PreparedStatement
object to return the generated value.
The next IDENTITY value to be used can be changed with the following statement. Note that this statement is not
used in normal operation and is only for special purposes, for example resetting the identity generator:
ALTER TABLE ALTER COLUMN <column name> RESTART WITH <new value>;
For backward compatibility, support has been retained for CREATE TABLE <tablename>(<colname>
IDENTITY, ...) as a shortcut which defines the column both as an IDENTITY column and a PRIMARY KEY
column. Also, for backward compatibility, it is possible to use NULL as the value of an IDENTITY column in an
INSERT statement and the value will be generated automatically. You should avoid these compatibility features as
they may be removed from future versions of HyperSQL.
In the following example, the identity value for the first INSERT statement is generated automatically using the
DEFAULT keyword. The second INSERT statement uses a call to the IDENTITY() function to populate a row in the
child table with the generated identity value.
CREATE TABLE star (id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
firstname VARCHAR(20),
lastname VARCHAR(20))
CREATE TABLE movies (starid INTEGER, movieid INTEGER PRIMARY KEY, title VARCHAR(40))
INSERT INTO star (id, firstname, lastname) VALUES (DEFAULT, 'Felix', 'the Cat')
INSERT INTO movies (starid, movieid, title) VALUES (IDENTITY(), 10, 'Felix in Hollywood')
HyperSQL 2.1 also supports IDENTITY columns that use an external, named SEQUENCE object. This feature is not
part of the SQL Standard. The example below uses this type of IDENTITY. Note the use of CURRENT VALUE FOR
seq here is multi-session safe. The returned value is the last value used by this session when the row was inserted
into the star table. This value is available until the transaction is committed. After commit, NULL is returned by the
CURRENT VALUE FOR expression until the SEQUENCE is used again.
CREATE SEQUENCE seq
CREATE TABLE star (id INTEGER GENERATED BY DEFAULT AS SEQUENCE seq PRIMARY KEY,
firstname VARCHAR(20),
lastname VARCHAR(20))
CREATE TABLE movies (starid INTEGER, movieid INTEGER PRIMARY KEY, title VARCHAR(40))
INSERT INTO star (id, firstname, lastname) VALUES (DEFAULT, 'Felix', 'the Cat')
INSERT INTO movies (starid, movieid, title) VALUES (CURRENT VALUE FOR seq, 10, 'Felix in
Hollywood')
Tables
In the SQL environment, tables are the most essential components, as they hold all persistent data.
If a TABLE is considered as metadata (i.e., without its actual data), it is called a relation in relational theory. It has one or
more columns, with each column having a distinct name and a data type. A table usually has one or more constraints
which limit the values that can potentially be stored in the TABLE. These constraints are discussed in the next section.
A single column of the table can be defined as IDENTITY. The values stored in this column are auto-generated and
are based on an (unnamed) identity sequence, or optionally, a named SEQUENCE object.
Views
A VIEW is similar to a TABLE but it does not permanently contain rows of data. A view is defined as a QUERY
EXPRESSION, which is often a SELECT statement that references views and tables, but it can also consist of a
TABLE CONSTRUCTOR that does not reference any tables or views.
A view has many uses:
Hide the structure and column names of tables. The view can represent one or more tables or views as a separate
table. This can include aggregate data, such as sums and averages, from other tables.
Allow access to specific rows in a table. For example, allow access to records that were added since a given date,
while hiding older records.
Allow access to specific columns. For example allow access to columns that contain non-confidential information.
Note that this can also be achieved with the GRANT SELECT statement, using column-level privileges
A VIEW that returns the columns of a single ordinary TABLE is updatable if the query expression of the view is an
updatable query expression as discussed in the Data Access and Change chapter. Some updatable views are insertable-into
because the query expression is insertable-into. In these views, each column of the query expression must be a
column of the underlying table and those columns of the underlying table that are not in the view must have a default
clause, or be an IDENTITY or GENERATED column. When rows of an updatable view are updated, or new rows are
inserted, or rows are deleted, these changes are reflected in the base table. A VIEW definition may specify that the
inserted or updated rows conform to the search condition of the view. This is done with the CHECK OPTION clause.
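A sketch of an updatable view with this clause (the names are hypothetical):

```sql
-- an updatable view restricted to recent rows; WITH CHECK OPTION rejects
-- inserted or updated rows that do not satisfy the search condition
CREATE VIEW recent_orders AS
    SELECT id, customer_id, order_date FROM orders
    WHERE order_date > CURRENT_DATE - 7 DAY
    WITH CHECK OPTION
```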
A view that is not updatable according to the above paragraph can be made updatable or insertable-into by adding
INSTEAD OF triggers to the view. These triggers contain statements to use the submitted data to modify the contents
of the underlying tables of the view separately. For example, a view that represents a SELECT statements that joins
two tables can have an INSTEAD OF DELETE trigger with two DELETE statements, one for each table. Views that
have an INSTEAD OF trigger are called TRIGGER INSERTABLE, TRIGGER UPDATABLE, etc. according to the
triggers that have been defined.
Views share a name-space with tables.
Constraints
A CONSTRAINT is a child schema object and can belong to a DOMAIN or a TABLE. CONSTRAINT objects can be
defined without specifying a name. In this case the system generates a name for the new object beginning with "SYS_".
In a DOMAIN, CHECK constraints can be defined that limit the value represented by the DOMAIN. These constraints
work exactly like a CHECK constraint on a single column of a table as described below.
In a TABLE, a constraint takes three basic forms.
CHECK
A CHECK constraint consists of a <search condition> that must not be false (can be unknown) for each row
of the table. The <search condition> can reference all the columns of the current row, and if it contains a
<subquery>, other tables and views in the database (excluding its own table).
NOT NULL
A simple form of check constraint is the NOT NULL constraint, which applies to a single column.
UNIQUE
A UNIQUE constraint is based on an equality comparison of values of specific columns (taken together) of one row
with the same values from each of the other rows. The result of the comparison must never be true (can be false or
unknown). If a row of the table has NULL in any of the columns of the constraint, it conforms to the constraint. A
unique constraint on multiple columns (c1, c2, c3, ...) means that no two rows can have equal sets of values for these
columns unless at least one of the values is NULL. Each single column taken by itself can have repeated values in
different rows. The following rows, for example, satisfy a UNIQUE constraint on the two columns:

2    1
1    NULL
2    NULL
1    NULL
If SET DATABASE SQL UNIQUE NULLS FALSE has been set, then, unless all the values in the set of columns are null,
the non-null values are compared, and it is disallowed to insert identical rows that contain at least one non-null value.
PRIMARY KEY
A PRIMARY KEY constraint is equivalent to a UNIQUE constraint on one or more NOT NULL columns. Only one
PRIMARY KEY can be defined in each table.
FOREIGN KEY
A FOREIGN key constraint is based on an equality comparison between values of specific columns (taken together)
of each row with the values of the columns of a UNIQUE constraint on another table or the same table. The result
of the comparison must never be false (can be unknown). A special form of FOREIGN KEY constraint, based on its
CHECK clause, allows the result to be unknown only if the values for all columns are NULL. A FOREIGN key can
be declared only if a UNIQUE constraint exists on the referenced columns.
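A minimal sketch (the table names are hypothetical):

```sql
CREATE TABLE parent (id INTEGER PRIMARY KEY)
-- the FOREIGN KEY requires a UNIQUE or PRIMARY KEY constraint
-- on the referenced column, parent(id)
CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER,
    FOREIGN KEY (parent_id) REFERENCES parent (id)
)
```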
Constraints share a name space with assertions.
Assertions
An ASSERTION is a top-level schema object. It consists of a <search condition> that must not be false (can
be unknown). HyperSQL does not yet support assertions.
Assertions share a name-space with constraints.
Triggers
A TRIGGER is a child schema object that always belongs to a TABLE or a VIEW.
Each time a DELETE, UPDATE or INSERT is performed on the table or view, additional actions are taken by the
triggers that have been declared on the table or view.
Triggers are discussed in detail in Triggers chapter.
Routines
Routines are user-defined functions or procedures. The names and usage of functions and procedures are different. A
FUNCTION is a routine that can be referenced in many types of statements. A PROCEDURE is a routine that can be
referenced only in a CALL statement.
There is a separate name-space for routines.
Because of the possibility of overloading, each routine can have more than one name. The name of the routine is
the same for all overloaded variants, but each variant has a specific name, different from all other routine names and
specific names in the schema. The specific name can be specified in the routine definition statement. Otherwise it is
assigned by the engine. The specific name is used only for schema manipulation statements, which need to reference a
specific variant of the routine. For example, if a routine has two signatures, each signature has its own specific name.
This allows the user to drop one of the signatures while keeping the other.
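As a sketch (the names are illustrative), two overloaded variants can be given explicit specific names, so that one can later be dropped on its own:

```sql
CREATE FUNCTION area(r DOUBLE) RETURNS DOUBLE
  SPECIFIC area_circle
  RETURN 3.141592653589793 * r * r;

CREATE FUNCTION area(w DOUBLE, h DOUBLE) RETURNS DOUBLE
  SPECIFIC area_rectangle
  RETURN w * h;

-- drop only the single-argument variant, keeping the two-argument one
DROP SPECIFIC FUNCTION area_circle;
```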
Routines are discussed in detail in chapter SQL-Invoked Routines .
Indexes
Indexes are an implementation-defined extension to the SQL Standard. HyperSQL has a dedicated name-space for
indexes in each schema.
Synonyms
Synonyms are user-defined names that refer to other schema objects. Synonyms can be defined for TABLE, VIEW,
SEQUENCE, PROCEDURE and FUNCTION names and used in SELECT, UPDATE, CALL, etc. statements. They
cannot be used in DDL statements. Synonyms are defined in schemas, but they are used without a schema qualifier. When used,
a synonym is immediately translated to the target name and the target name is used in the actual statement. The access
privileges to the target object are checked.
CREATE SYNONYM REG FOR OTHER_SCHEMA.REGISTRATION_DETAIL_TABLE
SELECT R_ID, R_DATE FROM REG WHERE R_DATE > CURRENT_DATE - 3 DAY
A synonym cannot be the same as the name of any existing object in the schema.
Renaming Objects
RENAME
rename statement (HyperSQL)
<rename statement> ::= ALTER <object type> <name> RENAME TO <new name>
<object type> ::= CATALOG | SCHEMA | DOMAIN | TYPE | TABLE | CONSTRAINT | INDEX
| ROUTINE | SPECIFIC ROUTINE
<column rename statement> ::= ALTER TABLE <table name> ALTER COLUMN <name>
RENAME TO <new name>
This statement is used to rename an existing object. It is not part of the SQL Standard. The specified <name> is the
existing name, which can be qualified with a schema name, while the <new name> is the new name for the object.
Commenting Objects
COMMENT
comment statement (HyperSQL)
<comment statement> ::= COMMENT ON { TABLE | COLUMN | ROUTINE } <name> IS
<character string literal>
Adds a comment to the object metadata, which can later be read from an INFORMATION_SCHEMA view. This
command is not part of the SQL Standard. The strange syntax is due to compatibility with other database engines
that support the statement. The <name> is the name of a table, view, column or routine. The name of the column
consists of dot-separated <table name> . <column name>. The name of the table, view or routine can be
a simple name. All names can be qualified with a schema name. If there is already a comment on the object, the new
comment will replace it.
The comments appear in the results returned by JDBC DatabaseMetaData methods, getTables() and
getColumns(). The INFORMATION_SCHEMA.SYSTEM_COMMENTS view contains the comments. You can
query this view using the schema, table, and column names to retrieve the comments.
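For example (object names are illustrative, and the OBJECT_NAME column name of the view is an assumption here):

```sql
COMMENT ON TABLE registration IS 'per-event registration records';
COMMENT ON COLUMN registration.r_date IS 'date the registration was received';

-- read the comments back from the information schema
SELECT * FROM INFORMATION_SCHEMA.SYSTEM_COMMENTS
  WHERE OBJECT_NAME = 'REGISTRATION';
```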
Schema Creation
CREATE SCHEMA
schema definition
The CREATE_SCHEMA or DBA role is required in order to create a schema. A schema can be created with or without
schema objects. Schema objects can always be added after creating the schema, or existing ones can be dropped.
Within the <schema definition> statement, all schema object creation takes place inside the newly created
schema. Therefore, if a schema name is specified for the schema objects, the name must match that of the new schema.
In addition to statements for creating schema objects, the statement can include instances of <grant statement>
and <role definition>. This is a curious aspect of the SQL standard, as these elements do not really belong
to schema creation.
<schema definition> ::= CREATE SCHEMA <schema name clause> [ <schema character
set specification> ] [ <schema element>... ]
<schema name clause> ::= <schema name> | AUTHORIZATION <authorization identifier>
| <schema name> AUTHORIZATION <authorization identifier>
If the name of the schema is specified simply as <schema name>, then the AUTHORIZATION is the current user.
Otherwise, the specified <authorization identifier> is used as the AUTHORIZATION for the schema.
If <schema name> is omitted, then the name of the schema is the same as the specified <authorization
identifier>.
<schema element> ::= <table definition> | <view definition> | <domain definition>
| <character set definition> | <collation definition> | <transliteration
definition> | <assertion definition> | <trigger definition> | <user-defined
type definition> | <user-defined cast definition> | <user-defined ordering
definition> | <transform definition> | <schema routine> | <sequence generator
definition> | <grant statement> | <role definition>
An example of the statement is given below. Note that a single semicolon appears at the end. There should be no
semicolon between the statements:
CREATE SCHEMA ACCOUNTS AUTHORIZATION DBA
CREATE TABLE AB(A INTEGER, ...)
CREATE TABLE CD(C CHAR(10), ...)
CREATE VIEW VI AS SELECT ...
GRANT SELECT ON AB TO PUBLIC
GRANT SELECT ON CD TO JOE;
It is not really necessary to create a schema and all its objects as one command. The schema can be created first, and
its objects can be created one by one.
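The piecewise equivalent of the combined example above, as separate statements, each terminated with its own semicolon:

```sql
CREATE SCHEMA ACCOUNTS AUTHORIZATION DBA;
CREATE TABLE ACCOUNTS.AB(A INTEGER);
CREATE TABLE ACCOUNTS.CD(C CHAR(10));
GRANT SELECT ON ACCOUNTS.AB TO PUBLIC;
GRANT SELECT ON ACCOUNTS.CD TO JOE;
```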
DROP SCHEMA
drop schema statement
<drop schema statement> ::= DROP SCHEMA [ IF EXISTS ] <schema name> [ IF EXISTS ]
<drop behavior>
This command destroys an existing schema. If <drop behavior> is RESTRICT, the schema must be empty,
otherwise an error is raised. If CASCADE is specified as <drop behavior>, then all the objects contained in the
schema are destroyed with a CASCADE option.
Table Creation
CREATE TABLE
table definition
<table definition> ::= CREATE [ { <table scope> | <table type> } ] TABLE [ IF
NOT EXISTS ] <table name> <table contents source> [ ON COMMIT { PRESERVE |
DELETE } ROWS ]
<table scope> ::= { GLOBAL | LOCAL } TEMPORARY
<table type> ::= MEMORY | CACHED
<table contents source> ::= <table element list> | <as subquery clause>
<table element list> ::= <left paren> <table element> [ { <comma> <table
element> }... ] <right paren>
<table element> ::= <column definition> | <table constraint definition> | <like
clause>
like clause
A <like clause> copies all column definitions from another table into the newly created table. Its three options
indicate if the <default clause>, <identity column specification> and <generation clause>
associated with the column definitions are copied or not. If an option is not specified, it defaults to EXCLUDING. The
<generation clause> refers to columns that are generated by an expression but not to identity columns. All
NOT NULL constraints are copied with the original columns, other constraints are not. The <like clause> can
be used multiple times, allowing the new table to have copies of the column definitions of one or more other tables.
CREATE TABLE t (id INTEGER PRIMARY KEY, LIKE atable INCLUDING DEFAULTS EXCLUDING IDENTITY)
An <as subquery clause> used in table definition creates a table based on a <table subquery>. This kind
of table definition is similar to a view definition. If WITH DATA is specified, then the new table will contain the rows
of data returned by the <table subquery>.
CREATE TABLE t (a, b, c) AS (SELECT * FROM atable) WITH DATA
column definition
A column definition consists of, at a minimum, a <column name> and in most cases a <data type> or <domain name>.
The other elements of <column definition> are optional. Each <column name> in a table
is unique.
<column definition> ::= <column name> [ <data type or domain name> ]
[ <default clause> | <identity column specification> | <identity column sequence
specification> | <generation clause> ] [ <update clause> ] [ <column constraint
definition>... ] [ <collate clause> ]
<data type or domain name> ::= <data type> | <domain name>
<column constraint definition> ::= [ <constraint name definition> ] <column
constraint> [ <constraint characteristics> ]
<column constraint> ::= NOT NULL | <unique specification> | <references
specification> | <check constraint definition>
then generated by the sequence generators according to its rules. An identity column may or may not be the primary
key. Example below:
CREATE TABLE t (id INTEGER GENERATED ALWAYS AS IDENTITY(START WITH 100), name VARCHAR(20)
PRIMARY KEY)
The <identity column sequence specification> is used when the column values are based on a named
SEQUENCE object (which must already exist). Example below:
CREATE TABLE t (id INTEGER GENERATED BY DEFAULT AS SEQUENCE s, name VARCHAR(20) PRIMARY KEY)
Inserting rows is done in the same way for a named or unnamed sequence generator. In both cases, if no value is
specified to be inserted, or the DEFAULT keyword is used for the column, the value is generated by the sequence
generator. If a value is specified, this value is used if the column definition has the BY DEFAULT specification.
If the column definition has the ALWAYS specification, a value can be specified, but OVERRIDING SYSTEM
VALUE must also be specified in the INSERT statement.
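A short sketch of the three cases (the table is hypothetical):

```sql
CREATE TABLE events (id INTEGER GENERATED ALWAYS AS IDENTITY, name VARCHAR(20));

INSERT INTO events (name) VALUES ('first');                -- id is generated
INSERT INTO events (id, name) VALUES (DEFAULT, 'second');  -- id is generated
-- with ALWAYS, an explicit value requires OVERRIDING SYSTEM VALUE:
INSERT INTO events (id, name) OVERRIDING SYSTEM VALUE VALUES (100, 'third');
```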
The other way in which the column value is autogenerated is by using the values of other columns in the same row.
This method is often used to create an index on a value that is derived from other column values.
<generation clause> ::= GENERATED ALWAYS AS <generation expression>
<generation expression> ::= <left paren> <value expression> <right paren>
The <generation clause> is used for special columns which represent values based on the values held in other
columns in the same row. The <value expression> must reference only other, non-generated, columns of the
table in the same row. Any function used in the expression must be deterministic and must not access SQL-data. No
<query expression> is allowed. When <generation clause> is used, <data type> must be specified.
A generated column can be part of a foreign key or unique constraints or a column of an index. This capability is the
main reason for using generated columns. A generated column may contain a formula that computes a value based
on the values of other columns. Fast searches of the computed value can be performed when an index is declared on
the generated column. Or the computed values can be declared to be unique, using a UNIQUE constraint on the table.
The computed column cannot be overridden by user supplied values. When a row is updated and the column values
change, the generated columns are computed with the new values.
When a row is inserted into a table, or an existing row is updated, no value except DEFAULT can be specified for a
generated column. In the example below, data is inserted into the non-generated columns and the generated column
will contain 'Felix the Cat' or 'Pink Panther'.
CREATE TABLE t (id INTEGER PRIMARY KEY,
firstname VARCHAR(20),
lastname VARCHAR(20),
fullname VARCHAR(40) GENERATED ALWAYS AS (firstname || ' ' || lastname))
INSERT INTO t (id, firstname, lastname) VALUES (1, 'Felix', 'the Cat')
INSERT INTO t (id, firstname, lastname, fullname) VALUES (2, 'Pink', 'Panther', DEFAULT)
DEFAULT
default clause
A default clause can be used if GENERATED is not specified. If a column has a <default clause> then it is
possible to insert a row into the table without specifying a value for the column.
<default clause> ::= DEFAULT <default option>
UPDATE is terminated with an exception. The RESTRICT option is similar and works exactly the same in the absence of
deferrable constraints (which are not allowed by HyperSQL). The other three options, CASCADE, SET NULL and
SET DEFAULT, all allow the DELETE or UPDATE statement to complete. With DELETE statements the CASCADE
option results in the referencing rows being deleted. With UPDATE statements, the changes to the values of the
referenced columns are copied to the referencing rows. With both DELETE and UPDATE statements, the SET NULL
option results in the columns of the referencing rows being set to NULL. Similarly, the SET DEFAULT option results
in the columns of the referencing rows being set to their default values.
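These actions can be sketched as follows (the names are illustrative):

```sql
CREATE TABLE invoices (id INTEGER PRIMARY KEY);
CREATE TABLE invoice_lines (
  invoice_id INTEGER,
  line_no    INTEGER,
  CONSTRAINT fk_lines FOREIGN KEY (invoice_id) REFERENCES invoices (id)
    ON DELETE CASCADE
    ON UPDATE SET NULL
);

-- deleting an invoice also deletes its matching invoice_lines rows
DELETE FROM invoices WHERE id = 10;
```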
CHECK
check constraint definition
<check constraint definition> ::= CHECK <left paren> <search condition> <right
paren>
A CHECK constraint can exist for a TABLE or for a DOMAIN. The <search condition> evaluates to an SQL
BOOLEAN value for each row of the table. Within the <search condition> all columns of the table row can
be referenced. For all rows of the table, the <search condition> evaluates to TRUE or UNKNOWN. When a
new row is inserted, or an existing row is updated, the <search condition> is evaluated and if it is FALSE,
the insert or update fails.
A CHECK constraint for a DOMAIN is similar. In its <search condition>, the term VALUE is used to represents
the value to which the DOMAIN applies.
CREATE TABLE t (a VARCHAR(20) CHECK (a IS NOT NULL AND CHARACTER_LENGTH(a) > 2))
The search condition of a CHECK constraint cannot contain any function that is not deterministic. A check constraint is
a data integrity constraint, therefore it must hold with respect to the rest of the data in the database. It cannot use values
that are temporal or ephemeral. For example, CURRENT_USER returns different values depending on who is using the
database, and CURRENT_DATE changes from day to day. Some temporal expressions are retrospectively
deterministic and are allowed in check constraints. For example, CHECK (VALUE < CURRENT_DATE) is valid,
because CURRENT_DATE will not move backwards in time, but CHECK (VALUE > CURRENT_DATE) is not
acceptable.
If you want to enforce the condition that a date value that is inserted into the database belongs to the future (at the time
of insertion), or any similar constraint, then use a TRIGGER with the desired condition.
DROP TABLE
drop table statement
<drop table statement> ::= DROP TABLE [ IF EXISTS ] <table name> [ IF EXISTS ]
<drop behavior>
Destroy a table. The default drop behaviour is RESTRICT and will cause the statement to fail if there is any view,
routine or foreign key constraint that references the table. If <drop behavior> is CASCADE, all schema
objects that reference the table are dropped. Referencing views are dropped. In the case of foreign key constraints that
reference the table, the constraint is dropped, rather than the TABLE that contains it.
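For example (the names are illustrative):

```sql
DROP TABLE IF EXISTS invoice_lines;   -- no error if the table does not exist
DROP TABLE invoices CASCADE;          -- also drops referencing views and FK constraints
```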
Table Manipulation
Table manipulation statements change the attributes of tables or modify the objects such as columns and constraints.
SET TABLE CLUSTERED
set table clustered property
<set table clustered statement> ::= SET TABLE <table name> CLUSTERED ON <left
paren> <column name list> <right paren>
Set the row clustering property of a table. The <column name list> is a list of column names that must correspond to
the columns of an existing PRIMARY KEY, UNIQUE or FOREIGN KEY index, or to the columns of a user defined
index. This statement is only valid for CACHED or TEXT tables.
Table rows are stored in the database files as they are created, sometimes at the end of the file, sometimes in the
middle of the file. After a CHECKPOINT DEFRAG or SHUTDOWN COMPACT, the rows are reordered according
to the primary key of the table, or if there is no primary key, in no particular order.
When several consecutive rows of a table are retrieved during query execution it is more efficient to retrieve rows that
are stored adjacent to one another. After executing this command, nothing changes until a CHECKPOINT DEFRAG
or SHUTDOWN COMPACT or SHUTDOWN SCRIPT is performed. After these operations, the rows are stored in
the specified clustered order. The property is stored in the database and applies to all future reordering of rows. Note
that if extensive inserts or updates are performed on the tables, the rows will get out of order until the next reordering.
SET TABLE TYPE
set table type
<set table type statement> ::= SET TABLE <table name> TYPE { MEMORY | CACHED }
Changes the storage type of an existing table between CACHED and MEMORY types.
Only a user with the DBA role can execute this statement.
SET TABLE writeability
set table write property
<set table read only statement> ::= SET TABLE <table name> { READ ONLY | READ
WRITE }
Set the writeability property of a table. Tables are writeable by default. This statement can be used to change the
property between READ ONLY and READ WRITE. This is a feature of HyperSQL.
SET TABLE SOURCE
set table source statement
<set table source statement> ::= SET TABLE <table name> SOURCE <file and options>
[DESC]
<file and options>::= <doublequote> <file path> [<semicolon> <property>...]
<doublequote>
Set the text source for a text table. This statement cannot be used for tables that are not defined as TEXT TABLE.
Supported Properties
quoted = { true | false }

encoding = <encoding name>
    character encoding for text and character fields, for example, encoding=UTF-8.
    UTF-16 cannot be used.

fs = <unquoted character>
    field separator

vs = <unquoted character>
    varchar separator

The following escapes can be used in separator characters:

\semi
    semicolon
\quote
    quote
\space
    space character
\apos
    apostrophe
\n
    newline
\r
    carriage return
\t
    tab
\\
    backslash
\u####
    the Unicode character with the given hexadecimal code
In the example below, the text source of the table is set to "myfile", the field separator to the pipe symbol, and the
varchar separator to the tilde symbol.
SET TABLE mytable SOURCE 'myfile;fs=|;vs=~'
Only a user with the DBA role can execute this statement.
SET TABLE SOURCE HEADER
set table source header statement
<set table source header statement> ::= SET TABLE <table name> SOURCE HEADER
<header string>
Set the header for the text source for a text table. If this command is used, the <header string> is used as the
first line of the source file of the text table. This line is not part of the table data. Only a user with the DBA role can
execute this statement.
SET TABLE SOURCE on-off
set table source on-off statement
<set table source on-off statement> ::= SET TABLE <table name> SOURCE { ON | OFF }
Attach or detach a text table from its text source. This command does not change the properties or the name of the file
that is the source of a text table. When OFF is specified, the command detaches the table from its source and closes
the file for the source. In this state, it is not possible to read or write to the table. This allows the user to replace the
file with a different file, or delete it. When ON is specified, the source file is read. Only a user with the DBA role
can execute this statement.
ALTER TABLE
alter table statement
<alter table statement> ::= ALTER TABLE <table name> <alter table action>
<alter table action> ::= <add column definition> | <alter column definition>
| <drop column definition> | <add table constraint definition> | <drop table
constraint definition>
Change the definition of a table. Specific types of this statement are covered below.
ADD COLUMN
add column definition
<add column definition> ::= ADD [ COLUMN ] <column definition> [ BEFORE <other
column name> ]
Add a column to an existing table. The <column definition> is specified the same way as it is used in <table
definition>. HyperSQL allows the use of [ BEFORE <other column name> ] to specify at which position
the new column is added to the table.
If the table contains rows, the new column must have a <default clause> or use one of the forms of
GENERATED. The column values for each row are then filled with the result of the <default clause> or the
generated value.
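For example (the names are illustrative):

```sql
-- add a column with a default, placed before the name column
ALTER TABLE customers ADD COLUMN created_on DATE DEFAULT CURRENT_DATE BEFORE name;
```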
DROP COLUMN
drop column definition
<drop column definition> ::= DROP [ COLUMN ] <column name> <drop behavior>
Destroy a column of a base table. The <drop behavior> is either RESTRICT or CASCADE. If the column is
referenced in a table constraint that references other columns as well as this column, or if the column is referenced
in a VIEW, or the column is referenced in a TRIGGER, then the statement will fail if RESTRICT is specified. If
CASCADE is specified, then any CONSTRAINT, VIEW or TRIGGER object that references the column is dropped
with a cascading effect.
ADD CONSTRAINT
add table constraint definition
<add table constraint definition> ::= ADD <table constraint definition>
Add a constraint to a table. The existing rows of the table must conform to the added constraint, otherwise the statement
will not succeed.
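For example (the names are illustrative):

```sql
ALTER TABLE customers ADD CONSTRAINT uq_customers_email UNIQUE (email);
```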
DROP CONSTRAINT
drop table constraint definition
<drop table constraint definition> ::= DROP CONSTRAINT <constraint name> <drop
behavior>
Destroy a constraint on a table. The <drop behavior> has an effect only on UNIQUE and PRIMARY KEY
constraints. If such a constraint is referenced by a FOREIGN KEY constraint, the FOREIGN KEY constraint will be
dropped if CASCADE is specified. If the columns of such a constraint are used in a GROUP BY clause in the query
expression of a VIEW or another kind of schema object, and a functional dependency relationship exists between these
columns and the other columns in that query expression, then the VIEW or other schema object will be dropped when
CASCADE is specified.
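For example (the names are illustrative); with CASCADE, any foreign key that references the unique constraint is dropped with it:

```sql
ALTER TABLE customers DROP CONSTRAINT uq_customers_email CASCADE;
```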
ALTER COLUMN
alter column definition
<alter column definition> ::= ALTER [ COLUMN ] <column name> <alter column
action>
<alter column action> ::= <set column default clause> | <drop column default
clause> | <alter column data type clause> | <alter identity column specification>
| <alter column nullability> | <alter column name> | <add column identity
specification> | <drop column identity specification>
Change a column and its definition. Specific types of this statement are covered below. See also the RENAME
statement above.
SET DEFAULT
set column default clause
<set column default clause> ::= SET <default clause>
Set the default clause for a column. This can be used if the column is not defined as GENERATED.
DROP DEFAULT
drop column default clause
<drop column default clause> ::= DROP DEFAULT
Drop the default clause from a column.
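For example (the names are illustrative):

```sql
ALTER TABLE customers ALTER COLUMN country SET DEFAULT 'GB';
ALTER TABLE customers ALTER COLUMN country DROP DEFAULT;
```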
SET DATA TYPE
alter column data type clause
<alter column data type clause> ::= SET DATA TYPE <data type>
Change the declared type of a column. The latest SQL Standard allows only changes to type properties such as
maximum length, precision, or scale, and only changes that cause the property to enlarge. HyperSQL allows changing
the type if all the existing values can be cast into the new type without string truncation or loss of significant digits.
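For example, enlarging the maximum length of a character column (the names are illustrative):

```sql
ALTER TABLE customers ALTER COLUMN name SET DATA TYPE VARCHAR(100);
```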
alter column add identity generator
alter column add identity generator
<add column identity generator> ::= <identity column specification>
Adds an identity specification to the column. The type of the column must be an integral type and the existing values
must not include nulls. This option is specific to HyperSQL.
ALTER TABLE mytable ALTER COLUMN id GENERATED ALWAYS AS IDENTITY (START WITH 20000)
DROP GENERATED
drop column identity generator
<drop column identity specification> ::= DROP GENERATED
Removes the identity generator from a column. After executing this statement, the column values are no longer
generated automatically. This option is specific to HyperSQL.
ALTER TABLE mytable ALTER COLUMN id DROP GENERATED
Some views are updatable. As covered elsewhere, an updatable view is based on a single table or updatable view.
For updatable views, the optional CHECK OPTION clause can be specified. If this option is specified, then when a row
of the view is updated or a new row is inserted into the view, the row must contain values such that it would still
be included in the view after the change. If WITH CASCADED CHECK OPTION is specified, and the <query
expression> of the view references another view, then the search condition of the underlying view must also be
satisfied by the update or insert operation.
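A sketch of CHECK OPTION (the names are illustrative):

```sql
CREATE VIEW cheap_items AS
  SELECT id, price FROM items WHERE price < 10
  WITH CHECK OPTION;

-- this insert would fail: the new row would not be visible through the view
-- INSERT INTO cheap_items VALUES (1, 25);
```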
DROP VIEW
drop view statement
<drop view statement> ::= DROP VIEW [ IF EXISTS ] <table name> [ IF EXISTS ]
<drop behavior>
Destroy a view. The <drop behavior> is similar to dropping a table.
ALTER VIEW
alter view statement
<alter view statement> ::= ALTER VIEW <table name> <view specification> AS
<query expression> [ WITH [ CASCADED | LOCAL ] CHECK OPTION ]
Alter a view. The statement is otherwise identical to CREATE VIEW. The new definition replaces the old. If there
are database objects such as routines or views that reference the view, then these objects are recompiled with the new
view definition. If the new definition is not compatible, the statement fails.
ALTER DOMAIN
alter domain statement
<alter domain statement> ::= ALTER DOMAIN <domain name> <alter domain action>
<alter domain action> ::= <set domain default clause> | <drop domain default
clause> | <add domain constraint definition> | <drop domain constraint
definition>
Trigger Creation
CREATE TRIGGER
trigger definition
<trigger definition> ::= CREATE TRIGGER <trigger name> <trigger action time>
<trigger event> ON <table name> [ REFERENCING <transition table or variable
list> ] <triggered action>
<trigger action time> ::= BEFORE | AFTER | INSTEAD OF
Routine Creation
schema routine
SQL-invoked routine
<SQL-invoked routine> ::= <schema routine>
<schema routine> ::= <schema procedure> | <schema function>
<schema procedure> ::= CREATE <SQL-invoked procedure>
<schema function> ::= CREATE <SQL-invoked function>
<SQL-invoked procedure> ::= PROCEDURE <schema qualified routine name> <SQL
parameter declaration list> <routine characteristics> <routine body>
<SQL-invoked function> ::= { <function specification> | <method specification
designator> } <routine body>
<SQL parameter declaration list> ::= <left paren> [ <SQL parameter declaration>
[ { <comma> <SQL parameter declaration> }... ] ] <right paren>
<SQL parameter declaration> ::= [ <parameter mode> ] [ <SQL parameter name> ]
<parameter type> [ RESULT ]
<parameter mode> ::= IN | OUT | INOUT
<parameter type> ::= <data type>
<function specification> ::= FUNCTION <schema qualified routine name>
<SQL parameter declaration list> <returns clause> <routine characteristics>
[ <dispatch clause> ]
<method specification designator> ::= SPECIFIC METHOD <specific method name>
| [ INSTANCE | STATIC | CONSTRUCTOR ] METHOD <method name> <SQL parameter
declaration list> [ <returns clause> ] FOR <schema-resolved user-defined type
name>
<routine characteristics> ::= [ <routine characteristic>... ]
<routine characteristic> ::= <language clause> | <parameter style clause> |
SPECIFIC <specific name> | <deterministic characteristic> | <SQL-data access
indication> | <null-call clause> | <returned result sets characteristic> |
<savepoint level indication>
<savepoint level indication> ::= NEW SAVEPOINT LEVEL | OLD SAVEPOINT LEVEL
<returned result sets characteristic> ::= DYNAMIC RESULT SETS <maximum returned
result sets>
<parameter style clause> ::= PARAMETER STYLE <parameter style>
EXTERNAL NAME <external routine name>
Alter the characteristic and the body of an SQL-invoked routine. If RESTRICT is specified and the routine is already
used in a different routine or view definition, an exception is raised. Altering the routine changes the implementation
without changing the parameters. Defining recursive SQL/PSM SQL functions is only possible by altering a non-recursive routine body. An example is given in the SQL-Invoked Routines chapter.
An example is given below for a function defined as a Java method, then redefined as an SQL function.
CREATE FUNCTION zero_pad(x BIGINT, digits INT, maxsize INT)
RETURNS CHAR VARYING(100)
SPECIFIC zero_pad_01
NO SQL DETERMINISTIC
LANGUAGE JAVA
EXTERNAL NAME 'CLASSPATH:org.hsqldb.lib.StringUtil.toZeroPaddedString';
ALTER SPECIFIC ROUTINE zero_pad_01
LANGUAGE SQL
BEGIN ATOMIC
DECLARE str VARCHAR(128);
SET str = CAST(x AS VARCHAR(128));
  SET str = SUBSTRING('0000000000000' FROM 1 FOR digits - CHAR_LENGTH(str)) || str;
  RETURN str;
END
DROP
drop routine statement
<drop routine statement> ::= DROP <specific routine designator> <drop behavior>
Destroy an SQL-invoked routine.
Sequence Creation
CREATE SEQUENCE
sequence generator definition
<sequence generator definition> ::= CREATE SEQUENCE [ IF NOT EXISTS ] <sequence
generator name> [ <sequence generator options> ]
<sequence generator options> ::= <sequence generator option> ...
<sequence generator option> ::= <sequence generator data type option> | <common
sequence generator options>
<common sequence generator options> ::= <common sequence generator option> ...
<common sequence generator option> ::= <sequence generator start with option>
| <basic sequence generator option>
<basic sequence generator option> ::= <sequence generator increment by option>
| <sequence generator maxvalue option> | <sequence generator minvalue option>
| <sequence generator cycle option>
<sequence generator data type option> ::= AS <data type>
<sequence generator start with option> ::= START WITH <sequence generator start
value>
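For example (the names are illustrative); values are retrieved from a sequence with the NEXT VALUE FOR expression:

```sql
CREATE SEQUENCE seq_invoice AS BIGINT START WITH 1000 INCREMENT BY 10;
INSERT INTO invoices (id) VALUES (NEXT VALUE FOR seq_invoice);
```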
"French" NO PAD, results in a French collation without padding. This collation can be used for sorting or for
individual columns of tables.
DROP COLLATION
drop collation statement
<drop collation statement> ::= DROP COLLATION <collation name> <drop behavior>
Destroy a collation. If the <drop behavior> is CASCADE, then all references to the collation revert to the default
collation that would be in force if the dropped collation was not specified.
CREATE TRANSLATION
transliteration definition
<transliteration definition> ::= CREATE TRANSLATION <transliteration name> FOR
<source character set specification> TO <target character set specification>
FROM <transliteration source>
<source character set specification> ::= <character set specification>
<target character set specification> ::= <character set specification>
<transliteration source> ::= <existing transliteration name> | <transliteration
routine>
<existing transliteration name> ::= <transliteration name>
<transliteration routine> ::= <specific routine designator>
Define a character transliteration. This feature may be supported in a future version of HyperSQL.
DROP TRANSLATION
drop transliteration statement
<drop transliteration statement> ::= DROP TRANSLATION <transliteration name>
Destroy a character transliteration. This feature may be supported in a future version of HyperSQL.
CREATE ASSERTION
assertion definition
<assertion definition> ::= CREATE ASSERTION <constraint name> CHECK <left paren>
<search condition> <right paren> [ <constraint characteristics> ]
Specify an integrity constraint. This feature may be supported in a future version of HyperSQL.
DROP ASSERTION
drop assertion statement
<drop assertion statement> ::= DROP ASSERTION <constraint name> [ <drop
behavior> ]
Visibility of Information
Users with the special ADMIN role can see the full information on all database objects. Ordinary, non-admin users
can see information on the objects for which they have some privileges.
The rows returned to a non-admin user exclude objects on which the user has no privilege. The extent of the information
in visible rows varies with the user's privilege. For example, the owner of a VIEW can see the text of the view query,
but a user of the view cannot see this text. When a user cannot see the contents of some column, null is returned for
that column.
Name Information
The names of database objects are stored in hierarchical views. The top level view is
INFORMATION_SCHEMA_CATALOG_NAME.
Below this level, there is a group of views that covers authorizations and roles, without referencing schema objects.
These are AUTHORIZATIONS and ADMINISTRABLE_ROLE_AUTHORIZATIONS.
Also below the top level, there is the SCHEMATA view, which lists the schemas in the catalog.
The views that refer to top-level schema objects are divided by object type. These include
ASSERTIONS, CHARACTER_SETS, COLLATIONS, DOMAINS, ROUTINES, SEQUENCES, TABLES,
USER_DEFINED_TYPES and VIEWS.
There are views that refer to objects that are dependent on the top-level schema objects. These include COLUMNS
and PARAMETERS, views for constraints, including CHECK_CONSTRAINTS, REFERENTIAL_CONSTRAINTS
and TABLE_CONSTRAINTS, and finally the TRIGGERS view.
The usage of each type of top-level object by another is covered by several views. For example
TRIGGER_SEQUENCE_USAGE or ROUTINE_TABLE_USAGE.
Several other views list the individual privileges owned or granted to each AUTHORIZATION. For example
ROLE_ROUTINE_GRANTS or TABLE_PRIVILEGES.
Product Information
A group of views, including SQL_IMPLEMENTATION_INFO, SQL_FEATURES, SQL_SIZING and others cover
the capabilities of HyperSQL in detail. These views hold static data and can be explored even when the database is
empty.
Operations Information
Several HyperSQL custom views cover the current state of operation of the database. These include the
SYSTEM_CACHEINFO, SYSTEM_SESSIONINFO and SYSTEM_SESSIONS views.
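As a sketch, these views can be queried like ordinary tables; an admin user sees the full picture, other users only their own session:

```sql
-- Open sessions and their transaction state (full list for DBA users,
-- otherwise only the current session)
SELECT * FROM information_schema.system_sessions;

-- Settings and properties of the current session
SELECT * FROM information_schema.system_sessioninfo;
```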
ASSERTIONS
Empty view as ASSERTION objects are not yet supported.
AUTHORIZATIONS
Top level information on USER and ROLE objects in the database
CHARACTER_SETS
List of supported CHARACTER SET objects
CHECK_CONSTRAINTS
Additional information specific to each CHECK constraint, including the search condition
CHECK_CONSTRAINT_ROUTINE_USAGE
Information on FUNCTION objects referenced in CHECK constraints search conditions
COLLATIONS
Information on collations supported by the database.
COLUMNS
Information on COLUMN objects in TABLE and VIEW definitions
COLUMN_COLUMN_USAGE
Information on references to COLUMN objects from other, GENERATED, COLUMN objects
COLUMN_DOMAIN_USAGE
Information on DOMAIN objects used in type definition of COLUMN objects
COLUMN_PRIVILEGES
Information on privileges on each COLUMN object, granted to different ROLE and USER authorizations
COLUMN_UDT_USAGE
Information on distinct TYPE objects used in type definition of COLUMN objects
CONSTRAINT_COLUMN_USAGE
Information on COLUMN objects referenced by CONSTRAINT objects in the database
CONSTRAINT_TABLE_USAGE
Information on TABLE and VIEW objects referenced by CONSTRAINT objects in the database
DATA_TYPE_PRIVILEGES
Information on top level schema objects of various kinds that reference TYPE objects
DOMAINS
Top level information on DOMAIN objects in the database.
DOMAIN_CONSTRAINTS
Information on CONSTRAINT definitions used for DOMAIN objects
ELEMENT_TYPES
Information on the type of elements of ARRAY used in database columns or routine parameters and return values
ENABLED_ROLES
Information on ROLE privileges enabled for the current session
INFORMATION_SCHEMA_CATALOG_NAME
Information on the single CATALOG object of the database
KEY_COLUMN_USAGE
Information on COLUMN objects of tables that are used by PRIMARY KEY, UNIQUE and FOREIGN KEY
constraints
PARAMETERS
Information on parameters of each FUNCTION or PROCEDURE
REFERENTIAL_CONSTRAINTS
Additional information on FOREIGN KEY constraints, including triggered action and name of UNIQUE constraint
they refer to
ROLE_AUTHORIZATION_DESCRIPTORS
ROLE_COLUMN_GRANTS
Information on privileges on COLUMN objects granted to or by the current session roles
ROLE_ROUTINE_GRANTS
Information on privileges on FUNCTION and PROCEDURE objects granted to or by the current session roles
ROLE_TABLE_GRANTS
Information on privileges on TABLE and VIEW objects granted to or by the current session roles
ROLE_UDT_GRANTS
Information on privileges on TYPE objects granted to or by the current session roles
ROLE_USAGE_GRANTS
Information on privileges on USAGE privileges granted to or by the current session roles
ROUTINE_COLUMN_USAGE
Information on COLUMN objects of different tables that are referenced in FUNCTION and PROCEDURE definitions
ROUTINE_JAR_USAGE
Information on JAR usage by Java language FUNCTION and PROCEDURE objects.
ROUTINE_PRIVILEGES
Information on EXECUTE privileges granted on PROCEDURE and FUNCTION objects
ROUTINE_ROUTINE_USAGE
Information on PROCEDURE and FUNCTION objects that are referenced in FUNCTION and PROCEDURE
definitions
ROUTINE_SEQUENCE_USAGE
Information on SEQUENCE objects that are referenced in FUNCTION and PROCEDURE definitions
ROUTINE_TABLE_USAGE
Information on TABLE and VIEW objects that are referenced in FUNCTION and PROCEDURE definitions
ROUTINES
Top level information on all PROCEDURE and FUNCTION objects in the database
SCHEMATA
Information on all the SCHEMA objects in the database
SEQUENCES
Information on SEQUENCE objects
SQL_FEATURES
List of all SQL:2008 standard features, including information on whether they are supported or not supported by
HyperSQL
SQL_IMPLEMENTATION_INFO
Information on name, capabilities and defaults of the database engine software.
SQL_PACKAGES
List of the SQL:2008 Standard packages, including information on whether they are supported or not supported by
HyperSQL
SQL_PARTS
List of the SQL:2008 Standard parts, including information on whether they are supported or not supported by
HyperSQL
SQL_SIZING
List of the SQL:2008 Standard maximum supported sizes for different features as supported by HyperSQL
SQL_SIZING_PROFILES
TABLES
Information on all TABLE and VIEW objects, including the INFORMATION_SCHEMA views themselves
TABLE_CONSTRAINTS
Information on all table level constraints, including PRIMARY KEY, UNIQUE, FOREIGN KEY and CHECK
constraints
TABLE_PRIVILEGES
Information on privileges on TABLE and VIEW objects owned or given to the current user
TRANSLATIONS
TRIGGERED_UPDATE_COLUMNS
Information on columns that have been used in TRIGGER definitions in the ON UPDATE clause
TRIGGERS
Top level information on the TRIGGER definitions in the database
TRIGGER_COLUMN_USAGE
Information on COLUMN objects that have been referenced in the body of TRIGGER definitions
TRIGGER_ROUTINE_USAGE
Information on FUNCTION and PROCEDURE objects that have been used in TRIGGER definitions
TRIGGER_SEQUENCE_USAGE
Information on SEQUENCE objects that have been referenced in TRIGGER definitions
TRIGGER_TABLE_USAGE
Information on TABLE and VIEW objects that have been referenced in TRIGGER definitions
USAGE_PRIVILEGES
Information on USAGE privileges granted to or owned by the current user
USER_DEFINED_TYPES
Top level information on TYPE objects in the database
VIEWS
Top Level information on VIEW objects in the database
VIEW_COLUMN_USAGE
Information on COLUMN objects referenced in the query expressions of the VIEW objects
VIEW_ROUTINE_USAGE
Information on FUNCTION and PROCEDURE objects that have been used in the query expressions of the VIEW
objects
VIEW_TABLE_USAGE
Information on TABLE and VIEW objects that have been referenced in the query expressions of the VIEW objects
SYSTEM_SCHEMAS
For DatabaseMetaData.getSchemas
SYSTEM_SEQUENCES
SYSTEM_SESSIONINFO
Information on the settings and properties of the current session.
SYSTEM_SESSIONS
Information on all open sessions in the database (when used by a DBA user), or just the current session. Includes the
current transaction state of each session.
SYSTEM_TABLES
Information on tables and views for DatabaseMetaData.getTables
SYSTEM_TABLESTATS
Information on table spaces and cardinality for each table
SYSTEM_TABLETYPES
For DatabaseMetaData.getTableTypes
SYSTEM_TEXTTABLES
Information on the settings of each text table.
SYSTEM_TYPEINFO
For DatabaseMetaData.getTypeInfo
SYSTEM_UDTS
For DatabaseMetaData.getUDTs
SYSTEM_USERS
Contains the list of all users in the database (when used by a DBA user), or just the current user.
SYSTEM_VERSIONCOLUMNS
For DatabaseMetaData.getVersionColumns
Overview
Text Table support for HSQLDB was originally developed by Bob Preston independently from the Project.
Subsequently Bob joined the Project and incorporated this feature into version 1.7.0, with a number of enhancements,
especially the use of SQL commands for specifying the files used for Text Tables.
In a nutshell, Text Tables are CSV or other delimited files treated as SQL tables. Any ordinary CSV or other delimited
file can be used. The full range of SQL queries can be performed on these files, including SELECT, INSERT, UPDATE
and DELETE. Indexes and unique constraints can be set up, and foreign key constraints can be used to enforce
referential integrity between Text Tables themselves or with conventional tables.
The delimited file can be created by the engine, or an existing file can be used.
HyperSQL with Text Table support is the only comprehensive solution that employs the power of SQL and the
universal reach of JDBC to handle data stored in text files.
The Implementation
Definition of Tables
Text Tables are defined similarly to conventional tables with the added TEXT keyword.
CREATE TEXT TABLE <tablename> (<column definition> [<constraint definition>])
The table is at first empty and cannot be written to. An additional SET command specifies the file and the separator
character that the Text table uses. It assigns the file to the table.
SET TABLE <tablename> SOURCE <quoted_filename_and_options> [DESC]
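Putting the two statements together, a minimal text table might be set up as follows (table, columns and file name are hypothetical):

```sql
-- Define the table, then attach it to a CSV file
CREATE TEXT TABLE customers (id INTEGER PRIMARY KEY, name VARCHAR(100));
SET TABLE customers SOURCE "customers.csv";
```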
Text Tables
Configuration
The default field separator is a comma (,). A different field separator can be specified within the SET TABLE SOURCE
statement. For example, to change the field separator for the table mytable to a vertical bar:
SET TABLE mytable SOURCE "myfile;fs=|"
Since HSQLDB treats CHAR and VARCHAR strings the same, the ability to assign a different separator to the latter
is provided. When a different separator is assigned to a VARCHAR, it will terminate any CSV field of that type. For
example, if the first field is CHAR, and the second field VARCHAR, and the separator fs has been defined as the
pipe (|) and vs as the period (.) then the data in the CSV file for a row will look like:
First field data|Second field data.Third field data
This facility in effect offers an extra, special separator which can be used in addition to the global separator. The
following example shows how to change the default separator to the pipe (|), VARCHAR separator to the period (.)
within a SET TABLE SOURCE statement:
SET TABLE mytable SOURCE "myfile;fs=|;vs=."
Special indicators can be used for separator characters that cannot be written directly:
\semi
semicolon
\quote
quote character
\space
space character
\apos
apostrophe
\n
newline
\r
carriage return
\t
tab
\\
backslash
\u####
a Unicode character specified in hexadecimal
Furthermore, HSQLDB provides csv file support with three additional boolean options: ignore_first, quoted
and all_quoted. The ignore_first option (default false) tells HSQLDB to ignore the first line in a file. This
option is used when the first line of the file contains column headings or other title information. The first line consists
of the characters before the first end-of-line symbol (line feed, carriage return, etc). It is simply set aside and not
processed. The all_quoted option (default false) tells the program that it should use quotes around all character
fields when writing to the source file. The quoted option (default true) uses quotes only when necessary to distinguish
a field that contains the separator character. It can be set to false to prevent the use of quoting altogether and treat quote
characters as normal characters. All these options may be specified within the SET TABLE SOURCE statement:
SET TABLE mytable SOURCE "myfile;ignore_first=true;all_quoted=true"
When the default options all_quoted= false and quoted=true are in force, fields that are written to a line
of the csv file will be quoted only if they contain the separator or the quote character. The quote character inside the
field is doubled when written out to the file. When all_quoted=false and quoted=false the quote character
is not doubled. With this option, it is not possible to insert any string containing the separator into the table, as it
would become impossible to distinguish from a separator. While reading an existing data source file, the program treats
each individual field separately. It determines that a field is quoted only if the first character is the quote character.
It interprets the rest of the field on this basis.
The character encoding for the source file is ASCII by default, which corresponds to the 8-bit ANSI character
set. To support UNICODE or source files prepared with different encodings this can be changed to UTF-8 or
any other encoding. The default is encoding=ASCII and the option encoding=UTF-8 or other supported
encodings can be used. From version 2.3.4, the two-byte-per-character encodings of UTF-16 are also supported. The
encoding=UTF-16BE is big-endian, while encoding=UTF-16LE is little-endian. The encoding=UTF-16 is
big-endian by default. This encoding reads a special Unicode character called BOM if it is present at the beginning of
an existing file, and if this character indicates little-endian, the file is treated as such. Note that HSQLDB does not write
a BOM character to the files it creates from scratch.
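For instance, the encoding option is set in the same option string as the other properties (file name is hypothetical):

```sql
SET TABLE mytable SOURCE "myfile;fs=|;encoding=UTF-8"
```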
Finally, HSQLDB provides the ability to read a text file as READ ONLY, by placing the keyword "DESC" at the end
of the SET TABLE SOURCE statement:
SET TABLE mytable SOURCE "myfile" DESC
Text table source files are cached in memory. The maximum number of rows of data that are in memory at any time is
controlled by the cache_rows property. The default value for cache_rows is 1000 and can be changed by setting
the default database property. The cache_size property sets the maximum amount of memory used for each text
table. The default is 100 KB. The properties can be set for individual text tables. These properties do not control the
maximum size of each text table, which can be much larger. An example is given below:
SET TABLE mytable SOURCE
"myfile;ignore_first=true;all_quoted=true;cache_rows=10000;cache_size=1000"
The properties used in earlier versions, namely the textdb.cache_scale and the
textdb.cache_size_scale can still be used for backward compatibility, but the new properties are preferred.
Supported Properties
quoted = { true | false }
default is true; if false, double quotes are treated as normal characters
encoding = <character encoding name>
character encoding for text and character fields, for example, encoding=UTF-8
fs = <unquoted character>
field separator
vs = <unquoted character>
varchar separator
qc = <unquoted character>
quote character
Disconnecting the Data Source
The statement below disconnects a text table from its data source:
SET TABLE mytable SOURCE OFF
Subsequently, mytable will be empty and read-only. However, the data source description will be preserved, and
the table can be re-connected to it with
SET TABLE mytable SOURCE ON
When a database is opened, if the source file for an existing text table is missing, the table remains disconnected from
its data source but the source description is preserved. This allows the missing source file to be added to the directory
and the table re-connected to it with the above command.
Disconnecting text tables from their source has several uses. While disconnected, the text source can be edited outside
HSQLDB, provided data integrity is respected. When large text sources are used, and several constraints or indexes
need to be created on the table, it is possible to disconnect the source during the creation of constraints and indexes
and reduce the time it takes to perform the operation.
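A sketch of this sequence, with hypothetical table, column and index names:

```sql
-- Disconnect the source, create the index on the (now empty) table,
-- then reconnect so the index is built as the source is loaded
SET TABLE mytable SOURCE OFF
CREATE INDEX idx_mytable ON mytable (acolumn)
SET TABLE mytable SOURCE ON
```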
Databases stored in jars or as files on the classpath and opened with the res: protocol can reference read-only text
files. These files are opened as resources. The file path is an absolute path beginning with a forward slash.
Blank lines are allowed anywhere in the text file, and are ignored.
It is possible to define a primary key, identity column, unique, foreign key and check constraints for text tables.
When a table source file is used with the ignore_first=true option, the first, ignored line is replaced with
a blank line after a SHUTDOWN COMPACT, unless the SOURCE HEADER statement has been used.
An existing table source file may include CHARACTER fields that do not begin with the quote character but contain
instances of the quote character. These fields are read as literal strings. Alternatively, if any field begins with the
quote character, then it is interpreted as a quoted string that should end with the quote character, and any instances
of the quote character within the string are doubled. When any field containing the quote character or the separator is
written out to the source file by the program, the field is enclosed in quote characters and any instance of the quote
character inside the field is doubled.
Inserts or updates of CHARACTER type field values are allowed with strings that contain the linefeed or the
carriage return character. This feature is disabled when both quoted and all_quoted properties are false.
ALTER TABLE commands that add or drop columns or constraints (apart from check constraints) are not supported
with text tables that are connected to a source. First use SET TABLE <name> SOURCE OFF, make the changes,
then turn the source ON.
Use the default setting (quoted=true) for selective quoting of fields. Those fields that need quoting are quoted;
others are not.
Use the quoted=false setting to avoid quoting of fields completely. With this setting any quote character is considered
part of the text.
Use the all_quoted=true setting to force all fields to be quoted.
You can choose the quote character. The default is the double-quote character.
SHUTDOWN COMPACT results in a complete rewrite of text table sources that are open at the time. The settings
for quoted and all_quoted are applied for the rewrite.
The default values of the text table properties are:
quoted=true
all_quoted=false
ignore_first=false
encoding=ASCII
cache_rows=1000
cache_size=100
textdb.allow_full_path=false (a system property)
Transactions
Text tables fully support transactions. New or changed rows that have not been committed are not updated in the source
file. Therefore the source file always contains committed rows.
However, text tables are not as resilient to machine crashes as other types of tables. If the crash happens while the text
source is being written to, the text source may contain only some of the changes made during a committed transaction.
With other types of tables, additional mechanisms ensure the integrity of the data and this situation will not arise.
Overview
This chapter is about access control to database objects such as tables, inside the database engine. Other issues related
to security, such as user authentication, password complexity and secure connections, are covered in the System
Management chapter and the HyperSQL Network Listeners (Servers) chapter.
Apart from schemas and their objects, each HyperSQL catalog has USER and ROLE objects. These objects are
collectively called authorizations. Each AUTHORIZATION has some access rights on some of the schemas or the
objects they contain. The persistent elements of an SQL environment are database objects.
Authorization names are stored in the database in the case-normal form. When connecting to a database via JDBC,
the user name and password must match the case of this case-normal form.
When a user is created with the CREATE USER statement, if the user name is enclosed in double quotes, the exact
name is used as the case-normal form. But if it is not enclosed in double quotes, the name is converted to uppercase
and this uppercase version is stored in the database as the case-normal form.
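For example (hypothetical names; the passwords shown are placeholders):

```sql
-- Stored in case-normal form MARY; connect with user name MARY
CREATE USER mary PASSWORD 'a-secret'
-- Stored exactly as Mary; connect with user name Mary
CREATE USER "Mary" PASSWORD 'a-secret'
```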
Access Control
By default, the objects in a schema can only be accessed by the schema owner. The schema owner can grant access
rights on the objects to other users or roles.
authorization identifier
authorization identifier
<authorization identifier> ::= <role name> | <user name>
Authorization identifiers share the same name-space within the database. The same name cannot be used for a USER
and a ROLE.
SYS User
the SYS user (HyperSQL-specific)
This user is automatically created with a new database and has the DBA role. The user name and password are
defined in the connection properties when connecting to the new database to create it. As this user, it is possible
to change the password, create other users and create new schema objects. The initial SYS user can be dropped by
another user that has the DBA role. As a result, there is always at least one SYS user in the database.
Access Rights
By default, the objects in a schema can only be accessed by the schema owner. But the schema owner can grant
privileges (access rights) on the objects to other users or roles.
Things can get far more complex, because the grant of privileges can be made WITH GRANT OPTION. In this case,
the role or user that has been granted the privilege can grant the privilege to other roles and users.
Privileges can also be revoked from users or roles.
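As an illustration (hypothetical table and user names), a privilege granted WITH GRANT OPTION can be passed on by the grantee, and later revoked:

```sql
GRANT SELECT ON atable TO auser WITH GRANT OPTION
-- auser can now grant SELECT on atable to other users or roles
REVOKE SELECT ON atable FROM auser RESTRICT
```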
The statements for granting and revoking privileges normally specify which privileges are granted or revoked.
However, there is a shortcut, ALL PRIVILEGES, which means all the privileges that the <grantor> has on the
schema object. The <grantor> is normally the CURRENT_USER of the session that issues the statement.
The user or role that is granted privileges is referred to as <grantee> for the granted privileges.
Table
For tables, including views, privileges can be granted with different degrees of granularity. It is possible to grant a
privilege on all columns of a table, or on specific columns of the table.
The DELETE privilege applies to the table, rather than its columns. It applies to all DELETE statements.
The SELECT, INSERT and UPDATE privileges may apply to all columns or to individual columns. These privileges
determine whether the <grantee> can execute SQL data statements on the table.
The SELECT privilege designates the columns that can be referenced in SELECT statements, as well as the columns
that are read in a DELETE or UPDATE statement, including the search condition.
The INSERT privilege designates the columns into which explicit values can be inserted. To be able to insert a row
into the table, the user must therefore have the INSERT privilege on the table, or at least on all the columns that do
not have a default value.
The UPDATE privilege simply designates the table or the specific columns that can be updated.
The REFERENCES privilege allows the <grantee> to define a FOREIGN KEY constraint on a different table,
which references the table or the specific columns designated for the REFERENCES privilege.
The TRIGGER privilege allows adding a trigger to the table.
Sequence, Type, Domain, Character Set, Collation, Transliteration
For these objects, only USAGE can be granted. The USAGE privilege is needed when the object is referenced directly
in an SQL statement.
Routine
For routines, including procedures or functions, only EXECUTE privilege can be granted. This privilege is needed
when the routine is used directly in an SQL statement.
Other Objects
Other objects such as constraints and assertions are not used directly and there is no grantable privilege that refers
to them.
Only a user with the DBA role can execute this command.
ALTER USER ... SET INITIAL SCHEMA
set the initial schema for a user (HyperSQL)
<alter user set initial schema statement> ::= ALTER USER <user name> SET INITIAL
SCHEMA <schema name> | DEFAULT
Change the initial schema for a user. The initial schema is the schema used by default for SQL statements issued during
a session. If DEFAULT is used, the default initial schema for all users is used as the initial schema for the user. The
SET SCHEMA command allows the user to change the schema for the duration of the session.
Only a user with the DBA role can execute this statement.
ALTER USER ... SET LOCAL
set the user authentication as local (HyperSQL)
<alter user set local> ::= ALTER USER <user name> SET LOCAL { TRUE | FALSE }
Sets the authentication method for the user as local. This statement has an effect only when external authentication
with role names is enabled. In this method of authentication, users created in the database are ignored and an
external authentication mechanism, such as LDAP is used. This statement is used if you want to use local, password
authentication for a specific user.
Only a user with the DBA role can execute this statement.
SET PASSWORD
set password statement (HyperSQL)
<set password statement> ::= SET PASSWORD <password>
Set the password for the current user. <password> is a string enclosed with single quote characters and is case-sensitive.
SET INITIAL SCHEMA
set the initial schema for the current user (HyperSQL)
<set initial schema statement> ::= SET INITIAL SCHEMA <schema name> | DEFAULT
Change the initial schema for the current user. The initial schema is the schema used by default for SQL statements
issued during a session. If DEFAULT is used, the default initial schema for all users is used as the initial schema for
the current user. The separate SET SCHEMA command allows the user to change the schema for the duration of the
session. See also the Sessions and Transactions chapter.
SET DATABASE DEFAULT INITIAL SCHEMA
set the default initial schema for all users (HyperSQL)
<set database default initial schema statement> ::= SET DATABASE DEFAULT INITIAL
SCHEMA <schema name>
Sets the initial schema for new users. This schema can later be changed with the <set initial schema
statement> command.
CREATE ROLE
role definition
<role definition> ::= CREATE ROLE <role name> [ WITH ADMIN <grantor> ]
Defines a new role. Initially the role has no rights, except those of the PUBLIC role. Only a user with the DBA role
can execute this command.
DROP ROLE
drop role statement
<drop role statement> ::= DROP ROLE <role name>
Drop (destroy) a role. If the specified role is the authorization for a schema, the schema is destroyed. Only a user with
the DBA role can execute this statement.
GRANTED BY
grantor determination
GRANTED BY <grantor>
<grantor> ::= CURRENT_USER | CURRENT_ROLE
The authorization that is granting or revoking a role or privileges. The optional GRANTED BY <grantor> clause
can be used in various statements that perform GRANT or REVOKE actions. If the clause is not used, the authorization
is CURRENT_USER. Otherwise, it is the specified authorization.
GRANT
grant privilege statement
<grant privilege statement> ::= GRANT <privileges> TO <grantee> [ { <comma>
<grantee> }... ] [ WITH GRANT OPTION ] [ GRANTED BY <grantor> ]
Assign privileges on schema objects to roles or users. Each <grantee> is a role or a user. If [ WITH GRANT
OPTION ] is specified, then the <grantee> can assign the privileges to other <grantee> objects.
<privileges> ::= <object privileges> ON <object name>
<object name> ::= [ TABLE ] <table name> | DOMAIN <domain name> |
COLLATION <collation name> | CHARACTER SET <character set name> | TRANSLATION
<transliteration name> | TYPE <user-defined type name> | SEQUENCE <sequence
generator name> | <specific routine designator> | ROUTINE <routine name> |
FUNCTION <function name> | PROCEDURE <procedure name>
<object privileges> ::= ALL PRIVILEGES | <action> [ { <comma> <action> }... ]
<action> ::= SELECT | SELECT <left paren> <privilege column list> <right paren>
| DELETE | INSERT [ <left paren> <privilege column list> <right paren> ] | UPDATE
[ <left paren> <privilege column list> <right paren> ] | REFERENCES [ <left
paren> <privilege column list> <right paren> ] | USAGE | TRIGGER | EXECUTE
<privilege column list> ::= <column name list>
<grantee> ::= PUBLIC | <authorization identifier>
The <object privileges> that can be used depend on the type of the <object name>. These are discussed
in the previous section. For a table, if <privilege column list> is not specified, then the privilege is granted
on the table, which includes all of its columns and any column that may be added to it in the future. For routines, the
name of the routine can be specified in two ways, either as the generic name or as the specific name. HyperSQL allows
referencing all overloaded versions of a routine at the same time, using its name. This differs from the SQL Standard,
which requires the use of <specific routine designator> to grant privileges separately on each different
signature of the routine.
Each <grantee> is the name of a role or a user.
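Some illustrative GRANT statements, using hypothetical object and authorization names:

```sql
GRANT ALL PRIVILEGES ON SEQUENCE asequence TO arole
GRANT SELECT ON atable TO PUBLIC
GRANT SELECT, UPDATE ON atable TO auser1, auser2
GRANT SELECT (acolumn, bcolumn), UPDATE (acolumn) ON TABLE atable TO auser
GRANT EXECUTE ON ROUTINE aroutine TO arole
```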
As mentioned in the general discussion, it is better to define a role for the collection of all the privileges required by
an application. This role is then granted to any user. If further changes are made to the privileges of this role, they are
automatically reflected in all the users that have the role.
GRANT
grant role statement
<grant role statement> ::= GRANT <role name> [ { <comma> <role name> }... ]
TO <grantee> [ { <comma> <grantee> }... ] [ WITH ADMIN OPTION ] [ GRANTED BY
<grantor> ]
Assign roles to roles or users. One or more roles can be assigned to one or more <grantee> objects. A <grantee>
is a user or a role. If the [ WITH ADMIN OPTION ] is specified, then each <grantee> can grant the newly
assigned roles to other grantees. An example of user and role creation with grants is given below:
CREATE USER appuser
CREATE ROLE approle
GRANT approle TO appuser
GRANT SELECT, UPDATE ON TABLE atable TO approle
GRANT USAGE ON SEQUENCE asequence to approle
GRANT EXECUTE ON ROUTINE aroutine TO approle
REVOKE privilege
revoke statement
<revoke privilege statement> ::= REVOKE [ GRANT OPTION FOR ] <privileges> FROM
<grantee> [ { <comma> <grantee> }... ] [ GRANTED BY <grantor> ] RESTRICT | CASCADE
Revoke privileges from a user or role.
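For example (hypothetical names):

```sql
-- Take back the privilege itself
REVOKE SELECT ON atable FROM arole RESTRICT
-- Take back only the right to grant the privilege to others
REVOKE GRANT OPTION FOR UPDATE ON atable FROM auser RESTRICT
```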
REVOKE role
revoke role statement
<revoke role statement> ::= REVOKE [ ADMIN OPTION FOR ] <role revoked> [ { <comma>
<role revoked> }... ] FROM <grantee> [ { <comma> <grantee> }... ] [ GRANTED
BY <grantor> ] RESTRICT | CASCADE
<role revoked> ::= <role name>
Overview
HyperSQL data access and data change statements are fully compatible with the latest SQL:2011 Standard. There
are a few extensions and some relaxation of rules, but these do not affect statements that are written to the Standard
syntax. There is full support for classic SQL, as specified by SQL-92, and many enhancements added in later versions
of the standard.
Navigation
A cursor is either scrollable or not. Scrollable cursors allow accessing rows by absolute or relative positioning.
No-scroll cursors only allow moving to the next row. The cursor can be optionally declared with the SQL qualifiers
SCROLL, or NO SCROLL. The JDBC statement parameter can be specified as: TYPE_FORWARD_ONLY and
TYPE_SCROLL_INSENSITIVE. The JDBC type TYPE_SCROLL_SENSITIVE is not supported by HSQLDB.
The default is NO SCROLL or TYPE_FORWARD_ONLY.
When a JDBC ResultSet is opened, it is positioned before the first row. Using the next() method the position is
moved to the first row. While the ResultSet is positioned on a row, various getter methods can be used to access
the columns of the row.
Updatability
The result returned by some query expressions is updatable. HSQLDB supports core SQL updatability features, plus
some enhancements from the SQL optional features.
A query expression is updatable if it is a SELECT from a single underlying base table (or updatable view) either
directly or indirectly. A SELECT statement featuring DISTINCT or GROUP BY or FETCH, LIMIT, OFFSET is not
updatable. In an updatable query expression, one or more columns are updatable. An updatable column is a column that
can be traced directly to the underlying table. Therefore, columns that contain expressions are not updatable. Examples
of updatable query expressions are given below. The view V is updatable when its query expression is updatable. The
SELECT statement from this view is also updatable.
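Illustrative examples, using a hypothetical base table T(A, B, C) and a view V defined on it:

```sql
SELECT a, b FROM t WHERE c > 3
SELECT a, b FROM (SELECT a, b, c FROM t WHERE c > 3) AS s
CREATE VIEW v (x, y) AS SELECT a, b FROM t WHERE c > 3
SELECT x, y FROM v WHERE y > 3
```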
If a cursor is declared with the SQL qualifier, FOR UPDATE OF <column name list>, then only the stated
columns in the result set become updatable. If any of the stated columns is not actually updatable, then the cursor
declaration will not succeed.
If the SQL qualifier, FOR UPDATE is used, then all the updatable columns of the result set become updatable.
If a cursor is declared with FOR READ ONLY, then it is not updatable.
In HSQLDB, if FOR READ ONLY or FOR UPDATE is not used then all the updatable columns of the result set
become updatable. This relaxes the SQL standard rule that in this case limits updatability to only simply updatable
SELECT statements (where all columns are updatable).
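For example (hypothetical table T), the FOR UPDATE qualifiers appear at the end of the query expression:

```sql
-- Only column A of the result becomes updatable
SELECT a, b FROM t FOR UPDATE OF a
-- All updatable columns of the result become updatable
SELECT a, b FROM t FOR UPDATE
```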
In JDBC, CONCUR_READ_ONLY or CONCUR_UPDATABLE can be specified for the Statement parameter.
CONCUR_UPDATABLE is required if the returning ResultSet is to be updatable. If CONCUR_READ_ONLY, which
is the default, is used, then even an updatable ResultSet becomes read-only.
When a ResultSet is updatable, various setter methods can be used to modify the column values. The names of
the setter methods begin with "update". After all the updates on a row are done, the updateRow() method must be
called to finalise the row update.
An updatable ResultSet may or may not be insertable-into. In an insertable ResultSet, all columns of the result
are updatable and any column of the base table that is not in the result must be a generated column or have a default
value.
In the ResultSet object, a special pseudo-row, called the insert row, is used to populate values for insertion into the
ResultSet (and consequently, into the base table). The setter methods must be used on all the columns, followed
by a call to insertRow().
Individual rows from all updatable result sets can be deleted one at a time. The deleteRow() method is called
while the ResultSet is positioned on a row.
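The update, insert, and delete operations described above can be sketched together as below. This is a minimal example, assuming an HSQLDB connection and a hypothetical table T with an INT primary key and a VARCHAR column:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UpdatableExample {
    // Modifies rows through an updatable ResultSet: update one row,
    // delete another, then add a row via the insert row.
    public static void modify(Connection c) throws SQLException {
        c.setAutoCommit(false); // updatable result sets are not holdable
        try (Statement st = c.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                              ResultSet.CONCUR_UPDATABLE);
             ResultSet rs = st.executeQuery("SELECT ID, NAME FROM T")) {
            while (rs.next()) {
                int id = rs.getInt("ID");
                if (id == 1) {
                    rs.updateString("NAME", "updated"); // setter names begin with "update"
                    rs.updateRow();                     // finalise the row update
                } else if (id == 2) {
                    rs.deleteRow();                     // delete the current row
                }
            }
            rs.moveToInsertRow();                       // special pseudo-row
            rs.updateInt("ID", 99);
            rs.updateString("NAME", "new row");
            rs.insertRow();                             // insert into the base table
        }
        c.commit();
    }
}
```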
While using an updatable ResultSet to modify data, it is recommended not to change the same data using another
ResultSet and not to execute SQL data change statements that modify the same data.
Sensitivity
The sensitivity of the cursor relates to visibility of changes made to the data by the same transaction but without using
the given cursor. While the result set is open, the same transaction may use statements such as INSERT or UPDATE,
and change the data of the tables from which the result set data is derived. A cursor is SENSITIVE if it reflects those
changes. It is INSENSITIVE if it ignores such changes. It is ASENSITIVE if behaviour is implementation dependent.
The SQL default is ASENSITIVE, i.e., implementation dependent.
In HSQLDB all cursors are INSENSITIVE. They do not reflect changes to the data made by other statements.
Holdability
A cursor is holdable if the result set is not automatically closed when the current transaction is committed. Holdability
can be specified in the cursor declaration using the SQL qualifiers WITH HOLD or WITHOUT HOLD.
In JDBC, holdability is specified using either of the following values for the Statement parameter:
HOLD_CURSORS_OVER_COMMIT, or CLOSE_CURSORS_AT_COMMIT.
The SQL default is WITHOUT HOLD.
The JDBC default for HSQLDB result sets is WITH HOLD for read-only result sets and WITHOUT HOLD for
updatable result sets.
If the holdability of a ResultSet is specified in a conflicting manner in the SQL statement and the JDBC
Statement object, the JDBC setting takes precedence.
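A short sketch of requesting holdability from JDBC is given below, using the three-argument createStatement form; whether the request is honoured follows the rules above:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HoldabilityExample {
    // Requests a scrollable, read-only result set that stays open across commits.
    public static Statement holdableStatement(Connection c) throws SQLException {
        return c.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                 ResultSet.CONCUR_READ_ONLY,
                                 ResultSet.HOLD_CURSORS_OVER_COMMIT);
    }
}
```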
Autocommit
The autocommit property of a connection is a feature of JDBC and ODBC and is not part of the SQL Standard.
In autocommit mode, all transactional statements are followed by an implicit commit. In autocommit mode, all
ResultSet objects are read-only and holdable.
JDBC Overview
The JDBC settings ResultSet.CONCUR_READ_ONLY and ResultSet.CONCUR_UPDATABLE are the available
alternatives for read-only or updatable result sets. The default is ResultSet.CONCUR_READ_ONLY.
The JDBC settings ResultSet.TYPE_FORWARD_ONLY, ResultSet.TYPE_SCROLL_INSENSITIVE, and
ResultSet.TYPE_SCROLL_SENSITIVE are the available alternatives for both scrollability (navigation) and
sensitivity. HyperSQL does not support ResultSet.TYPE_SCROLL_SENSITIVE. The two other alternatives can be
used for both updatable and read-only result sets.
The JDBC settings ResultSet.CLOSE_CURSORS_AT_COMMIT and
ResultSet.HOLD_CURSORS_OVER_COMMIT are the alternatives for the lifetime of the result set. The default is
ResultSet.CLOSE_CURSORS_AT_COMMIT. The other setting can only be used for read-only result sets.
Examples of creating statements for updatable result sets are given below:
Connection c = newConnection();
Statement st;
c.setAutoCommit(false);
st = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
st = c.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
JDBC Parameters
When a JDBC PreparedStatement or CallableStatement is used with an SQL statement that contains dynamic
parameters, the data types of the parameters are resolved and determined by the engine when the statement is prepared.
The SQL Standard has detailed rules to determine the data types and imposes limits on the maximum length or precision
of the parameter. HyperSQL applies the standard rules with two exceptions for parameters with String and BigDecimal
Java types. HyperSQL ignores the limits when the parameter value is set, and only enforces the necessary limits
when the PreparedStatement is executed. In all other cases, parameter type limits are checked and enforced when the
parameter is set.
In the example below the setString() calls do not raise an exception, but one of the execute() calls does.
// table definition: CREATE TABLE T (NAME VARCHAR(12), ...)
Connection c = newConnection();
PreparedStatement st = c.prepareStatement("SELECT * FROM T WHERE NAME = ?");
// type of the parameter is VARCHAR(12), which limits length to 12 characters
st.setString(1, "Eyjafjallajokull"); // string is longer than the type, but no exception is raised here
st.execute(); // executes with no exception and does not find any rows
// but if an UPDATE is attempted, an exception is raised
st = c.prepareStatement("UPDATE T SET NAME = ? WHERE ID = 10");
st.setString(1, "Eyjafjallajokull"); // string is longer than the type, but no exception is raised here
st.execute(); // exception is thrown when HyperSQL checks the value for update
All of the above also applies to setting the values in new and updated rows in updatable ResultSet objects.
JDBC parameters can be set with any compatible type, as supported by the JDBC specification. For CLOB and BLOB
types, you can use streams, or create instances of BLOB or CLOB before assigning them to the parameters. You can
even use CLOB or BLOB objects returned from connections to other RDBMS servers. The Connection.createBlob()
and createClob() methods can be used to create the new LOBs. For very large LOBs the stream methods are preferable
as they use less memory.
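A sketch of creating and storing a CLOB with Connection.createClob() is given below; the table DOCS is hypothetical:

```java
import java.sql.Clob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LobExample {
    // Creates a new CLOB with Connection.createClob() and binds it to a parameter.
    public static void storeClob(Connection c, int id, String text) throws SQLException {
        Clob clob = c.createClob();
        clob.setString(1, text); // LOB positions are 1-based
        try (PreparedStatement ps =
                 c.prepareStatement("INSERT INTO DOCS VALUES (?, ?)")) {
            ps.setInt(1, id);
            ps.setClob(2, clob);
            ps.execute();
        }
    }
}
```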
For array parameters, you must use a java.sql.Array object that contains the array elements before assigning to
JDBC parameters. The Connection.createArrayOf(...) method can be used to create a new object, or you
can use an Array returned from connections to other RDBMS servers.
Procedures can also return one or more result sets. You should call the getResultSet() and getMoreResults() methods
to retrieve the result sets one by one.
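The retrieval loop can be sketched as below. The procedure name GET_DATA is hypothetical; it stands for any procedure declared with DYNAMIC RESULT SETS:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MultiResultExample {
    // Drains every result set returned by a procedure call, counting them.
    public static int countResultSets(Connection c) throws SQLException {
        try (CallableStatement cs = c.prepareCall("CALL GET_DATA()")) {
            int count = 0;
            boolean isResultSet = cs.execute();
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = cs.getResultSet()) {
                        count++; // process the rows of rs here
                    }
                } else if (cs.getUpdateCount() == -1) {
                    break;     // no more results of any kind
                }
                isResultSet = cs.getMoreResults(); // advance to the next result
            }
            return count;
        }
    }
}
```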
SQL functions can also return a table. You can call such functions the same way as procedures and retrieve the table
as a ResultSet.
Cursor Declaration
The DECLARE CURSOR statement is used within an SQL PROCEDURE body. In the current version HyperSQL
2.3, the cursor is used only to return a result set from the procedure. Therefore the cursor must be declared WITH
RETURN and can only be READ ONLY.
DECLARE CURSOR
declare cursor statement
<declare cursor> ::= DECLARE <cursor name>
[ { SENSITIVE | INSENSITIVE | ASENSITIVE } ] [ { SCROLL | NO SCROLL } ]
CURSOR [ { WITH HOLD | WITHOUT HOLD } ] [ { WITH RETURN | WITHOUT RETURN } ]
FOR <query expression>
[ FOR { READ ONLY | UPDATE [ OF <column name list> ] } ]
The query expression is a SELECT statement or similar, and is discussed in the rest of this chapter. In the example
below a cursor is declared for a SELECT statement. It is later opened to create the result set. The cursor is specified
WITHOUT HOLD, so the result set is not kept after a commit. Use WITH HOLD to keep the result set. Note that you
need to declare the cursor WITH RETURN as it is returned by the CallableStatement.
DECLARE thiscursor SCROLL CURSOR WITHOUT HOLD WITH RETURN FOR SELECT * FROM
INFORMATION_SCHEMA.TABLES;
OPEN thiscursor;
Syntax Elements
The syntax elements that can be used in data access and data change statements are described in this section. The SQL
Standard has a very extensive set of definitions for these elements. The BNF definitions given here are sometimes
simplified.
Literals
Literals are used to express constant values. The general type of a literal is known by its format. The specific type
is based on conventions.
character literal
<character string literal> ::= [ <introducer><character set specification> ]
<quote> [ <character representation>... ] <quote> [ { <separator> <quote>
[ <character representation>... ] <quote> }... ]
<Unicode character string literal> ::= [ <introducer><character set
specification> ] U<ampersand><quote> [ <Unicode representation>... ] <quote>
[ { <separator> <quote> [ <Unicode representation>... ] <quote> }... ] <Unicode
escape specifier>
<Unicode representation> ::= <character representation> | <Unicode escape value>
<Unicode escape specifier> ::= [ UESCAPE <quote><Unicode escape
character><quote> ]
<Unicode escape value> ::= <Unicode 4 digit escape value> | <Unicode 6 digit
escape value> | <Unicode character escape value>
<Unicode 4 digit escape value> ::= <Unicode escape
character><hexit><hexit><hexit><hexit>
<Unicode 6 digit escape value> ::= <Unicode escape character><plus
sign><hexit><hexit><hexit><hexit><hexit><hexit>
<Unicode character escape value> ::= <Unicode escape character><Unicode escape
character>
The type of a character literal is CHARACTER. The length of the string literal is the character length of the type. If
the quote character is used in a string, it is represented with two quote characters. Long literals can be divided into
multiple quoted strings, separated with a space or end-of-line character.
Unicode literals start with U& and can contain ordinary characters and unicode escapes. A unicode escape begins with
the backslash ( \ ) character and is followed by four hexadecimal characters which specify the character code.
Examples of character literals are given below:
'a literal'
'a string'' with a quote character'
'a long literal divided into ' 'two parts'
U&'Unicode literal with \00fc escape'
binary literal
<binary string literal> ::= X <quote> [ <space>... ] [ { <hexit> [ <space>... ]
<hexit> [ <space>... ] }... ] <quote> [ { <separator> <quote> [ <space>... ]
[ { <hexit> [ <space>... ] <hexit> [ <space>... ] }... ] <quote> }... ]
<hexit> ::= <digit> | A | B | C | D | E | F | a | b | c | d | e | f
The type of a binary literal is BINARY. The octet length of the binary literal is the length of the type. Case-insensitive
hexadecimal characters are used in the binary string. Each pair of characters in the literal represents a byte in the binary
string. Long literals can be divided into multiple quoted strings, separated with a space or end-of-line character.
X'1abACD34' 'Af'
bit literal
<bit string literal> ::= B <quote> [ <bit> ... ] <quote> [ { <separator> <quote>
[ <bit>... ] <quote> }... ]
<bit> ::= 0 | 1
The type of a bit literal is BIT. The bit length of the bit literal is the length of the type. Digits 0 and 1 are used
to represent the bits. Long literals can be divided into multiple quoted strings, separated with a space or end-of-line
character.
B'10001001' '00010'
numeric literal
<signed numeric literal> ::= [ <sign> ] <unsigned numeric literal>
<unsigned numeric literal> ::= <exact numeric literal> | <approximate numeric
literal>
<exact numeric literal> ::= <unsigned integer> [ <period> [ <unsigned integer> ] ]
| <period> <unsigned integer>
<sign> ::= <plus sign> | <minus sign>
<approximate numeric literal> ::= <mantissa> E <exponent>
<mantissa> ::= <exact numeric literal>
<exponent> ::= <signed integer>
<signed integer> ::= [ <sign> ] <unsigned integer>
boolean literal
<boolean literal> ::= TRUE | FALSE | UNKNOWN
The boolean literal is one of the specified keywords.
datetime and interval literal
<datetime literal> ::= <date literal> | <time literal> | <timestamp literal>
<date literal> ::= DATE <date string>
<time literal> ::= TIME <time string>
<timestamp literal> ::= TIMESTAMP <timestamp string>
<date string> ::= <quote> <unquoted date string> <quote>
<time string> ::= <quote> <unquoted time string> <quote>
<timestamp string> ::= <quote> <unquoted timestamp string> <quote>
<time zone interval> ::= <sign> <hours value> <colon> <minutes value>
<date value> ::= <years value> <minus sign> <months value> <minus sign> <days
value>
<time value> ::= <hours value> <colon> <minutes value> <colon> <seconds value>
<interval literal> ::= INTERVAL [ <sign> ] <interval string> <interval qualifier>
<interval string> ::= <quote> <unquoted interval string> <quote>
<unquoted date string> ::= <date value>
<unquoted time string> ::= <time value> [ <time zone interval> ]
<unquoted timestamp string> ::= <unquoted date string> <space> <unquoted time
string>
References, etc.
References are identifier chains: either single identifiers, or chains of single identifiers joined together with the
period symbol.
identifier chain
<identifier chain> ::= <identifier> [ { <period> <identifier> }... ]
<basic identifier chain> ::= <identifier chain>
A period-separated chain of identifiers. The identifiers in an identifier chain can refer to database objects in a hierarchy.
The possible hierarchies are as follows. In each hierarchy, elements from the start or the end can be missing, but the
order of elements cannot be changed.
column reference
<column reference> ::= <basic identifier chain> | MODULE <period> <qualified
identifier> <period> <column name>
Reference a column or a routine variable.
SQL parameter reference
<SQL parameter reference> ::= <basic identifier chain>
Reference an SQL routine parameter.
contextually typed value specification
<contextually typed value specification> ::= <null specification> | <default
specification>
<null specification> ::= NULL
<default specification> ::= DEFAULT
Specify a value whose data type or value is inferred from its context.
DEFAULT is used for assignments to table columns that have a default value, or to table columns that are generated
either as an IDENTITY value or as an expression.
NULL can be used only in a context where the type of the value is known. For example, a NULL can be assigned to
a column of the table in an INSERT or UPDATE statement, because the type of the column is known. But if NULL
is used in a SELECT list, it must be used in a CAST statement.
Value Expression
Value expression is a general name for all expressions that return a value. Different types of expressions are allowed
in different contexts.
value expression primary
<value expression primary> ::= <parenthesized value expression> |
<nonparenthesized value expression primary>
<parenthesized value expression> ::= <left paren> <value expression> <right
paren>
<unsigned value specification> ::= <unsigned literal> | <general value
specification>
<general value specification> ::= <host parameter specification> | <SQL
parameter reference> | <column reference> ...
<host parameter specification> ::= <host parameter name> [ <indicator
parameter> ]
CASE
case specification
<case specification> ::= <simple case> | <searched case>
<simple case> ::= CASE <case operand> <simple when clause>... [ <else clause> ]
END
<searched case> ::= CASE <searched when clause>... [ <else clause> ] END
<simple when clause> ::= WHEN <when operand list> THEN <result>
<searched when clause> ::= WHEN <search condition> THEN <result>
<else clause> ::= ELSE <result>
<case operand> ::= <row value predicand> | <overlaps predicate part 1>
<when operand list> ::= <when operand> [ { <comma> <when operand> }... ]
<when operand> ::= <row value predicand> | <comparison predicate part 2> |
<between predicate part 2> | <in predicate part 2> | <character like predicate
part 2> | <octet like predicate part 2> | <similar predicate part 2> | <regex like
predicate part 2> | <null predicate part 2> | <quantified comparison predicate
part 2> | <match predicate part 2> | <overlaps predicate part 2> | <distinct
predicate part 2>
<result> ::= <result expression> | NULL
<result expression> ::= <value expression>
Specify a conditional value. The result of a case expression is always a value. All the values introduced with THEN
must be of the same type or convertible to the same type.
Some simple examples of the CASE expression are given below. The first two examples return 'Britain', 'Germany',
or 'Other country' depending on the value of dialcode. The third example uses IN and smaller-than predicates.
CASE dialcode WHEN 44 THEN 'Britain' WHEN 49 THEN 'Germany' ELSE 'Other country' END
CASE WHEN dialcode=44 THEN 'Britain' WHEN dialcode=49 THEN 'Germany' WHEN dialcode < 0 THEN 'bad
code' ELSE 'Other country' END
CASE dialcode WHEN IN (44, 49, 30) THEN 'Europe' WHEN IN (86, 91, 91) THEN 'Asia' WHEN < 0 THEN
'bad dial code' ELSE 'Other continent' END
The case statement can be far more complex and involve several conditions.
CAST
cast specification
<cast specification> ::= CAST <left paren> <cast operand> AS <cast target>
<right paren>
<cast operand> ::= <value expression> | <implicitly typed value specification>
<cast target> ::= <domain name> | <data type>
Specify a data conversion. Data conversion takes place automatically among variants of a general type. For example
numeric values are freely converted from one type to another in expressions.
Explicit type conversion is necessary in two cases. One case is to determine the type of a NULL value. The other
case is to force conversion for special purposes. Values of data types can be cast to a character type. The exception
is BINARY and OTHER types. The result of the cast is the literal expression of the value. Conversely, a value of
a character type can be converted to another type if the character value is a literal representation of the value in the
target type. Special conversions are possible between numeric and interval types, which are described in the section
covering interval types.
The examples below show casts with their results:
CAST (NULL AS TIMESTAMP)
CAST (' 199 ' AS INTEGER) = 199
CAST ('tRue ' AS BOOLEAN) = TRUE
CAST (INTERVAL '2' DAY AS INTEGER) = 2
CAST ('1992-04-21' AS DATE) = DATE '1992-04-21'
Return the next value of a sequence generator. This expression can be used as a select list element in queries, or in
assignments to table columns in data change statements. If the expression is used more than once in a single row that
is being evaluated, the same value is returned for each invocation. After evaluation of the particular row is complete,
the sequence generator will return a different value from the old value. The new value is generated by the sequence
generator by adding the increment to the last value it generated. In SQL syntax compatibility modes, variants of this
expression in different SQL dialects are supported. In the example below the expression is used in an insert statement:
INSERT INTO MYTABLE(COL1, COL2) VALUES 2, NEXT VALUE FOR MYSEQUENCE
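From JDBC, the next value of a sequence can be fetched with a CALL statement, as sketched below; the sequence name MYSEQUENCE follows the example above and is assumed to exist:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SequenceExample {
    // Fetches the next value of the named sequence with a CALL statement.
    public static long nextValue(Connection c, String sequenceName) throws SQLException {
        try (Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("CALL NEXT VALUE FOR " + sequenceName)) {
            rs.next();
            return rs.getLong(1); // each call returns the last value plus the increment
        }
    }
}
```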
value expression
value expression
<value expression> ::= <numeric value expression> | <string value expression>
| <datetime value expression> | <interval value expression> | <boolean value
expression> | <row value expression>
An expression that returns a value. The value can be a single value, or a row consisting of more than one value.
numeric value expression
numeric value expression
<numeric value expression> ::= <term> | <numeric value expression> <plus sign>
<term> | <numeric value expression> <minus sign> <term>
<term> ::= <factor> | <term> <asterisk> <factor> | <term> <solidus> <factor>
<factor> ::= [ <sign> ] <numeric primary>
<numeric primary> ::= <value expression primary> | <numeric value function>
Specify a numeric value. The BNF indicates that <asterisk> and <solidus> (the operators for multiplication
and division) have precedence over <minus sign> and <plus sign>.
numeric value function
numeric value function
<numeric value function> ::= <position expression> | <extract expression> |
<length expression> ...
Specify a function yielding a value of type numeric. The supported numeric value functions are listed and described
in the Built In Functions chapter.
<absolute value expression> ::= ABS <left paren> <interval value expression>
<right paren>
Specify a function that returns the absolute value of an interval. If the interval is negative, it is negated; otherwise
the original value is returned.
boolean value expression
<boolean value expression> ::= <boolean term> | <boolean value expression> OR
<boolean term>
<boolean term> ::= <boolean factor> | <boolean term> AND <boolean factor>
<boolean factor> ::= [ NOT ] <boolean test>
<boolean test> ::= <boolean primary> [ IS [ NOT ] <truth value> ]
<truth value> ::= TRUE | FALSE | UNKNOWN
<boolean primary> ::= <predicate> | <boolean predicand>
<boolean predicand> ::= <parenthesized boolean value expression> |
<nonparenthesized value expression primary>
<parenthesized boolean value expression> ::= <left paren> <boolean value
expression> <right paren>
Predicates
Predicates are conditions with two sides and evaluate to a boolean value. The left side of the predicate, the <row
value predicand>, is the common element of all predicates. This element is a generalisation of both <value
expression>, which is a scalar, and of <explicit row value constructor>, which is a row. The two
sides of a predicate can be split in CASE statements where the <row value predicand> is part of multiple
predicates.
The number of fields in all <row value predicand> used in predicates must be the same and the types of the
fields in the same position must be compatible for comparison. If either of these conditions does not hold, an exception
is raised. The number of fields in a row is called the degree.
In many types of predicates (but not all of them), if the <row value predicand> evaluates to NULL, the result
of the predicate is UNKNOWN. If the <row value predicand> has more than one element, and one or more
of the fields evaluate to NULL, the result depends on the particular predicate.
comparison predicate
<comparison predicate> ::= <row value predicand> <comp op> <row value predicand>
<comp op> ::= <equals operator> | <not equals operator> | <less than operator>
| <greater than operator> | <less than or equals operator> | <greater than or
equals operator>
Specify a comparison of two row values. If either <row value predicand> evaluates to NULL, the result of
<comparison predicate> is UNKNOWN. Otherwise, the result is TRUE, FALSE or UNKNOWN.
If the degree of <row value predicand> is larger than one, comparison is performed between each field and
the corresponding field in the other <row value predicand> from left to right, one by one.
When comparing two elements, if either field is NULL then the result is UNKNOWN.
For <equals operator>, if the result of comparison is TRUE for all fields, the result of the predicate is TRUE. If
the result of comparison is FALSE for one field, the result of the predicate is FALSE. Otherwise the result is UNKNOWN.
The <not equals operator> is translated to NOT (<row value predicand> = <row value
predicand>).
The <less than or equals operator> is translated to (<row value predicand> = <row value
predicand>) OR (<row value predicand> < <row value predicand>). The <greater than
or equals operator> is translated similarly.
For the <less than operator> and <greater than operator>, if two fields at a given position are
equal, then comparison continues to the next field. Otherwise, the result of the last performed comparison is returned
as the result of the predicate. This means that if the first field is NULL, the result is always UNKNOWN.
The logic that governs NULL values and UNKNOWN result is as follows: Suppose the NULL values were substituted
by arbitrary real values. If substitution cannot change the result of the predicate, then the result is TRUE or FALSE,
based on the existing non-NULL values, otherwise the result of the predicate is UNKNOWN.
The examples of comparison given below use literals, but the literals actually represent the result of evaluation of
some expression.
((1, 2, 3, 4) = (1, 2, 3, 4)) IS TRUE
((1, 2, 3, 4) = (1, 2, 3, 5)) IS FALSE
((1, 2, 3, 4) < (1, 2, 3, 4)) IS FALSE
((1, 2, 3, 4) < (1, 2, 3, 5)) IS TRUE
((NULL, 1, NULL) = (NULL, 1, NULL)) IS UNKNOWN
((NULL, 1, NULL) = (NULL, 2, NULL)) IS FALSE
((NULL, 1, NULL) < (NULL, 1, NULL)) IS UNKNOWN
((NULL, 1, NULL) < (NULL, 1, 2)) IS UNKNOWN
((1, NULL, 2) < (2, NULL, 1)) IS TRUE
((1, NULL, 2) < (1, NULL, 3)) IS UNKNOWN
((2, NULL, 1) < (1, NULL, 2)) IS FALSE
BETWEEN
between predicate
<between predicate> ::= <row value predicand> <between predicate part 2>
<between predicate part 2> ::= [ NOT ] BETWEEN [ ASYMMETRIC | SYMMETRIC ] <row
value predicand> AND <row value predicand>
Specify a range comparison. The default is ASYMMETRIC. The expression X BETWEEN Y AND Z is equivalent to
(X >= Y AND X <= Z). Therefore if Y > Z, the BETWEEN expression is never true. The expression X BETWEEN
SYMMETRIC Y AND Z is equivalent to (X >= Y AND X <= Z) OR (X >= Z AND X <= Y). The
expression Z NOT BETWEEN ... is equivalent to NOT (Z BETWEEN ...). If any of the three <row value
predicand> evaluates to NULL, the result is UNKNOWN.
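The equivalences above can be checked from JDBC by evaluating a predicate inside a VALUES constructor. The helper below is a small sketch, not part of the HyperSQL API; an UNKNOWN result surfaces as SQL NULL:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PredicateExample {
    // Evaluates a boolean SQL expression by wrapping it in a VALUES constructor.
    // Returns Boolean.TRUE, Boolean.FALSE, or null for UNKNOWN.
    public static Boolean eval(Connection c, String booleanExpr) throws SQLException {
        try (Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("VALUES (" + booleanExpr + ")")) {
            rs.next();
            boolean b = rs.getBoolean(1);
            return rs.wasNull() ? null : b; // UNKNOWN is reported as SQL NULL
        }
    }
}
```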
IN
in predicate
<in predicate> ::= <row value predicand> [ NOT ] IN <in predicate value>
<in predicate value> ::= <table subquery> | <left paren> <in value list> <right
paren>
| <left paren> UNNEST <left paren> <array value expression> <right paren> <right
paren>
<in value list> ::= <row value expression> [ { <comma> <row value
expression> }... ]
Specify a quantified comparison. The expression X NOT IN Y is equivalent to NOT (X IN Y). The ( <in
value list> ) is converted into a table with one or more rows. The expression X IN Y is equivalent to X =
ANY Y, which is a <quantified comparison predicate>.
If the <table subquery> returns no rows, the result is FALSE. Otherwise the <row value predicand> is
compared one by one with each row of the <table subquery>.
If the comparison is TRUE for at least one row, the result is TRUE. If the comparison is FALSE for all rows, the result
is FALSE. Otherwise the result is UNKNOWN.
HyperSQL supports an extension to the SQL Standard to allow an array to be used in the <in predicate value>. This is
intended to be used with prepared statements where a variable length array of values can be used as the parameter value
for each call. The example below shows how this is used in SQL. The JDBC code must create a new java.sql.Array
object that contains the values and set the parameter with this array.
SELECT * FROM customer WHERE firstname IN ( UNNEST(?) )
Connection conn;
PreparedStatement ps;
// conn and ps are instantiated here
Array arr = conn.createArrayOf("INTEGER", new Integer[] {1, 2, 3});
ps.setArray(1, arr);
ResultSet rs = ps.executeQuery();
LIKE
like predicate
<like predicate> ::= <character like predicate> | <octet like predicate>
<character like predicate> ::= <row value predicand> [ NOT ] LIKE <character
pattern> [ ESCAPE <escape character> ]
<character pattern> ::= <character value expression>
<escape character> ::= <character value expression>
<octet like predicate> ::= <row value predicand> [ NOT ] LIKE <octet pattern>
[ ESCAPE <escape octet> ]
<octet pattern> ::= <binary value expression>
<escape octet> ::= <binary value expression>
Specify a pattern-match comparison for character or binary strings. The <row value predicand> is always
a <string value expression> of character or binary type. The <character pattern> or <octet
pattern> is a <string value expression> in which the underscore and percent characters have special
meanings. The underscore means match any one character, while the percent means match a sequence of zero or more
characters. The <escape character> or <escape octet> is also a <string value expression> that
evaluates to a string of exactly one character length. If the underscore or the percent is required as normal characters
in the pattern, the specified <escape character> or <escape octet> can be used in the pattern before the
underscore or the percent. The <row value predicand> is compared with the <character pattern> and
the result of comparison is returned. If any of the expressions in the predicate evaluates to NULL, the result of the
predicate is UNKNOWN. The expression A NOT LIKE B is equivalent to NOT (A LIKE B). If the length of the
escape is not 1 or it is used in the pattern not immediately before an underscore or a percent character, an exception
is raised.
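A sketch of using the ESCAPE clause from JDBC to match a literal underscore is given below; the table T and column NAME are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class LikeExample {
    // Finds names beginning with a literal underscore, using '\' as the escape.
    public static List<String> namesStartingWithUnderscore(Connection c) throws SQLException {
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = c.prepareStatement(
                "SELECT NAME FROM T WHERE NAME LIKE ? ESCAPE '\\'")) {
            ps.setString(1, "\\_%"); // escaped underscore, then the percent wildcard
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString(1));
                }
            }
        }
        return names;
    }
}
```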
IS NULL
null predicate
<null predicate> ::= <row value predicand> IS [ NOT ] NULL
Specify a test for a null value. The expression X IS NOT NULL is NOT equivalent to NOT (X IS NULL) if the
degree of the <row value predicand> is larger than 1. The rules are: If all fields are null, X IS NULL is TRUE
and X IS NOT NULL is FALSE. If only some fields are null, both X IS NULL and X IS NOT NULL are FALSE.
If all fields are not null, X IS NULL is FALSE and X IS NOT NULL is TRUE.
ALL and ANY
quantified comparison predicate
<quantified comparison predicate> ::= <row value predicand> <comp op>
<quantifier> <table subquery>
<quantifier> ::= <all> | <some>
<all> ::= ALL
<some> ::= SOME | ANY
Specify a quantified comparison.
EXISTS
exists predicate
<exists predicate> ::= EXISTS <table subquery>
Specify a test for a non-empty set. If the evaluation of <table subquery> results in one or more rows, then the
expression is TRUE, otherwise FALSE.
UNIQUE
unique predicate
<unique predicate> ::= UNIQUE <table subquery>
Specify a test for the absence of duplicate rows. The result of the test is either TRUE or FALSE (never UNKNOWN).
The rows of the <table subquery> that contain one or more NULL values are not considered for this test. If the
rest of the rows are distinct from each other, the result of the test is TRUE, otherwise it is FALSE. The distinctness of
rows X and Y is tested with the predicate X IS DISTINCT FROM Y.
MATCH
match predicate
<match predicate> ::= <row value predicand> MATCH [ UNIQUE ] [ SIMPLE | PARTIAL
| FULL ] <table subquery>
Specify a test for matching rows. The default is MATCH SIMPLE without UNIQUE. The result of the test is either
TRUE or FALSE (never UNKNOWN).
The interpretation of NULL values is different from other predicates and quite counter-intuitive. If the <row value
predicand> is NULL, or all of its fields are NULL, the result is TRUE.
Otherwise, the <row value predicand> is compared with each row of the <table subquery>.
If SIMPLE is specified, then if some field of the <row value predicand> is NULL, the result is TRUE. Otherwise,
if the <row value predicand> is equal to one or more rows of the <table subquery>, the result is TRUE
if UNIQUE is not specified, or if UNIQUE is specified and only one row matches. Otherwise the result is FALSE.
If PARTIAL is specified, then if the non-null values of the <row value predicand> are equal to those in one
or more rows of the <table subquery>, the result is TRUE if UNIQUE is not specified, or if UNIQUE is specified
and only one row matches. Otherwise the result is FALSE.
If FULL is specified, then if some field of the <row value predicand> is NULL, the result is FALSE. Otherwise,
if the <row value predicand> is equal to one or more rows of the <table subquery>, the result is TRUE
if UNIQUE is not specified, or if UNIQUE is specified and only one row matches. Otherwise the result is FALSE.
Note that MATCH can also be used in FOREIGN KEY constraint definitions. The exact meaning is described
in the Schemas and Database Objects chapter.
CONTAINS
contains predicate
<contains predicate> ::= PERIOD <row value predicand> CONTAINS PERIOD <row value
predicand>
Specify a test for two datetime periods. Each <row value predicand> must have two fields and the fields
together represent a datetime period. So the predicate is always in the form PERIOD (X1, X2) CONTAINS
PERIOD (Y1, Y2). The first field in each period is always a datetime value, while the second field is either a
datetime value or a positive interval value.
If the second field in a period is an interval value, it is replaced with the sum of the datetime value and itself, for
example PERIOD(X1, X1 + X2) OVERLAPS PERIOD (Y1, Y1 + Y2).
All datetime values are converted to TIMESTAMP WITH TIME ZONE. The second datetime value must be after the
first, otherwise a data error is returned.
If the second period is fully within the first period, the result is TRUE. Otherwise it is FALSE.
If any of the values is NULL, the result is UNKNOWN.
EQUALS
equals predicate
<equals predicate> ::= PERIOD <row value predicand> EQUALS PERIOD <row value
predicand>
Specify a test for two datetime periods. The conversions and checks are applied the same way as with the CONTAINS
predicate. If the two periods have the same begin and end datetime values the result is TRUE. Otherwise it is FALSE.
If any of the values is NULL, the result is UNKNOWN.
OVERLAPS
overlaps predicate
<overlaps predicate> ::= <row value predicand> OVERLAPS <row value predicand>
<overlaps predicate> ::= PERIOD <row value predicand> OVERLAPS PERIOD <row value
predicand>
The OVERLAPS predicate tests for an overlap between two datetime periods. This predicate has two forms. The one
without the PERIOD keywords is more relaxed in terms of valid periods.
If there is any overlap between the two datetime periods, the result is TRUE. Otherwise it is FALSE.
If any of the values is NULL, the result is UNKNOWN.
In the example below, the period is compared with a week long period ending yesterday.
(startdate, enddate) OVERLAPS (CURRENT_DATE - 7 DAY, CURRENT_DATE - 1 DAY)
PRECEDES
precedes predicate
<precedes predicate> ::= PERIOD <row value predicand> [ IMMEDIATELY] PRECEDES
PERIOD <row value predicand>
Specify a test for two datetime periods. The conversions and checks are applied the same way as with the CONTAINS
predicate. If the second period begins after the end of the first period, the result is TRUE. Otherwise it is FALSE.
If IMMEDIATELY is specified, the second period must follow immediately after the end of the first period. This
means the end of the first period is the same point of time as the start of the second period.
If any of the values is NULL, the result is UNKNOWN.
SUCCEEDS
succeeds predicate
<succeeds predicate> ::= PERIOD <row value predicand> [ IMMEDIATELY ] SUCCEEDS
PERIOD <row value predicand>
Specify a test for two datetime periods with similar syntax to PRECEDES. If the first period begins after the end of
the second period, the result is TRUE. Otherwise it is FALSE.
If IMMEDIATELY is specified, the first period must follow immediately after the end of the second period.
If any of the values is NULL, the result is UNKNOWN.
The example below shows a predicate that returns TRUE.
PERIOD (CURRENT_DATE - 7 DAY, CURRENT_DATE) IMMEDIATELY PRECEDES PERIOD (CURRENT_DATE, CURRENT_DATE + 7 DAY)
IS DISTINCT
is distinct predicate
<distinct predicate> ::= <row value predicand> IS [ NOT ] DISTINCT FROM <row
value predicand>
Specify a test of whether two row values are distinct. The result of the test is either TRUE or FALSE (never
UNKNOWN). The degree of the two <row value predicand> must be the same. Each field of the first <row
value predicand> is compared to the field of the second <row value predicand> at the same position.
If one field is NULL and the other is not NULL, or if the two fields are not equal, then the result of the comparison
is TRUE. If no comparison result is TRUE, then the result of the predicate is FALSE. The expression X IS NOT
DISTINCT FROM Y is equivalent to NOT (X IS DISTINCT FROM Y). The following check returns true if
startdate is not equal to enddate. It also returns true if either startdate or enddate is NULL. It returns false in other cases.
startdate IS DISTINCT FROM enddate
Aggregate Functions
aggregate function
<aggregate function> ::= COUNT <left paren> <asterisk> <right paren> [ <filter
clause> ] | <general set function> [ <filter clause> ] | <array aggregate
function> [ <filter clause> ]
<general set function> ::= <set function type> <left paren> [ <set quantifier> ]
<value expression> <right paren>
<set function type> ::= <computational operation>
<computational operation> ::= AVG | MAX | MIN | SUM | EVERY | ANY | SOME | COUNT
| STDDEV_POP | STDDEV_SAMP | VAR_SAMP | VAR_POP | MEDIAN
<set quantifier> ::= DISTINCT | ALL
<filter clause> ::= FILTER <left paren> WHERE <search condition> <right paren>
<array aggregate function> ::= ARRAY_AGG <left paren> [ <set quantifier> ]
<value expression> [ <order by clause> ] <right paren>
<group concat function> ::= GROUP_CONCAT <left paren> [ <set quantifier> ]
<value expression> [ <order by clause> ] [ SEPARATOR <separator> ] <right paren>
<separator> ::= <character string literal>
Specify a value computed from a collection of rows.
An aggregate function is used exclusively in a <query specification> and its use transforms a normal query
into an aggregate query returning a single row instead of the multiple rows that the original query returns. For example,
SELECT acolumn <table expression> is a query that returns the value of acolumn for all the rows that satisfy
the given condition. But SELECT MAX(acolumn) <table expression> returns only one row, containing
the largest value in that column. The query SELECT COUNT(*) <table expression> returns the count of
rows, while SELECT COUNT(acolumn) <table expression> returns the count of rows where acolumn
IS NOT NULL.
If the <table expression> is a grouped table (has a GROUP BY clause), the aggregate function returns the result
of the COUNT or <computational operation> for each group. In this case the result has the same number
of rows as the original grouped query. For example SELECT SUM(acolumn) <table expression> when
<table expression> has a GROUP BY clause, returns the sum of values for acolumn in each group.
If all values are NULL, the aggregate function (except COUNT) returns NULL.
The SUM operation can be performed on numeric and interval expressions only. AVG and MEDIAN can be performed
on numeric, interval or datetime expressions. AVG returns the average value, while SUM returns the sum of all values.
MEDIAN returns the middle value in the sorted list of values.
MAX and MIN can be performed on all types of expressions and return the minimum or the maximum value.
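Using the INVOICE table from the DatabaseManager test data (which also appears in later examples), the common aggregate functions can be combined in a single query:

```sql
-- one row with the smallest, largest, average, middle and total values
SELECT MIN(total), MAX(total), AVG(total), MEDIAN(total), SUM(total) FROM invoice
```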
COUNT(*) returns the count of all values, including nulls, while COUNT(<value expression>) returns the
count of non-NULL values. COUNT with DISTINCT also accepts multiple arguments. In this usage the distinct
combinations of the arguments are counted. Examples below:
SELECT COUNT(DISTINCT firstname, lastname) FROM customer
SELECT COUNT(DISTINCT (firstname, lastname)) FROM customer
The EVERY, ANY and SOME operations can be performed on boolean expressions only. EVERY returns TRUE if
all the values are TRUE, otherwise FALSE. ANY and SOME are the same operation and return TRUE if one of the
values is TRUE, otherwise it returns FALSE.
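A sketch of the boolean aggregates, again over the test INVOICE table: EVERY is TRUE only when the condition holds for every row, while SOME is TRUE when it holds for at least one row:

```sql
SELECT EVERY(total > 0) AS all_positive, SOME(total > 1000) AS any_large FROM invoice
```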
The other operations perform the statistical functions STDDEV_POP, STDDEV_SAMP, VAR_SAMP, VAR_POP
on numeric values. NULL values are ignored in calculations.
User-defined aggregate functions can be defined and used instead of the built-in aggregate functions. Syntax and
examples are given in the SQL-Invoked Routines chapter.
The <filter clause> allows you to add a search condition. When the search condition evaluates to TRUE for
a row, the row is included in aggregation. Otherwise the row is not included. In the example below a single query
returns two different filtered counts:
SELECT COUNT(ITEM) FILTER (WHERE GENDER = 'F') AS "FEMALE COUNT", COUNT(ITEM) FILTER (WHERE
GENDER = 'M') AS "MALE COUNT" FROM PEOPLE
ARRAY_AGG is different from all other aggregate functions, as it does not ignore the NULL values. This set function
returns an array that contains all the values, for different rows, for the <value expression>. For example, if the
<value expression> is a column reference, the SUM function adds the values for all the rows together, while the
ARRAY_AGG function adds the value for each row as a separate element of the array. ARRAY_AGG can include an
optional <order by clause>. If this is used, the elements of the returned array are sorted according to the <order
by clause>, which can reference all the available columns of the query, not just the <value expression>
that is used as the ARRAY_AGG argument. The <order by clause> can have multiple elements (columns) and
each element can include NULLS LAST or DESC qualifiers. No <separator> is used with this function.
GROUP_CONCAT is a specialised function derived from ARRAY_AGG. This function computes the array in the
same way as ARRAY_AGG, removes all the NULL elements, then returns a string that is a concatenation of the
elements of the array. If <separator> has been specified, it is used to separate the elements of the array. Otherwise
the comma is used to separate the elements.
The example below shows a grouped query with ARRAY_AGG and GROUP_CONCAT. The CUSTOMER table that
is included for tests in the DatabaseManager GUI app is the source of the data.
SELECT LASTNAME, ARRAY_AGG(FIRSTNAME ORDER BY FIRSTNAME) FROM Customer GROUP BY LASTNAME
LASTNAME C2
-------- ----------------------------------------------------------
Steel    ARRAY['John','John','Laura','Robert']
King     ARRAY['George','George','James','Julia','Robert','Robert']
Sommer   ARRAY['Janet','Robert']
SELECT LASTNAME, GROUP_CONCAT(DISTINCT FIRSTNAME ORDER BY FIRSTNAME DESC SEPARATOR ' * ') FROM
Customer GROUP BY LASTNAME
LASTNAME C2
-------- -------------------------------
Steel    Robert * Laura * John
King     Robert * Julia * James * George
Sommer   Robert * Janet
The next sections discuss various types of tables and operations involved in data access statements.
Select Statement
The SELECT statement itself does not cover all types of data access statements, which may combine multiple SELECT
statements. The <query specification> is the most common data access statement and begins with the
SELECT keyword.
SELECT STATEMENT
select statement (general)
Users generally refer to the SELECT statement when they mean a <query specification> or <query
expression>. If a statement begins with SELECT and has no UNION or other set operations, then it is a <query
specification>. Otherwise it is a <query expression>.
Table
In data access statements, a table can be a database table (or view) or an ephemeral table formed for the duration of the
query. Some types of table are <table primary> and can participate in joins without the use of extra parentheses.
The BNF in the Table Primary section below lists different types of <table primary>:
Tables can also be formed by specifying the values that are contained in them:
<table value constructor> ::= VALUES <row value expression list>
<row value expression list> ::= <table row value expression> [ { <comma> <table
row value expression> }... ]
In the example below, a table with two rows and three columns is constructed out of some values:
VALUES (12, 14, null), (10, 11, CURRENT_DATE)
When a table is used directly in a UNION or similar operation, the keyword TABLE is used with the name:
<explicit table> ::= TABLE <table or query name>
In the examples below, all rows of the two tables are included in the union. The keyword TABLE is used in the first
example. The two examples below are equivalent.
TABLE atable UNION TABLE anothertable
SELECT * FROM atable UNION SELECT * FROM anothertable
Subquery
A subquery is simply a query expression in brackets. A query expression is usually a complete SELECT statement and
is discussed in the rest of this chapter. A scalar subquery returns one row with one column. A row subquery returns one
row with one or more columns. A table subquery returns zero or more rows with one or more columns. The distinction
between different forms of subquery is syntactic. Different forms are allowed in different contexts. If a scalar subquery
or a row subquery returns more than one row, an exception is raised. If a scalar or row subquery returns no row, it is
usually treated as returning a NULL. Depending on the context, this has different consequences.
<scalar subquery> ::= <subquery>
<row subquery> ::= <subquery>
<table subquery> ::= <subquery>
<subquery> ::= <left paren> <query expression> <right paren>
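The examples below sketch two of these forms, using the CUSTOMER and INVOICE test tables that appear in later sections; the first subquery is scalar (one row, one column), the second is a table subquery used with an IN predicate:

```sql
-- scalar subquery used as a value in the select list
SELECT id, firstname,
       (SELECT MAX(total) FROM invoice WHERE customerid = customer.id) AS largest
  FROM customer

-- table subquery used as the operand of an IN predicate
SELECT * FROM customer WHERE id IN (SELECT customerid FROM invoice)
```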
Query Specification
A query specification is also known as a SELECT statement. It is the most common form of <derived table>.
A <table expression> is a base table, a view or any form of allowed derived table. The SELECT statement
performs projection, naming, computing or aggregation on the rows of the <table expression>.
<query specification> ::= SELECT [ DISTINCT | ALL ] <select list> <table
expression>
<select list> ::= <asterisk> | <select sublist> [ { <comma> <select sublist> }... ]
<select sublist> ::= <derived column> | <qualified asterisk>
<qualified asterisk> ::= <asterisked identifier chain> <period> <asterisk>
Table Expression
A table expression is part of the SELECT statement and consists of the FROM clause with optional other clauses that
perform selection (of rows) and grouping from the table(s) in the FROM clause.
Derived Table
derived table
A query expression that is enclosed in parentheses and returns from zero to many rows is a <table subquery>.
In a <derived table> the query expression is self-contained and cannot reference the columns of other table
references. This is the traditional and most common form of use of a <table subquery>.
<derived table> ::= <table subquery>
Lateral
LATERAL
When the word LATERAL is used before a <table subquery>, it means the query expression can reference the columns
of other table references that go before it.
<lateral derived table> ::= LATERAL <table subquery>
The use of <lateral derived table> completely transforms the way a query is written. For example, the two
queries below are equivalent, but with different forms. The query with LATERAL is evaluated separately for each row
of the first table that satisfies the WHERE condition. The example below uses the tables that are created and populated
in DatabaseManagerSwing with the "Insert test data" menu option. The first query uses a scalar subquery to compute
the sum of invoice values for each customer. The second query is equivalent and uses a join with a LATERAL table.
SELECT firstname, lastname, (SELECT SUM(total) FROM invoice WHERE customerid = customer.id) s
FROM customer
SELECT firstname, lastname, a.c FROM customer, LATERAL(SELECT SUM(total) FROM invoice WHERE
customerid = customer.id) a (c)
UNNEST
UNNEST
UNNEST is similar to LATERAL, but instead of a query expression, one or more expressions that return an array
are used. These expressions are converted into a table which has one column for each expression and contains the
elements of the array. If WITH ORDINALITY is used, an extra column that contains the index of each element is
added to this table. The number of rows in the table equals the length of the largest array. The smaller arrays are
padded with NULL values. If an <array value expression> evaluates to NULL, an empty array is used in its place. The
array expression can contain references to any column of the table references preceding the current table reference.
<collection derived table> ::= UNNEST <left paren> <array value expression>, ...
<right paren> [ WITH ORDINALITY ]
The <array value expression> can be the result of a function call. If the arguments of the function call are
values from the tables on the left of the UNNEST, then the function is called for each row of the table.
In the first example below, UNNEST is used with the built-in function SEQUENCE_ARRAY to build a table
containing dates for the last seven days and their ordinal position. In the second example, a select statement returns
costs for the last seven days. In the third example, the WITH clause turns the two selects into named subqueries which
are used in a SELECT statement that uses a LEFT join.
SELECT * FROM UNNEST(SEQUENCE_ARRAY(CURRENT_DATE - 7 DAY, CURRENT_DATE - 1 DAY, 1 DAY)) WITH
ORDINALITY AS T(D, I)
D          I
---------- -
2010-07-25 1
2010-07-26 2
2010-07-27 3
2010-07-28 4
2010-07-29 5
2010-07-30 6
2010-07-31 7
SELECT item_date, SUM(cost) AS s FROM expenses WHERE item_date >= CURRENT_DATE - 7 DAY GROUP BY
item_date

ITEM_DATE  S
---------- ------
2010-07-27 100.12
2010-07-29 50.45
WITH costs(i_d, s) AS (SELECT item_date, SUM(cost) AS s FROM expenses WHERE item_date >=
CURRENT_DATE - 7 DAY GROUP BY item_date),
dates(d, i) AS (SELECT * FROM UNNEST(SEQUENCE_ARRAY(CURRENT_DATE - 7 DAY, CURRENT_DATE - 1 DAY,
1 DAY)) WITH ORDINALITY)
SELECT i, d, s FROM dates LEFT OUTER JOIN costs ON dates.d = costs.i_d
I D          S
- ---------- ------
1 2010-07-25 (null)
2 2010-07-26 (null)
3 2010-07-27 100.12
4 2010-07-28 (null)
5 2010-07-29 50.45
6 2010-07-30 (null)
7 2010-07-31 (null)
<table function derived table> ::= TABLE <left paren> <collection value
expression> <right paren>
Joined Table
Joins are operators with two tables as the operands, resulting in a third table, called the joined table. All join operators
are evaluated left to right; therefore, with multiple joins, the table resulting from the first join operator becomes an
operand of the next join operator. Parentheses can be used to group sequences of joined tables and change the evaluation
order. So if more than two tables are joined together with join operators, the end result is also a joined table. There are
different types of join, each producing the result table in a different way.
Joins with the USING or NATURAL keywords are similar to an equijoin, but they cannot be replaced simply with
an equijoin. The new table is formed with the specified or implied shared columns of the two tables, followed by the
rest of the columns from each table. In NATURAL JOIN, the shared columns are all the column pairs that have the
same name in the first and second table. In JOIN USING, only the column names that are specified by the USING clause
are shared. The joins are expressed as A NATURAL JOIN B, and A JOIN B USING (<comma separated
column name list>).
The columns of the joined table are formed by the following procedures: In JOIN ... USING the shared columns are
added to the joined table in the same order as they appear in the column name list. In NATURAL JOIN the shared
columns are added to the joined table in the same order as they appear in the first table. In both forms of join, the
non-shared columns of the first table are added in the order they appear in the first table, and finally the non-shared
columns of the second table are added in the order they appear in the second table.
The type of each shared column of the joined table is based on the type of the columns in the original tables. If the
original types are not exactly the same, the type of the shared column is formed by type aggregation. Type aggregation
selects a type that can represent values of both aggregated types. Simple type aggregation picks one of the types.
For example, SMALLINT and INTEGER result in INTEGER, and VARCHAR(10) and VARCHAR(20) result in
VARCHAR(20). More complex type aggregation inherits properties from both types. For example, DECIMAL(8) and
DECIMAL(6,2) result in DECIMAL(8,2).
In the examples below, the rows are joined exactly the same way, but the first query contains a.col_two and b.col_two
together with all the rest of the columns of both tables, while the second query returns only one copy of col_two.
SELECT * FROM a INNER JOIN b ON a.col_two = b.col_two
SELECT * FROM a INNER JOIN b USING (col_two)
OUTER JOIN
LEFT, RIGHT and FULL OUTER JOIN
The three qualifiers can be added to all types of JOIN except CROSS JOIN and UNION JOIN. First the new table is
populated with the rows from the original join. If LEFT is specified, all the rows from the first table that did not make
it into the new table are extended to the right with nulls and added to the table. If RIGHT is specified, all the rows
from the second table that did not make it into the new table are extended to the left with nulls and added to the table.
If FULL is specified, the addition of leftover rows is performed from both the first and the second table. These forms
are expressed by prefixing the join specification with the given keyword. For example A LEFT OUTER JOIN B
ON (<search condition>) or A NATURAL FULL OUTER JOIN B or A FULL OUTER JOIN B USING
(<comma separated column name list>).
SELECT a.*, b.* FROM a LEFT OUTER JOIN b ON a.col_one = b.col_two
Selection
Despite the name, selection has nothing to do with the list of columns in a SELECT statement. In fact, it refers to
the search condition used to limit the rows that form a result table (selection of rows, not columns). In SQL, simple
selection is expressed with a WHERE condition appended to a single table or a joined table. In some cases, this method
of selection is the only method available, for example in DELETE and UPDATE statements. But when it is possible
to perform the selection with join conditions, this is the better method, as it results in a clearer expression of the query.
Projection
Projection is selection of the columns from a simple or joined table to form a result table. Explicit projection is
performed in the SELECT statement by specifying the select column list. Some form of projection is also performed
in JOIN ... USING and NATURAL JOIN.
The joined table has columns that are formed according to the rules mentioned above. But in many cases, not all the
columns are necessary for the intended operation. If the statement is in the form, SELECT * FROM <joined table>,
then all the columns of <joined table> are returned. But normally, the columns to be returned are specified after the
SELECT keyword, separated from each other with commas.
Computed Columns
In the select list, it is possible to use expressions that reference any columns of <joined table>. Each of these expressions
forms a computed column. It is computed for each row of the result table, using the values of the columns of the
<joined table> for that row.
Naming
Naming is used to hide the original names of tables or table columns and to replace them with new names in the scope
of the query. Naming is also used for defining names for computed columns.
Without explicit naming, the name of a column is a predefined name. If the column is a column of a table, or is a named
parameter, the name of the table column or parameter is used. Otherwise it is generated by the database engine.
HyperSQL generates column names such as C1, C2, etc. As generated naming is implementation defined according
to the SQL Standard, it is better to explicitly name the computed and derived columns in your applications.
Naming in Joined Table
Naming is performed by adding a new name after a table's real name and by adding a list of column names after the
new table name. Both table naming and column naming are optional, but table naming is required for column naming.
The expression A [AS] X (<comma separated column name list>) means table A is used in the
query expression as table X and its columns are named as in the given list. The original name A, or its original column
names, are not visible in the scope of the query. The BNF is given below. The <correlation name> can be the
same or different from the name of the table. The <derived column list> is a comma separated list of column
names. The degree of this list must be equal to the degree of the table. The column names in the list must be distinct.
They can be the same or different from the names of the table's columns.
<table or query name> [ [ AS ] <correlation name> [ <left paren> <derived column
list> <right paren> ] ]
In the examples below, the columns of the original tables are named (a, b, c, d, e, f). The two queries are equivalent.
In the second query, the table and its columns are renamed and the new names are used in the WHERE clauses:
CREATE TABLE atable (a INT, b INT, c INT, d INT, e INT, f INT);
SELECT d, e, f FROM atable WHERE a + b = c
SELECT x, y, z FROM atable AS t (u, v, w, x, y, z) WHERE u + v = w
SELECT x + y AS xysum, y + z AS yzsum FROM atable AS t (u, v, w, x, y, z) WHERE u + v = w ORDER
BY xysum NULLS LAST, yzsum NULLS FIRST
If the names xysum or yzsum are not used, the computed columns cannot be referenced in the ORDER BY list.
Name Resolution
In a joined table, if a column name appears in tables on both sides then any reference to the name must use the table
name in order to specify which table is being referred to.
Grouping Operations
Grouping Operations
Grouping results in the elimination of duplicate rows. A grouping operation is performed after the operations discussed
above. A simple form of grouping is performed by the use of DISTINCT after SELECT. This eliminates all the
duplicate rows (rows that have the same value in each of their columns when compared to another row). The other
form of grouping is performed with the GROUP BY clause. This form is usually used together with aggregation.
GROUP BY
<group by clause> ::= GROUP BY <grouping element> [ { <comma> <grouping
element> }... ]
<grouping element> ::= <column reference> [ <collate clause> ]
The <group by clause> is a comma separated list of columns of the table in the <from clause> or expressions
based on the columns.
When a <group by clause> is used, only the columns used in the <group by clause> or expressions
used there, can be used in the <select list>, together with any <aggregate function> on other columns.
In other words, the column names or expressions listed in the GROUP BY clause dictate what can be used in the
<select list>. After the rows of the table formed by the <from clause> and the <where clause> are
finalised, the grouping operation groups together the rows that have the same values in the columns of the <group
by clause>. Then any <aggregate function> in the <select list> is performed on each group, and
for each group, a row is formed that contains the values of the columns of the <group by clause> and the values
returned from each <aggregate function>.
When the type of <column reference> is character string, the <collate clause> can be used to specify
the collation used for grouping the rows. For example, a collation that is not case sensitive can be used, or a collation
for a different language than the original collation of the column.
In the first example below, a simple column reference is used in GROUP BY, while in the second example, an
expression is used.
CREATE TABLE atable (a INT, b INT, c INT, d INT, e INT, f INT);
SELECT d, e, f FROM atable WHERE a + b = c GROUP BY d, e, f
SELECT d + e, SUM(f) FROM atable WHERE a + b = c GROUP BY d + e HAVING SUM(f) > 2 AND d + e > 4
A <having clause> filters the rows of the table that is formed after applying the <group by clause> using
its search condition. The search condition must be an expression based on the expressions in the GROUP BY list or
the aggregate functions used.
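The collation option mentioned above can be sketched as follows; this example assumes the CUSTOMER test table and the built-in case-insensitive collation SQL_TEXT_UCC, so that last names differing only in letter case fall into the same group:

```sql
SELECT COUNT(*) FROM customer GROUP BY lastname COLLATE SQL_TEXT_UCC
```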
DISTINCT
SELECT DISTINCT
When the keyword DISTINCT is used after SELECT, it works as a shortcut replacement for a simple GROUP BY
clause. The expressions in the SELECT list are used directly as the <group by clause>. The following examples
of SELECT DISTINCT and SELECT with GROUP BY are equivalent.
SELECT DISTINCT d, e + f FROM atable WHERE a + b = c
SELECT d, e + f FROM atable WHERE a + b = c GROUP BY d, e + f
Aggregation
Aggregation is an operation that computes a single value from the values of a column over several rows. The operation
is performed with an aggregate function. The simplest form of aggregation is counting, performed by the COUNT
function.
Other common aggregate functions return the maximum, minimum and average value among the values in different
rows. Aggregate functions were discussed earlier in this chapter.
Set Operations
Set and Multiset Operations
While join operations generally result in laterally expanded tables, SET and COLLECTION operations are performed
on two tables that have the same degree and result in a table of the same degree. The SET operations are UNION,
INTERSECT and EXCEPT (difference). When each of these operations is performed on two tables, the collection
of rows in each table and in the result is reduced to a set of rows, by eliminating duplicates. The set operations are
then performed on the two tables, resulting in the new table which itself is a set of rows. Collection operations are
similar but the tables are not reduced to sets before or after the operation and the result is not necessarily a set, but
a collection of rows.
The set operations on two tables A and B are: A UNION [DISTINCT] B, A INTERSECT [DISTINCT] B and A
EXCEPT [DISTINCT] B. The result table is formed in the following way: The UNION operation adds all the rows
from A and B into the new table, but avoids copying duplicate rows. The INTERSECT operation copies only those
rows from each table that also exist in the other table, but avoids copying duplicate rows. The EXCEPT operation
copies those rows from the first table which do not exist in the second table, but avoids copying duplicate rows.
The collection operations are similar to the set operations, but can return duplicate rows. They are A UNION ALL B,
A INTERSECT ALL B and A EXCEPT ALL B. The UNION ALL operation adds all the rows from A and B into
the new table. The INTERSECT ALL operation copies only those rows from each table that also exist in the other table.
If n copies of a row exist in one table, and m copies in the other table, the number of copies in the result table is the
smaller of n and m. The EXCEPT ALL operation copies those rows from the first table which do not exist in the second
table. If n copies of a row exist in the first table and m copies in the second table, the number of copies in the result
table is n-m, or if n < m, then zero.
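The difference can be sketched with the atable and anothertable examples used earlier; the first statement returns each leftover row once, while the second keeps the n-m duplicate copies:

```sql
-- set operation: duplicates eliminated before and after
TABLE atable EXCEPT DISTINCT TABLE anothertable

-- collection (multiset) operation: duplicate counts preserved
TABLE atable EXCEPT ALL TABLE anothertable
```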
The RECURSIVE keyword changes the way the elements of the <with list> are interpreted. The <query
expression> contained in the <with list element> must be the UNION or UNION ALL of two <query
expression body> elements (simple VALUES or SELECT statements). The left element of the UNION is evaluated
first and its result becomes the result of the <with list element>. After this step, the current result of the
<with list element> is referenced in the right element (a SELECT statement) of the UNION, and the UNION
is performed between this result and the previous result of the <with list element>, which is enlarged by the
operation. The UNION operation is performed again and again, until the result of the <with list element>
stops changing. The result of the <with list element> is now complete and is later used in the execution of
the <query expression body>. When RECURSIVE is used, the <with column list> must be defined.
HyperSQL limits recursion to 265 rounds. If this is exceeded, an error is raised.
A trivial example of a recursive query is given below. Note the first column GEN. For example, if each row of the table
represents a member of a family of dogs, together with its parent, the first column of the result indicates the calculated
generation of each dog, ranging from first to fourth generation.
CREATE TABLE pptree (pid INT, id INT);
INSERT INTO pptree VALUES (NULL, 1) ,(1,2), (1,3),(2,4),(4,5),(3,6),(3,7);
WITH RECURSIVE tree (gen, par, child) AS (
VALUES(1, CAST(null as int), 1)
UNION
SELECT gen + 1, pid, id FROM pptree, tree WHERE pid = child
) SELECT * FROM tree;
GEN PAR    CHILD
--- ------ -----
1   (null) 1
2   1      2
2   1      3
3   2      4
3   3      6
3   3      7
4   4      5
If recursive queries become complex, they also become very difficult to develop and debug. HyperSQL provides an
alternative to this with user-defined SQL functions which return tables. Functions can perform any complex, repetitive
task with better control, using loops, variables and, if necessary, recursion.
Query Expression
A query expression consists of an optional WITH clause and a query expression body. The optional WITH clause lists
one or more named ephemeral tables that can be referenced, just like the database tables in the query expression body.
<query expression> ::= [ <with clause> ] <query expression body>
A query expression body refers to a table formed by using UNION and other set operations. The query expression
body is evaluated from left to right and the INTERSECT operator has precedence over the UNION and EXCEPT
operators. A simplified BNF is given below:
<query expression body> ::= <query term> | <query expression body> { UNION |
EXCEPT } [ ALL | DISTINCT ] [ <corresponding spec> ] <query term>
<query term> ::= <query primary> | <query term> INTERSECT [ ALL | DISTINCT ]
[ <corresponding spec> ] <query term>
<query primary> ::= <simple table> | <left paren> <query expression body> [ <order
by clause> ] [ <result offset clause> ] [ <fetch first clause> ] <right paren>
The type of each column of the query expression is determined by combining the types of the corresponding columns
from the two participating tables.
Ordering
When the rows of the result table have been formed, it is possible to specify the order in which they are returned to the
user. The ORDER BY clause is used to specify the columns used for ordering, and whether ascending or descending
ordering is used. It can also specify whether NULL values are returned first or last.
SELECT x + y AS xysum, y + z AS yzsum FROM atable AS t (u, v, w, x, y, z) WHERE u + v = w ORDER
BY xysum NULLS LAST, yzsum NULLS FIRST
The ORDER BY clause specifies one or more <value expressions>. The list of rows is sorted according to the
first <value expression>. When some rows are sorted equal then they are sorted according to the next <value
expression> and so on.
<order by clause> ::= ORDER BY <sort specification> [ { <comma> <sort
specification> }... ]
If the column used for ordering, for example a LASTNAME column, is itself defined in the table definition with
COLLATE "English", then a COLLATE clause is not necessary in the ORDER BY expression.
An ORDER BY operation can sometimes be optimised by the engine when it can use the same index for accessing the
table data and ordering. Optimisation can happen both with DESC + NULLS LAST and ASC + NULLS FIRST.
sort specification list
<sort specification list> ::= <value expression> [ASC | DESC] [NULLS FIRST |
NULLS LAST]
Specify a sort order. A sort operation is performed on the result of a <query expression> or <query
specification> and sorts the result according to one or more <value expression>. The <value
expression> is usually a single column of the result, but in some cases it can be a column of the <table
expression> that is not used in the select list. The default is ASC and NULLS FIRST.
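For example, with a hypothetical person table, each sort key can carry its own direction and null placement:

SELECT lastname, firstname FROM person
ORDER BY lastname DESC NULLS LAST, firstname ASC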
Slicing
A different form of limiting the rows can be performed on the result table after it has been formed according to all the
other operations (selection, grouping, ordering etc.). This is specified by the FETCH ... ROWS and OFFSET clauses
of a SELECT statement. In this form, the specified OFFSET rows are removed from start of the table, then up to the
specified FETCH rows are kept and the rest of the rows are discarded.
<result offset clause> ::= OFFSET <offset row count> { ROW | ROWS }
<fetch first clause> ::= FETCH { FIRST | NEXT } [ <fetch first row count> ]
{ ROW | ROWS } ONLY [ USING INDEX ]
<limit clause> ::= LIMIT <fetch first row count> [ USING INDEX ]
A slicing operation takes the result set that has been already processed and ordered. It then discards the specified
number of rows from the start of the result set and returns the specified number of rows after the discarded rows. The
<offset row count> and <fetch first row count> can be constants, dynamic variables, routine parameters, or routine
variables. The type of the constants must be INTEGER.
SELECT a, b FROM atable WHERE d < 5 ORDER BY absum OFFSET 3 FETCH 2 ROWS ONLY
SELECT a, b FROM atable WHERE d < 5 ORDER BY absum OFFSET 3 LIMIT 2 /* alternative keyword */
When the FETCH keyword is used, the specified number of rows must be at least 1, otherwise an error is returned.
This behaviour is consistent with the SQL Standard. When the LIMIT keyword is used, the specified number of rows
can be zero, which means return all rows (no LIMIT). In MySQL and PostgreSQL syntax modes, zero limit means
no rows (empty result).
If there is an index on all the columns specified in the ORDER BY clause, it is normally used for slicing. In some
queries, an index on another column may take precedence because it is used to process the WHERE condition. It is
possible to add USING INDEX to the end of the slicing clause to force the use of the index for ordering and slicing,
instead of the index for the WHERE condition.
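As a sketch, assuming an index exists on column a of atable, USING INDEX forces that index to serve both the ordering and the slicing:

SELECT a, b FROM atable ORDER BY a OFFSET 3 FETCH FIRST 2 ROWS ONLY USING INDEX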
Truncate Statement
TRUNCATE TABLE
truncate table statement
<truncate table statement> ::= TRUNCATE TABLE <target table> [ <identity column
restart option> ] [ <truncate options> ]
<identity column restart option> ::= CONTINUE IDENTITY | RESTART IDENTITY
<truncate options> ::= AND COMMIT [ NO CHECK ]
Delete all rows of a table without firing its triggers. This statement can only be used on base tables (not views). If
the table is referenced in a FOREIGN KEY constraint defined on another table, the statement causes an exception.
Triggers defined on the table are not executed with this statement. The default for <identity column restart
option> is CONTINUE IDENTITY. This means no change to the IDENTITY sequence of the table. If RESTART
IDENTITY is specified, then the sequence is reset to its start value.
TRUNCATE is faster than ordinary DELETE. The TRUNCATE statement is an SQL Standard data change statement,
therefore it is performed under transaction control and can be rolled back if the connection is not in the auto-commit
mode.
HyperSQL also supports the optional AND COMMIT and NO CHECK options. If AND COMMIT is used, then
the transaction is committed with the execution of the TRUNCATE statement. The action cannot be rolled back. If
the additional NO CHECK option is also specified, then the TRUNCATE statement is executed even if the table is
referenced in a FOREIGN KEY constraint defined on another, non-empty table. This form of TRUNCATE is faster
than the default form and does not use much memory.
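For example, using the atable name from earlier examples, the two forms look like this:

TRUNCATE TABLE atable RESTART IDENTITY
TRUNCATE TABLE atable AND COMMIT NO CHECK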
TRUNCATE SCHEMA
truncate schema statement
<truncate schema statement> ::= TRUNCATE SCHEMA <target schema> [ <identity
column restart option> ] AND COMMIT [ NO CHECK ]
Performs the equivalent of a TRUNCATE TABLE ... AND COMMIT on all the tables in the schema. If the additional
NO CHECK option is also specified, then the TRUNCATE statement is executed even if any of the tables in the
schema is referenced in a FOREIGN KEY constraint defined on a non-empty table in a different schema.
If RESTART IDENTITY is specified, all table IDENTITY sequences and all SEQUENCE objects in the schema are
reset to their start values.
Use of this statement requires schema ownership or administrative privileges.
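For example, with a hypothetical schema named sales_data:

TRUNCATE SCHEMA sales_data RESTART IDENTITY AND COMMIT NO CHECK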
Insert Statement
INSERT INTO
insert statement
<insert statement> ::= INSERT INTO <target table> <insert columns and source>
<insert columns and source> ::= <from subquery> | <from constructor> | <from
default>
<from subquery> ::= [ <left paren> <insert column list> <right paren> ]
[ <override clause> ] <query expression>
<from constructor> ::= [ <left paren> <insert column list> <right paren> ]
[ <override clause> ] <contextually typed table value constructor>
<override clause> ::= OVERRIDING USER VALUE | OVERRIDING SYSTEM VALUE
<from default> ::= DEFAULT VALUES
<insert column list> ::= <column name list>
Insert new rows in a table. An INSERT statement inserts one or more rows into the table.
The special form, INSERT INTO <target table> DEFAULT VALUES can be used with tables which have
a default value for each column.
With the other forms of INSERT, the optional (<insert column list>) specifies to which columns of the
table the new values are assigned.
In one form, the inserted values are from a <query expression> and all the rows that are returned by the <query
expression> are inserted into the table. If the <query expression> returns no rows, nothing is inserted.
In the other form, a comma separated list of values called <contextually typed table value
constructor> is used to insert one or more rows into the table. This list is contextually typed, because the keywords
NULL and DEFAULT can be used for the values that are assigned to each column of the table. The keyword DEFAULT
means the default value of the column and can be used only if the target column has a default value or is an IDENTITY
or GENERATED column of the table.
The <override clause> must be used when a value is explicitly assigned to a column that has been defined as
GENERATED ALWAYS AS IDENTITY. The clause OVERRIDING SYSTEM VALUE means the provided values
are used for the insert, while OVERRIDING USER VALUE means the provided values are simply ignored and the
values generated by the system are used instead.
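A sketch with a hypothetical table g illustrates the two forms:

CREATE TABLE g (id INTEGER GENERATED ALWAYS AS IDENTITY, val VARCHAR(10))
INSERT INTO g (id, val) OVERRIDING SYSTEM VALUE VALUES (100, 'a') -- 100 is used
INSERT INTO g (id, val) OVERRIDING USER VALUE VALUES (200, 'b') -- 200 is ignored; the system generates the value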
An array can be inserted into a column of the array type by using literals, by specifying a parameter in a prepared
statement, or with an existing array returned by a query expression. The last example below inserts an array.
The rows that are inserted into the table are checked against all the constraints that have been declared on the table.
The whole INSERT operation fails if any row fails to be inserted due to a constraint violation. Examples:
CREATE TABLE T (A INTEGER GENERATED BY DEFAULT AS IDENTITY, B INTEGER DEFAULT 2)
INSERT INTO T DEFAULT VALUES /* all columns of T have DEFAULT clauses */
INSERT INTO T (SELECT * FROM Z) /* table Z has the same columns as table T */
INSERT INTO T (A,B) VALUES ((1,2),(3,NULL), (DEFAULT,6)) /* three rows are inserted into table T
*/
ALTER TABLE T ADD COLUMN D VARCHAR(10) ARRAY /* an ARRAY column is added */
INSERT INTO T VALUES DEFAULT, 3, ARRAY['hot','cold']
If the table contains an IDENTITY column, the value for this column for the last row inserted by a session can be
retrieved using a call to the IDENTITY() function. This call returns the last value inserted by the calling session. When
the insert statement is executed with a JDBC Statement or PreparedStatement method, the getGeneratedKeys() method
of Statement can be used to retrieve not only the IDENTITY column, but also any GENERATED computed column,
or any other column. The getGeneratedKeys() returns a ResultSet with one or more columns. This contains one row
per inserted row, and can therefore return all the generated columns for a multi-row insert.
There are three methods of specifying which generated keys should be returned. The first method does not specify
the columns of the table. With this method, the returned ResultSet will have a column for each column of the table
that is defined as GENERATED ... AS IDENTITY or GENERATED ... AS (<expression>). The two other methods
require the user to specify which columns should be returned, either by column indexes, or by column names. With
these methods, there is no restriction on which columns of the inserted values to be returned. This is especially useful
when some columns have a default clause which is a function, or when there are BEFORE triggers on the table that
may provide the inserted value for some of the columns.
Update Statement
UPDATE
update statement: searched
<update statement: searched> ::= UPDATE <target table> [ [ AS ] <correlation
name> ] SET <set clause list> [ WHERE <search condition> ][ LIMIT <fetch first
row count> ]
Update rows of a table. An UPDATE statement selects rows from the <target table> using an implicit SELECT
statement formed in the following manner:
SELECT * FROM <target table> [ [ AS ] <correlation name> ] [ WHERE <search
condition> ]
Then it applies the SET <set clause list> expression to each selected row.
If the implicit SELECT returns no rows, no update takes place. When used in JDBC, the number of rows returned by
the implicit SELECT is returned as the update count.
If there are FOREIGN KEY constraints on other tables that reference the subject table, and the FOREIGN KEY
constraints have referential actions, then rows from those other tables that reference the updated rows are updated,
according to the specified referential actions.
The rows that are updated are checked against all the constraints that have been declared on the table. The whole
UPDATE operation fails if any row violates any constraint.
The LIMIT clause, or alternatively the ROWNUM() function in the WHERE clause, can be used to limit the number
of rows that are updated. This is useful when a very large number of rows needs to be updated. In this situation, you
can perform the operation in chunks and commit after each chunk to reduce memory usage and the total time of the
operation.
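For example, a hypothetical bulk update performed in chunks of 10000 rows, with a commit after each statement:

UPDATE invoice SET processed = TRUE WHERE processed = FALSE LIMIT 10000
COMMIT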
set clause list
<set clause list> ::= <set clause> [ { <comma> <set clause> }... ]
<set clause> ::= <multiple column assignment> | <set target> <equals operator>
<update source>
<multiple column assignment> ::= <set target list> <equals operator> <assigned
row>
<set target list> ::= <left paren> <set target> [ { <comma> <set target> }... ]
<right paren>
<assigned row> ::= <contextually typed row value expression>
<set target> ::= <column name>
<update source> ::= <value expression> | <contextually typed value specification>
Specify a list of assignments. This is used in UPDATE, MERGE and SET statements to assign values to a scalar or
row target.
Apart from setting a whole target to a value, a SET statement can set individual elements of an array to new values.
The last example below shows this form of assignment to the array in the column named B.
In the examples given below, UPDATE statements with single and multiple assignments are shown. Note in the third
example, a SELECT statement is used to provide the update values for columns A and C, while the update value for
column B is given separately. The SELECT statement must return exactly one row. In this example the SELECT
statement refers to the existing value for column C in its search condition.
UPDATE T SET A = 5 WHERE ...
UPDATE T SET (A, B) = (1, NULL) WHERE ...
UPDATE T SET (A, C) = (SELECT X, Y FROM U WHERE Z = C), B = 10 WHERE ...
UPDATE T SET A = 3, B[3] = 'warm'
Merge Statement
MERGE INTO
merge statement
<merge statement> ::= MERGE INTO <target table> [ [ AS ] <merge correlation
name> ] USING <table reference> ON <search condition> <merge operation
specification>
<merge insert value list> ::= <left paren> <merge insert value element>
[ { <comma> <merge insert value element> }... ] <right paren>
<merge insert value element> ::= <value expression> | <contextually typed value
specification>
Updates rows, deletes rows, or inserts new rows into the <target table>. The MERGE statement uses a second
table, specified by <table reference>, to determine the rows to be updated or inserted. It is possible to use the
statement only to update rows, to delete rows or to insert rows, but usually both update and insert are specified.
The <search condition> matches each row of the <table reference> with each row of the <target
table>. If the two rows match then the UPDATE clause is used to update the matching row of the target table. Those
rows of <table reference> that have no matching rows are then used to insert new rows into the <target
table>. Therefore, a MERGE statement can update or delete between 0 and all the rows of the <target table>
and can insert between 0 and the number of the rows in <table reference> into the <target table>. If
any row in the <target table> matches more than one row in <table reference> a cardinality error is
raised. On the other hand, several rows in the <target table> can match a single row in <table reference>
without any error. The constraints and referential actions specified on the database tables are enforced the same way
as for an update, a delete and an insert statement.
The optional <search condition> in each WHEN clause can be used to filter (reduce) the rows for the particular
action.
The MERGE statement can be used with only the WHEN NOT MATCHED clause as a conditional INSERT statement
that inserts a row if no existing rows match a condition.
In the first example below, the table originally contains two rows for different furniture. The <table reference>
is the (VALUES(1, 'conference table'), (14, 'sofa'), (5, 'coffee table')) expression,
which evaluates to a table with 3 rows. When the x value for a row matches an existing row, then the existing row is
updated. When the x value does not match, the row is inserted. Therefore one row of table t is updated from 'dining
table' to 'conference table', and two rows are inserted into table t. The second example uses a SELECT statement as
the source of the values for the MERGE.
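The first example described above can be sketched as follows (the column names id and description are assumptions, as the table definition is not shown here):

MERGE INTO t USING (VALUES(1, 'conference table'), (14, 'sofa'), (5, 'coffee table'))
  AS vals(x, y) ON t.id = vals.x
  WHEN MATCHED THEN UPDATE SET t.description = vals.y
  WHEN NOT MATCHED THEN INSERT VALUES vals.x, vals.y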
In the third example, a new row is inserted into the table only when the primary key for the new row does not exist. This
example uses parameters and should be executed as a JDBC PreparedStatement. The parameter is cast as INTEGER
because the MERGE statement does not determine the types of values in the USING clause.
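The conditional-insert form of the third example can be sketched as below (column names are assumptions); both parameters are set before execution:

MERGE INTO t USING (VALUES(CAST(? AS INT))) AS vals(x) ON t.id = vals.x
  WHEN NOT MATCHED THEN INSERT VALUES vals.x, ?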
GET DIAGNOSTICS
get diagnostics statement
<get diagnostics statement> ::= GET DIAGNOSTICS <simple target value
specification> = ROW_COUNT
The <simple target value specification> is a session variable, a routine variable, or an OUT parameter.
The keyword ROW_COUNT specifies the row count returned by the last executed statement. For INSERT, UPDATE,
DELETE and MERGE statements, this is the number of rows affected by the statement. This is the same value as
returned by JDBC executeUpdate() methods. For all other statements, zero is returned.
The value of ROW_COUNT is stored in the specified target.
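As a sketch inside a routine body (the variable name update_count is an assumption):

BEGIN ATOMIC
  DECLARE update_count INTEGER;
  UPDATE atable SET b = b + 1 WHERE d < 5;
  GET DIAGNOSTICS update_count = ROW_COUNT;
END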
In future versions, more options will be supported for diagnostics values.
SQL-Invoked Routines
Access to routines can be granted to users with GRANT EXECUTE or GRANT ALL. For example, GRANT
EXECUTE ON myroutine TO PUBLIC.
Routine Definition
SQL-Invoked Routines, whether PSM or JRT, are defined using a SQL statement with the same syntax. The part that
is different is the <routine body> which consists of SQL statements in PSM routines or a reference to a Java method
in JRT routines.
Details of Routine definition are discussed in this section. You may start by reading the next two sections which provide
several examples before reading this section for the details.
Routine definition has several mandatory or optional clauses. The complete BNF supported by HyperSQL and the
remaining clauses are documented in this section.
CREATE FUNCTION
CREATE PROCEDURE
routine definition
Routine definition is similar for procedures and functions. A function definition has the mandatory <returns
clause> which is discussed later. The description given so far covers the essential elements of the specification with
the BNF given below.
<schema procedure> ::= CREATE PROCEDURE <schema qualified routine name> <SQL
parameter declaration list> <routine characteristics> <routine body>
<schema function> ::= CREATE FUNCTION <schema qualified routine name> <SQL
parameter declaration list> <returns clause> <routine characteristics> <routine
body>
Parameter declaration list has been described above. For SQL/JRT routines, the <SQL parameter name> is
optional while for SQL/PSM routines, it is required. If the <parameter mode> of a parameter is OUT or INOUT,
it must be specified. The BNF is given below:
<SQL parameter declaration list> ::= <left paren> [ <SQL parameter declaration>
[ { <comma> <SQL parameter declaration> }... ] ] <right paren>
<SQL parameter declaration> ::= [ <parameter mode> ] [ <SQL parameter name> ]
<parameter type>
<parameter mode> ::= IN | OUT | INOUT
<parameter type> ::= <data type>
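A minimal SQL/PSM function and procedure following this pattern are sketched below (the event_log table is hypothetical):

CREATE FUNCTION an_hour_before(t TIMESTAMP)
  RETURNS TIMESTAMP
  RETURN t - INTERVAL '1' HOUR

CREATE PROCEDURE log_event(IN msg VARCHAR(100))
  MODIFIES SQL DATA
  INSERT INTO event_log VALUES (CURRENT_TIMESTAMP, msg)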
Return Value and Table Functions
RETURNS
returns clause
The <returns clause> specifies the type of the return value of a function (not a procedure). For all SQL/PSM
functions and ordinary SQL/JRT functions, this is simply a type definition which can be a built-in type, a DOMAIN
type or a DISTINCT type, or alternatively, a TABLE definition. For example, RETURNS INTEGER.
For a SQL/JRT function, it is possible to define a <returns table type> for a Java method that returns
a java.sql.ResultSet object. Such SQL/JRT functions are called table functions. Table functions are used
differently from normal functions. A table function can be used in an SQL query expression exactly where a normal
table or view is allowed. At the time of invocation, the Java method is called and the returned ResultSet is transformed
into an SQL table. The column types of the declared TABLE must match those of the ResultSet, otherwise an exception
is raised at the time of invocation.
If a <returns table type> is defined for an SQL/PSM function, the following expression is used inside the
function to return a table: RETURN TABLE ( <query expression> ); In the example below, a table with
two columns is returned.
RETURN TABLE ( SELECT a, b FROM atable WHERE e = 10 );
Functions that return a table are designed to be used in SELECT statements using the TABLE keyword to form a
joined table.
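For example, assuming a table function named customer_report has been defined, it can be queried like a table:

SELECT * FROM TABLE(customer_report(2024)) AS r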
When a JDBC CallableStatement is used to CALL the function, the table returned from the function call is
returned and can be accessed with the getResultSet() method of the CallableStatement.
<returns clause> ::= RETURNS <returns type>
<returns type> ::= <returns data type> | <returns table type>
<returns table type> ::= TABLE <table function column list>
<table function column list> ::= <left paren> <table function column list
element> [ { <comma> <table function column list element> } ... ] <right paren>
<table function column list element> ::= <column name> <data type>
<returns data type> ::= <data type>
routine body
Routine body is either one or more SQL statements or a Java reference. The user that defines the routine by issuing the
CREATE FUNCTION or CREATE SCHEMA command must have the relevant access rights to all tables, sequences,
routines, etc. that are accessed by the routine. If another user is given EXECUTE privilege on the routine, then there are
two possibilities, depending on the <rights clause>. This clause refers to the access rights that are checked when
a routine is invoked. The default is SQL SECURITY DEFINER, which means access rights of the definer are used;
therefore no extra checks are performed when the other user invokes the routine. The alternative SQL SECURITY
INVOKER means access rights on all the database objects referenced by the routine are checked for the invoker. This
alternative is not supported by HyperSQL.
<routine body> ::= <SQL routine spec> | <external body reference>
<SQL routine spec> ::= [ <rights clause> ] <SQL routine body>
<rights clause> ::= SQL SECURITY INVOKER | SQL SECURITY DEFINER
SQL routine body
The routine body of an SQL routine consists of an SQL statement.
<SQL routine body> ::= <SQL procedure statement>
EXTERNAL NAME
external body reference
External name specifies the qualified name of the Java method associated with this routine. HyperSQL 2.3 only
supports Java methods within the classpath. The <external Java reference string> is a quoted string
which starts with CLASSPATH: and is followed by the Java package, class and method names separated with dots.
HyperSQL does not currently support the optional <Java parameter declaration list>.
<external body reference> ::= EXTERNAL NAME <external Java reference string>
<external Java reference string> ::= <jar and class name> <period> <Java method
name> [ <Java parameter declaration list> ]
Routine Characteristics
The <routine characteristics> clause covers several sub-clauses:
<routine characteristics> ::= [ <routine characteristic>... ]
<routine characteristic> ::= <language clause> | <parameter style clause> |
SPECIFIC <specific name> | <deterministic characteristic> | <SQL-data access
indication> | <null-call clause> | <returned result sets characteristic> |
<savepoint level indication>
LANGUAGE
language clause
The <language clause> refers to the language in which the routine body is written. It is either SQL or Java. The
default is SQL, so JAVA must be specified for SQL/JRT routines.
<language clause> ::= LANGUAGE <language name>
<language name> ::= SQL | JAVA
The parameter style is not allowed for SQL routines. It is optional for Java routines and, in HyperSQL, the only value
allowed is JAVA.
<parameter style clause> ::= PARAMETER STYLE <parameter style>
<parameter style> ::= JAVA
SPECIFIC NAME
specific name
The SPECIFIC <specific name> clause is optional but the engine creates an automatic name if it is not
present. When there are several versions of the same routine, the <specific name> is used in schema manipulation
statements to drop or alter a specific version. The <specific name> is a user-defined name. It applies to both
functions and procedures. In the examples below, a specific name is specified for each function.
CREATE FUNCTION an_hour_before_or_now(t TIMESTAMP)
RETURNS TIMESTAMP
NO SQL
LANGUAGE JAVA PARAMETER STYLE JAVA
SPECIFIC an_hour_before_or_now_with_timestamp
EXTERNAL NAME 'CLASSPATH:org.npo.lib.nowLessAnHour'
DETERMINISTIC
deterministic characteristic
The <deterministic characteristic> clause indicates whether a routine is deterministic. Deterministic
means the routine does not reference random values, external variables, or the time of invocation. The default is NOT
DETERMINISTIC. It is essential to declare this characteristic correctly for an SQL/JRT routine, as the engine does
not know the contents of the Java code, which could include calls to methods returning random or time-sensitive values.
<deterministic characteristic> ::= DETERMINISTIC | NOT DETERMINISTIC
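For example, a simple deterministic PSM function (the name and rate are illustrative):

CREATE FUNCTION tax_due(amount DECIMAL(10,2))
  RETURNS DECIMAL(10,2)
  DETERMINISTIC
  RETURN amount * 0.15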
SQL DATA access
SQL DATA access characteristic
The <SQL-data access indication> clause indicates the extent to which a routine interacts with the database
or the data stored in the database tables in different schemas (SQL DATA).
NO SQL means no SQL command is issued in the routine body and can be used only for SQL/JRT functions.
CONTAINS SQL means some SQL commands are used, but they do not read or modify the SQL data. READS SQL
DATA and MODIFIES SQL DATA are self explanatory.
A CREATE PROCEDURE definition can use MODIFIES SQL DATA. This is not allowed in CREATE FUNCTION.
Note that a PROCEDURE or a FUNCTION may have internal tables or return a table which is populated by the
routine's statements. These tables are not considered SQL DATA, therefore there is no need to specify MODIFIES
SQL DATA for such routines.
<SQL-data access indication> ::= NO SQL | CONTAINS SQL | READS SQL DATA |
MODIFIES SQL DATA
NULL INPUT
null call clause
Null Arguments
The <null-call clause> is used only for functions. If a function returns NULL when any of the calling
arguments is null, then by specifying RETURNS NULL ON NULL INPUT, calls to the function are known to be
redundant and do not take place when an argument is null. This simplifies the coding of the SQL/JRT Java methods
and improves performance at the same time.
<null-call clause> ::= RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
SAVEPOINT LEVEL
transaction impact
The <savepoint level indication> is used only for procedures and refers to the visibility of existing
savepoints within the body of the procedure. If NEW SAVEPOINT LEVEL is specified, savepoints that have been
declared prior to calling the procedure become invisible within the body of the procedure. HyperSQL's
implementation accepts only NEW SAVEPOINT LEVEL.
<savepoint level indication> ::= NEW SAVEPOINT LEVEL | OLD SAVEPOINT LEVEL
DYNAMIC RESULT SETS
returned result sets characteristic
The <returned result sets characteristic> is used with SQL/PSM and SQL/JRT procedures (not with
functions). The maximum number of result sets that a procedure may return can be specified with the clause below.
The default is zero. If you want your procedure to return result sets, you must specify the maximum number of result
sets that your procedure may return. Details are discussed in the next sections.
<returned result sets characteristic> ::= DYNAMIC RESULT SETS <maximum returned
result sets>
The procedure inserts a row into an existing table with the definition given below:
CREATE TABLE customers(id INTEGER GENERATED BY DEFAULT AS IDENTITY, firstname VARCHAR(50),
lastname VARCHAR(50), added TIMESTAMP);
The routine body is often a compound statement. A compound statement can contain one or more SQL statements,
which can include control statements, as well as nested compound statements.
Please note carefully the use of <semicolon>, which is required at the end of some statements but not accepted
at the end of others.
SQL language routines (PSM) do not rely on custom Java classes to be present on the classpath. The databases that
use them are therefore more portable.
For a routine that accesses SQL DATA, all the SQL statements in an SQL routine are known and monitored by the
engine. The engine will not allow a table, routine or sequence that is referenced in an SQL routine to be dropped,
or its structure modified in a way that will break the routine execution. The engine does not keep this information
about a Java routine.
Because the statements in an SQL routine are known to the engine, the execution of an SQL routine locks all
the database objects it needs to access before the actual execution. With Java routines, locks are obtained during
execution and this may cause additional delays in multi-threaded access to the database.
For routines that do not access SQL DATA, Java routines (SQL/JRT) may be faster if they perform extensive
calculations.
Only Java routines can access external programs and resources directly.
Routine Statements
The following SQL statements can be used only in routines. These statements are covered in this section.
<handler declaration>
<table variable declaration>
<variable declaration>
<declare cursor>
<assignment statement>
<compound statement>
<case statement>
<if statement>
<while statement>
<repeat statement>
<for statement>
<loop statement>
<iterate statement>
<leave statement>
<signal statement>
<resignal statement>
<return statement>
<select statement: single row>
<open statement>
The following SQL statements can be used in procedures but not generally in functions (they can be used in
functions only to change the data in a local table variable). These statements are covered in other chapters of this Guide.
<call statement>
<delete statement>
<insert statement>
<update statement>
<merge statement>
As shown in the examples below, the formal parameters and the variables of the routine can be used in statements,
similar to the way a column reference is used.
Compound Statement
A compound statement is enclosed in a BEGIN / END block with optional labels. It can contain one or more <SQL
variable declaration>, <declare cursor> or <handler declaration> before at least one SQL
statement. The BNF is given below:
<compound statement> ::= [ <beginning label> <colon> ] BEGIN [[NOT] ATOMIC]
[{<SQL variable declaration> <semicolon>} ...]
[{<declare cursor> <semicolon>} ...]
[{<handler declaration> <semicolon>}...]
{<SQL procedure statement> <semicolon>} ...
END [ <ending label> ]
An example of a simple compound statement body is given below. It performs the common task of inserting related data
into two tables. The IDENTITY value that is automatically inserted in the first table is retrieved using the IDENTITY()
function and inserted into the second table. Other examples show more complex compound statements.
CREATE PROCEDURE new_customer(firstname VARCHAR(50), lastname VARCHAR(50), address
VARCHAR(100))
MODIFIES SQL DATA
BEGIN ATOMIC
INSERT INTO customers VALUES (DEFAULT, firstname, lastname, CURRENT_TIMESTAMP);
INSERT INTO addresses VALUES (DEFAULT, IDENTITY(), address);
END
Table Variables
A <table variable declaration> defines the name and columns of a local table that can be used in
the routine body. The table cannot have constraints. Table variable declarations are made before scalar variable
declarations.
BEGIN ATOMIC
DECLARE TABLE temp_table (col_a INT, col_b VARCHAR(20));
DECLARE temp_id INTEGER;
-- more statements
END
Variables
A <variable declaration> defines the name and data type of the variable and, optionally, its default value. In
the next example, a variable is used to hold the IDENTITY value. In addition, the formal parameters of the procedure
are identified as input parameters with the use of the optional IN keyword. This procedure does exactly the same job
as the procedure in the previous example.
CREATE PROCEDURE new_customer(IN firstname VARCHAR(50), IN lastname VARCHAR(50), IN address
VARCHAR(100))
MODIFIES SQL DATA
BEGIN ATOMIC
DECLARE temp_id INTEGER;
INSERT INTO CUSTOMERS VALUES (DEFAULT, firstname, lastname, CURRENT_TIMESTAMP);
SET temp_id = IDENTITY();
INSERT INTO ADDRESSES VALUES (DEFAULT, temp_id, address);
END
Cursors
A <declare cursor> statement is used to declare a SELECT statement. The current usage of this statement
in HyperSQL 2.3 is exclusively to return a result set from a procedure. The result set is returned to the JDBC
CallableStatement object that calls the procedure. The getResultSet() method of CallableStatement is then used to
retrieve the JDBC ResultSet.
In the <routine definition>, the DYNAMIC RESULT SETS clause must be used to specify a value above zero.
The DECLARE CURSOR statement is used after any variable declaration in a compound statement block. The SELECT
statement should be followed with FOR READ ONLY to avoid possible error messages. The <open statement>
is then executed for the cursor at the point where the result set should be populated.
After the procedure is executed with a JDBC CallableStatement execute() method, all the result sets that were opened
are returned to the JDBC CallableStatement.
Calling getResultSet() will return the first ResultSet. When there are multiple result sets, the getMoreResults() method
of the CallableStatement is called to move to the next ResultSet, before getResultSet() is called to return the next
ResultSet. See the Data Access and Change chapter for the syntax for declaring the cursor.
BEGIN ATOMIC
DECLARE temp_zero DATE;
DECLARE result CURSOR WITH RETURN FOR SELECT * FROM INFORMATION_SCHEMA.TABLES FOR READ ONLY;
-- more statements ...
OPEN result;
END
Handlers
A <handler declaration> defines the course of action when an exception or warning is raised during the
execution of the compound statement. A compound statement may have one or more handler declarations. These
handlers become active when code execution enters the compound statement block and remain active in any sub-block
and statement within the block. The handlers become inactive when code execution leaves the block.
In the previous example of the new_customer procedure, if an exception is thrown during the execution of either
SQL statement, the execution of the compound statement is terminated and the exception is propagated and thrown
by the CALL statement for the procedure. All changes made by the procedure are rolled back.
A handler declaration can resolve the thrown exception within the compound statement without propagating it, and
allow the execution of the compound statement to continue.
In the example below, the UNDO handler declaration catches any exception that is thrown during the execution of the
compound statement inside the BEGIN ... END block. As it is an UNDO handler, all the changes to data performed
within the compound statement ( BEGIN ... END block) are rolled back. The procedure then returns without
throwing an exception.
CREATE PROCEDURE new_customer(IN firstname VARCHAR(50), IN lastname VARCHAR(50), IN address
VARCHAR(100))
MODIFIES SQL DATA
label_one: BEGIN ATOMIC
DECLARE temp_id INTEGER;
DECLARE UNDO HANDLER FOR SQLEXCEPTION;
INSERT INTO CUSTOMERS VALUES (DEFAULT, firstname, lastname, CURRENT_TIMESTAMP);
SET temp_id = IDENTITY();
INSERT INTO ADDRESSES VALUES (DEFAULT, temp_id, address);
END
Other types of handler are CONTINUE and EXIT handlers. A CONTINUE handler ignores any exception and proceeds
to the next statement in the block. An EXIT handler terminates execution of the block without undoing the data changes
performed by the previous (successful) statements.
The conditions can be general conditions, or specific conditions.
Among general conditions that can be specified, SQLEXCEPTION covers all exceptions, SQLWARNING covers all
warnings, while NOT FOUND covers the not-found condition, which is raised when a DELETE, UPDATE, INSERT
or MERGE statement completes without actually affecting any row.
Alternatively, one or more specific conditions can be specified (separated with commas) which apply to specific
exceptions or warnings, or classes of exceptions or warnings. A specific condition is specified with SQLSTATE
<value>, for example SQLSTATE 'W_01003' specifies the warning raised after an SQL statement is executed
which contains an aggregate function which encounters a null value during execution. An example is given below
which activates the handler when either of the two warnings is raised:
DECLARE UNDO HANDLER FOR SQLSTATE 'W_01003', 'W_01004';
The <SQL procedure statement> in the handler declaration is required by the SQL Standard but is optional in
HyperSQL. If the execution of the <SQL procedure statement> specified in the handler declaration throws an
exception itself, then it is handled by the handlers that are currently active at an enclosing (outer) BEGIN ... END
block. The <SQL procedure statement> can itself be a compound statement with its own handlers.
When a handler handles an exception condition such as the general SQLEXCEPTION or some specific SQLSTATE,
any changes made by the statement that caused the exception will be rolled back. For example, execution of a single
update statement that modifies several rows will not change any row if an exception occurs during the update of one
of the rows. The handler action affects the changes made by statements that were executed successfully before the
exception occurred.
Actions performed by different types of handler are listed below:
An UNDO handler rolls back all the data changes within the BEGIN ... END block which contains the handler
declaration. The execution of the BEGIN ... END block is considered complete. If an <SQL procedure
statement> is specified, it is executed after the roll back.
A CONTINUE handler does not roll back the data changes. It continues execution as if the last statement was
successful. If an <SQL procedure statement> is specified, it is executed before continuing execution.
An EXIT handler does not roll back the data changes. It aborts the execution of the BEGIN ... END block
which contains the handler declaration. The execution of the BEGIN ... END block is considered complete, but
unlike the UNDO handler the actions are not rolled back. If an <SQL procedure statement> is specified,
it is executed before aborting.
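A handler declaration can combine a condition with an <SQL procedure statement> that records the event. In the sketch below (the variable name error_count is hypothetical), the CONTINUE handler increments a counter instead of propagating the exception, and execution resumes with the statement after the one that failed:

```sql
BEGIN ATOMIC
  DECLARE error_count INT DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    SET error_count = error_count + 1;
  -- statements that may raise exceptions
END
```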
Assignment Statement
The SET statement is used for assignment. It can be used flexibly with rows or single values. The BNF is given below:
<assignment statement> ::= <singleton variable assignment> | <multiple variable
assignment>
<singleton variable assignment> ::= SET <assignment target> <equals operator> <assignment source>
<multiple variable assignment> ::= SET ( <target specification> [ { <comma> <target specification> }... ] ) <equals operator> <assigned row>
In the example below, the result of a single-row SELECT statement is assigned to two variables or OUT parameters:
SET (arg1, arg2) = (SELECT col1, col2 FROM atable WHERE id = 10);
SELECT : SINGLE ROW
select statement: single row
<select statement: single row> ::= SELECT <select list> INTO <select target list> <table expression>
Retrieve values from a specified row of a table and assign the fields to the specified targets. The example below has
an identical effect to the example of SET statement given above.
SELECT col1, col2 INTO arg1, arg2 FROM atable WHERE id = 10;
Formal Parameters
Each parameter of a procedure can be defined as IN, OUT or INOUT. An IN parameter is an input to the procedure
and is passed by value. The value cannot be modified inside the procedure body. An OUT parameter is a reference
for output. An INOUT parameter is a reference for both input and output. An OUT or INOUT parameter argument is
passed by reference, therefore only a dynamic parameter argument or a variable within an enclosing procedure can be
passed for it. The assignment statement is used to assign a value to an OUT or INOUT parameter.
In the example below, the procedure is declared with an OUT parameter. It assigns the auto-generated IDENTITY
value from the INSERT statement to the OUT argument.
CREATE PROCEDURE new_customer(OUT newid INT, IN firstname VARCHAR(50), IN lastname VARCHAR(50),
IN address VARCHAR(100))
MODIFIES SQL DATA
BEGIN ATOMIC
DECLARE temp_id INTEGER;
INSERT INTO CUSTOMERS VALUES (DEFAULT, firstname, lastname, CURRENT_TIMESTAMP);
SET temp_id = IDENTITY();
INSERT INTO ADDRESSES VALUES (DEFAULT, temp_id, address);
SET newid = temp_id;
END
In the SQL session, or in the body of another stored procedure, a variable must be assigned to the OUT parameter.
After the procedure call, this variable will hold the new identity value that was generated inside the procedure. If the
procedure is called directly, using the JDBC CallableStatement interface, then the value of the first, OUT argument
can be retrieved with a call to getInt(1) after calling the execute() method.
In the example below, a session variable, the_new_id is declared. After the call to new_customer, the value
for the identity is stored in the_new_id variable. This is returned via the next CALL statement. Alternatively,
the_new_id can be used as an argument to another CALL statement.
DECLARE the_new_id INT DEFAULT NULL;
CALL new_customer(the_new_id, 'John', 'Smith', '10 Parliament Square');
CALL the_new_id;
Iterated Statements
Various iterated statements can be used in routines. In these statements, the <SQL statement list> consists of
one or more SQL statements. The <search condition> can be any valid SQL expression of BOOLEAN type.
LOOP
loop statement
<loop statement> ::= [ <beginning label> <colon> ] LOOP <SQL statement list>
END LOOP [ <ending label> ]
WHILE
while statement
<while statement> ::= [ <beginning label> <colon> ] WHILE <search condition> DO
<SQL statement list> END WHILE [ <ending label> ]
REPEAT
repeat statement
<repeat statement> ::= [ <beginning label> <colon> ]
REPEAT <SQL statement list> UNTIL <search condition> END REPEAT [ <ending label> ]
In the example below, multiple rows are inserted into a table in a WHILE loop:
loop_label: WHILE my_var > 0 DO
INSERT INTO CUSTOMERS VALUES (DEFAULT, my_var);
SET my_var = my_var - 1;
IF my_var = 10 THEN SET my_var = 8; END IF;
IF my_var = 22 THEN LEAVE loop_label; END IF;
END WHILE loop_label;
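A REPEAT loop tests its <search condition> after each iteration, so the statement list always executes at least once. A minimal sketch (assuming my_var has been declared earlier, as in the WHILE example):

```sql
loop_label: REPEAT
  INSERT INTO CUSTOMERS VALUES (DEFAULT, my_var);
  SET my_var = my_var - 1;
UNTIL my_var <= 0
END REPEAT loop_label;
```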
FOR
for statement
<for statement> ::= [ <beginning label> <colon> ] FOR <query expression> DO <SQL
statement list> END FOR [ <ending label> ]
The <query expression> is a SELECT statement. When the FOR statement is executed, the query expression is executed
first and the result set is formed. Then for each row of the result set, the <SQL statement list> is executed.
What is special about the FOR statement is that all the columns of the current row can be accessed by name in the
statements in the <SQL statement list>. The columns are read only and cannot be updated. For example, if the
column names for the select statement are ID, FIRSTNAME, LASTNAME, then these can be accessed as variable
names. The column names must be unique and not equivalent to any parameter or variable name in scope.
The FOR statement is useful for computing values over multiple rows of the result set, or for calling a procedure for
some row of the result set.
In the example below, the procedure uses a FOR statement to iterate over the rows for a customer with lastname equal
to name_p. No action is performed for the first row, but for all the subsequent rows, the row is deleted from the table.
Note the following: The result set for the SELECT statement is built only once, before processing the statements inside
the FOR block begins. For all the rows of the SELECT statement apart from the first row, the row is deleted from
the customer table. The WHERE condition uses the automatic variable id, which holds the customer.id value for the
current row of the result set, to delete the row. The procedure updates the val_p argument and when it returns, the
val_p represents the total count of rows with the given lastname before the duplicates were deleted.
CREATE PROCEDURE test_proc(INOUT val_p INT, IN lastname_p VARCHAR(20))
MODIFIES SQL DATA
BEGIN ATOMIC
SET val_p = 0;
for_label: FOR SELECT * FROM customer WHERE lastname = lastname_p DO
IF val_p > 0 THEN
DELETE FROM customer WHERE customer.id = id;
END IF;
SET val_p = val_p + 1;
END FOR for_label;
END
Conditional Statements
There are two types of CASE ... WHEN statement, as well as the IF ... THEN statement.
CASE WHEN
case when statement
The simple case statement uses a <case operand> as the predicand of one or more predicates. For the right part
of each predicate, it specifies one or more SQL statements to execute if the predicate evaluates TRUE. If the ELSE
clause is not specified, at least one of the search conditions must be true, otherwise an exception is raised.
<simple case statement> ::= CASE <case operand> <simple case statement when
clause>... [ <case statement else clause> ] END CASE
<simple case statement when clause> ::= WHEN <when operand list> THEN <SQL
statement list>
<case statement else clause> ::= ELSE <SQL statement list>
A skeletal example is given below. The variable var_one is first tested for equality with 22 or 23 and if the test evaluates
to TRUE, then the INSERT statement is performed and the statement ends. If the test does not evaluate to TRUE,
the next condition test, which is an IN predicate, is performed with var_one and so on. The statement after the ELSE
clause is performed if none of the previous tests evaluates to TRUE.
CASE var_one
WHEN 22, 23 THEN INSERT INTO t_one ...;
WHEN IN (2, 4, 5) THEN DELETE FROM t_one WHERE ...;
ELSE UPDATE t_one ...;
END CASE
The searched case statement uses one or more search conditions, and for each search condition, it specifies one or
more SQL statements to execute if the search condition evaluates TRUE. An exception is raised if there is no ELSE
clause and none of the search conditions evaluates TRUE.
<searched case statement> ::= CASE <searched case statement when clause>...
[ <case statement else clause> ] END CASE
<searched case statement when clause> ::= WHEN <search condition> THEN <SQL
statement list>
The example below is partly a rewrite of the previous example, but a new condition is added:
CASE WHEN var_one = 22 OR var_one = 23 THEN INSERT INTO t_one ...;
WHEN var_one IN (2, 4, 5) THEN DELETE FROM t_one WHERE ...;
WHEN var_two IS NULL THEN UPDATE t_one ...;
ELSE UPDATE t_one ...;
END CASE
IF
if statement
The if statement is very similar to the searched case statement. The difference is that no exception is raised if there is
no ELSE clause and no search condition evaluates TRUE.
<if statement> ::= IF <search condition> <if statement then clause> [ <if
statement elseif clause>... ] [ <if statement else clause> ] END IF
<if statement then clause> ::= THEN <SQL statement list>
<if statement elseif clause> ::= ELSEIF <search condition> THEN <SQL statement
list>
<if statement else clause> ::= ELSE <SQL statement list>
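A skeletal example of the IF statement, reusing the placeholder style of the CASE examples above; if no search condition evaluates TRUE and there is no ELSE clause, execution simply continues after END IF:

```sql
IF var_one = 22 THEN
  INSERT INTO t_one ...;
ELSEIF var_one IN (2, 4, 5) THEN
  DELETE FROM t_one WHERE ...;
ELSE
  UPDATE t_one ...;
END IF;
```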
Return Statement
The RETURN statement is required and used only in functions. The body of a function is either a RETURN statement,
or a compound statement that contains a RETURN statement.
The return value of a FUNCTION can be assigned to a variable, or used inside an SQL statement.
An SQL/PSM function or an SQL/JRT function can return a single result when the function is defined as RETURNS
TABLE ( .. ).
To return a table from a SELECT statement, you should use a return statement such as RETURN TABLE( SELECT ...)
in an SQL/PSM function. For an SQL/JRT function, the Java method should return a JDBCResultSet instance.
To call a function from JDBC, use a java.sql.CallableStatement instance. The getResultSet() call can be used to
access the ResultSet returned from a function that returns a result set. If the function returns a scalar value, the returned
result has a single column and a single row which contains the scalar returned value.
RETURN
return statement
<return statement> ::= RETURN <return value>
<return value> ::= <value expression> | NULL
Return a value from an SQL function. If the function is defined as RETURNS TABLE, then the value is a TABLE
expression such as RETURN TABLE(SELECT ...) otherwise, the value expression can be any scalar expression. In
the examples below, the same function is written with or without a BEGIN END block. In both versions, the RETURN
value is a scalar expression.
CREATE FUNCTION an_hour_before_max (e_type INT)
RETURNS TIMESTAMP
RETURN (SELECT MAX(event_time) FROM atable WHERE event_type = e_type) - 1 HOUR
CREATE FUNCTION an_hour_before_max (e_type INT)
RETURNS TIMESTAMP
BEGIN ATOMIC
DECLARE max_event TIMESTAMP;
SET max_event = SELECT MAX(event_time) FROM atable WHERE event_type = e_type;
RETURN max_event - 1 HOUR;
END
Control Statements
In addition to the RETURN statement, the following statements can be used in specific contexts.
ITERATE STATEMENT
The ITERATE statement can be used to cause the next iteration of a labelled iterated statement (a WHILE, REPEAT
or LOOP statement). It is similar to the "continue" statement in C and Java.
<iterate statement> ::= ITERATE <statement label>
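In the sketch below (my_var assumed to be a declared variable), ITERATE skips the remainder of the loop body and starts the next iteration while my_var remains below 10; when the condition no longer holds, the loop is left:

```sql
loop_label: LOOP
  SET my_var = my_var + 1;
  IF my_var < 10 THEN ITERATE loop_label; END IF;
  LEAVE loop_label;
END LOOP loop_label;
```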
LEAVE STATEMENT
The LEAVE statement can be used to leave a labelled block. When used in an iterated statement, it is similar to the
"break" statement in C and Java, but it can also be used in compound statements.
<leave statement> ::= LEAVE <statement label>
Raising Exceptions
Signal and Resignal Statements allow the routine to throw an exception. If used with the IF or CASE conditions, the
exception is thrown conditionally.
SIGNAL
signal statement
The SIGNAL statement is used to throw an exception (or force an exception). When invoked, any exception handler
for the given exception is in turn invoked. If there is no handler, the exception is propagated to the enclosing context. In
its simplest form, when there is no exception handler for the given exception, routine execution is halted, any change
of data is rolled back and the routine throws the exception. By default, the message for the exception is taken from
the predefined exception message for the specified SQLSTATE. A custom message can be specified with the optional
SET clause.
<signal statement> ::= SIGNAL SQLSTATE <state value> [ SET MESSAGE_TEXT =
<character string literal> ]
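A sketch of conditional use inside a routine body (the variable name customer_id and the message are illustrative; SQLSTATE '45000' is the generic user-defined state used elsewhere in this chapter):

```sql
IF customer_id IS NULL THEN
  SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'customer id is required';
END IF;
```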
RESIGNAL
resignal statement
The RESIGNAL statement is used to throw an exception from an exception handler's <SQL procedure
statement>, in effect propagating the exception to the enclosing context without further action by the currently
active handlers. By default, the message for the exception is taken from the predefined exception message for the
specified SQLSTATE. A custom message can be specified with the optional SET clause.
<resignal statement> ::= RESIGNAL SQLSTATE <state value> [ SET MESSAGE_TEXT =
<character string literal> ]
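A sketch of a handler declaration whose action rolls back the changes and then propagates a new exception to the enclosing context (the SQLSTATE value and message text are illustrative):

```sql
DECLARE UNDO HANDLER FOR SQLEXCEPTION
  RESIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'routine failed and changes were undone';
```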
Routine Polymorphism
More than one version of a routine can be created.
For procedures, the different versions must have different parameter counts. When the procedure is called, the
parameter count determines which version is called.
For functions, the different versions can have the same or different parameter counts. When the parameter count of
two versions of a function is the same, the type of parameters must be different. When the function is called, the best
matching version of the function is used, according to both the parameter count and parameter types. The return type
of different versions of a function can be the same or different.
Two versions of an overloaded function are given below. One version accepts TIMESTAMP while the other accepts
TIME arguments.
CREATE FUNCTION an_hour_before_or_now(t TIMESTAMP)
RETURNS TIMESTAMP
IF t > CURRENT_TIMESTAMP THEN
RETURN CURRENT_TIMESTAMP;
ELSE
RETURN t - 1 HOUR;
END IF
CREATE FUNCTION an_hour_before_or_now(t TIME)
RETURNS TIME
CASE t
WHEN > CURRENT_TIME THEN
RETURN CURRENT_TIME;
WHEN >= TIME'01:00:00' THEN
RETURN t - 1 HOUR;
ELSE
RETURN CURRENT_TIME;
END CASE
It is perfectly possible to have different versions of the routine as SQL/JRT or SQL/PSM routines.
In the example below a procedure has one IN argument and two OUT arguments. The JDBC CallableStatement is
used to retrieve the values returned in the OUT arguments.
CREATE PROCEDURE get_customer(IN id INT, OUT firstname VARCHAR(50), OUT lastname VARCHAR(50))
READS SQL DATA
BEGIN ATOMIC
-- this statement uses the id to get firstname and lastname
SELECT first_name, last_name INTO firstname, lastname FROM customers WHERE cust_id = id;
END
Connection conn = ...;
CallableStatement call = conn.prepareCall("call get_customer(?, ?, ?)");
call.setInt(1, 121); // only the IN (or INOUT) arguments should be set before the call
call.execute();
String firstname = call.getString(2); // the OUT (or INOUT) arguments are retrieved after the call
String lastname = call.getString(3);
SQL/JRT procedures are discussed in the Java Language Procedures section below. Those routines are called exactly
the same way as SQL/PSM procedures, using the JDBC CallableStatement interface.
It is also possible to use a JDBC Statement or PreparedStatement object to call a procedure if the procedure arguments
are constant. If the procedure returns one or more result sets, the Statement#getMoreResults() method should be called
before retrieving the ResultSet.
Java functions are called from JDBC similarly to procedures. With functions, the getMoreResults() method should not
be called at all.
Recursive Routines
Routines can be recursive. Recursive functions are often functions that return arrays or tables. To create a recursive
routine, the routine definition must be created first with a dummy body. Then the ALTER ROUTINE statement is
used to define the routine body.
In the example below, the table contains a tree of rows each with a parent. The routine returns an array containing the id
list of all the direct and indirect children of the given parent. The routine appends the array variable id_list with the id of
each direct child and for each child appends the array with the id array of its children by calling the routine recursively.
The routine can be used in a SELECT statement as the example shows.
CREATE TABLE ptree (pid INT, id INT);
INSERT INTO ptree VALUES (NULL, 1), (1, 2), (1, 3), (2, 4), (4, 5), (3, 6), (3, 7);
-- the function is created and always throws an exception when used
CREATE FUNCTION child_arr(p_pid INT) RETURNS INT ARRAY
SPECIFIC child_arr_one
READS SQL DATA
SIGNAL SQLSTATE '45000'
-- the actual body of the function is defined, replacing the statement that throws the exception
ALTER SPECIFIC ROUTINE child_arr_one
BEGIN ATOMIC
DECLARE id_list INT ARRAY DEFAULT ARRAY[];
for_loop:
FOR SELECT id FROM ptree WHERE pid = p_pid DO
SET id_list[CARDINALITY(id_list) + 1] = id;
SET id_list = id_list || child_arr(id);
END FOR for_loop;
RETURN id_list;
END
-- the function can now be used in SQL statements
SELECT * FROM TABLE(child_arr(2))
In the next example, a table with two columns is returned instead of an array. In this example, a local table variable
is declared and filled with the children and the children's children.
CREATE FUNCTION child_table(p_pid INT) RETURNS TABLE(r_pid INT, r_id INT)
SPECIFIC child_table_one
READS SQL DATA
SIGNAL SQLSTATE '45000'
ALTER SPECIFIC ROUTINE child_table_one
BEGIN ATOMIC
DECLARE TABLE child_tree (pid INT, id INT);
for_loop:
FOR SELECT pid, id FROM ptree WHERE pid = p_pid DO
INSERT INTO child_tree VALUES pid, id;
INSERT INTO child_tree SELECT r_pid, r_id FROM TABLE(child_table(id));
END FOR for_loop;
RETURN TABLE(SELECT * FROM child_tree);
END
SELECT * FROM TABLE(child_table(1))
Infinite recursion is not possible as the routine is terminated when a given depth is reached.
In the example below, the static method named toZeroPaddedString is specified to be called when the function
is invoked.
CREATE FUNCTION zero_pad(x BIGINT, digits INT, maxsize INT)
RETURNS CHAR VARYING(100)
LANGUAGE JAVA DETERMINISTIC NO SQL
EXTERNAL NAME 'CLASSPATH:org.hsqldb.lib.StringUtil.toZeroPaddedString'
The signature of the Java method (used in the Java code but not in SQL code to create the function) is given below:
public static String toZeroPaddedString(long value, int precision, int maxSize)
The parameter and return types of the SQL routine definition must match those of the Java method according to the
table below:
SQL Type             Java Type
SMALLINT             short or Short
INT                  int or Integer
BIGINT               long or Long
NUMERIC or DECIMAL   BigDecimal
FLOAT or DOUBLE      double or Double
CHAR or VARCHAR      String
DATE                 java.sql.Date
TIME                 java.sql.Time
TIMESTAMP            java.sql.Timestamp
BINARY               byte[]
BOOLEAN              boolean or Boolean
ARRAY                java.sql.Array
TABLE                java.sql.ResultSet
If the specified Java method is not found or its parameters and return types do not match the definition, an exception
is raised. If more than one version of the Java method exists, then the one with matching parameter and return types is
found and registered. If two equivalent methods exist, the first one is registered. (This situation arises only when a
parameter is a primitive in one version and an Object in the other, e.g. long and java.lang.Long.)
When the Java method of an SQL/JRT routine returns a value, it should be within the size and precision limits defined
in the return type of the SQL-invoked routine, otherwise an exception is raised. Any difference in numeric scale is
ignored and corrected. For example, in the above example, the RETURNS CHAR VARYING(100) clause limits the
length of the strings returned from the Java method to 100. But if the number of digits after the decimal point (scale) of
a returned BigDecimal value is larger than the scale specified in the RETURNS clause, the decimal fraction is silently
truncated and no exception or warning is raised.
When the function is specified as RETURNS TABLE(...) the static Java method should return a JDBCResultSet
instance. For an example of how to construct a JDBCResultSet for this purpose, see the source code for the
org.hsqldb.jdbc.JDBCArray class.
Polymorphism
If two versions of the same SQL invoked routine with different parameter types are required, they can be defined to
point to the same method name or different method names, or even methods in different classes. In the example below,
the first two definitions refer to the same method name in the same class. In the Java class, the two static methods are
defined with corresponding method signatures.
In the third example, the Java function returns a result set and the SQL declaration includes RETURNS TABLE.
CREATE FUNCTION an_hour_before_or_now(t TIME)
RETURNS TIME
NO SQL
LANGUAGE JAVA PARAMETER STYLE JAVA
EXTERNAL NAME 'CLASSPATH:org.npo.lib.nowLessAnHour'
CREATE FUNCTION an_hour_before_or_now(t TIMESTAMP)
RETURNS TIMESTAMP
NO SQL
LANGUAGE JAVA PARAMETER STYLE JAVA
EXTERNAL NAME 'CLASSPATH:org.npo.lib.nowLessAnHour'
CREATE FUNCTION testquery(i INTEGER)
RETURNS TABLE(n VARCHAR(20), i INT)
READS SQL DATA
LANGUAGE JAVA
EXTERNAL NAME 'CLASSPATH:org.hsqldb.test.TestJavaFunctions.getQueryResult'
In the Java class the definitions are as follows. Note the definition of the getQueryResult method begins with a
java.sql.Connection parameter. This parameter is ignored when choosing the Java method. The parameter is used to
pass the current JDBC connection to the Java method.
public static java.sql.Time nowLessAnHour(java.sql.Time value) {
...
}
public static java.sql.Timestamp nowLessAnHour(java.sql.Timestamp value) {
...
}
In the next example a procedure is defined to return a result set. The signature of the Java method is also given. The
Java method assigns a ResultSet object to the zero element of the result parameter.
CREATE PROCEDURE new_customer(firstname VARCHAR(50), lastname VARCHAR(50))
MODIFIES SQL DATA
LANGUAGE JAVA
DYNAMIC RESULT SETS 1
EXTERNAL NAME 'CLASSPATH:org.hsqldb.test.Test01.newCustomerProcedure'
public static void newCustomerProcedure(String firstn, String lastn,
ResultSet[] result) throws java.sql.SQLException {
result[0] = someresultset; // dynamic result set is assigned
}
Java language procedures (SQL/JRT) are used in an identical manner to SQL/PSM routines. See the section under
SQL/PSM routines, Returning Data From Procedures, on how to use the JDBC CallableStatement interface to call the
procedure and to get the OUT and INOUT arguments and to use the ResultSet objects returned by the procedure.
In the first example, the "jdbc:default:connection" method is used. In the second example, a connection
parameter is used.
public static void procTest2(int p1, int p2,
        Integer[] p3) throws java.sql.SQLException {
    Connection conn =
        DriverManager.getConnection("jdbc:default:connection");
    // ... uses the connection and assigns the OUT value to p3[0]
}
When the stored procedure is called by the user's program, the value of the OUT parameter can be read after the call.
// a CallableStatement is used to prepare the call
// the OUT parameter contains the return value
CallableStatement c = conn.prepareCall("call proc1(1,2,?)");
c.execute();
int value = c.getInt(1);
Legacy Support
The legacy HyperSQL statement, CREATE ALIAS <name> FOR <fully qualified Java method
name> is no longer supported directly. It is supported when importing databases and translates to a special CREATE
FUNCTION <name> statement that creates the function in the PUBLIC schema.
The direct use of a Java method as a function is still supported but deprecated. It is internally translated to a special
CREATE FUNCTION statement where the name of the function is the double quoted, fully qualified name of the
Java method used.
The above example allows access to the methods in the two classes: org.me.MyClass and
org.you.YourClass together with all the classes in the org.you.lib package. Note that if the property is not
defined, no access control is performed at this level.
The user who creates a Java routine must have the relevant access privileges on the tables that are used inside the
Java method.
Once the routine has been defined, the normal database access control applies to its user. The routine can be executed
only by those users who have been granted EXECUTE privileges on it. Access to routines can be granted to users with
GRANT EXECUTE or GRANT ALL. For example GRANT EXECUTE ON myroutine TO PUBLIC.
Warning
The definition of SQL/JRT routines referencing the user's Java static methods is stored in the .script file of the database.
If the database is opened in a Java environment that does not have access to the referenced Java static methods on
its classpath, the SQL/JRT routines are not created when the database is opened. When the database is closed, the
routine definitions are lost.
There is a workaround to prevent opening the database when the static methods are not on the classpath. You can create
an SQL/PSM procedure which calls all the SQL/JRT functions and procedures in your database. The calls should have
the necessary dummy arguments. This procedure will fail to be created when the referenced methods are not accessible
and will return "Error in script file". There is no need ever to execute the procedure. However, to avoid accidental use,
you can ensure that it does not execute the SQL/JRT routines by adding a line such as IF TRUE THEN SIGNAL
SQLSTATE '45000'; before any references to the SQL/JRT routines.
The return type is user defined. This is the type of the resulting value when the function is called. Usually an aggregate
function is defined with CONTAINS SQL, as it normally does not read the data in database tables, but it is possible
to define the function with READS SQL DATA and access the database tables.
When an SQL statement that uses the aggregate function is executed, HyperSQL invokes the aggregate function, with
all the arguments set, once per row in order to compute the values. Finally, it invokes the function once more
to return the final result.
In the computation phase, the first argument is the value of the user argument as specified in the SQL statement,
computed for the current row. The second argument is the boolean FALSE. The third and fourth argument values can
have any type and are initially null, but they can be updated in the body of the function during each invocation. The
third and fourth arguments act as registers and hold their values between invocations. The return value of the function
is ignored during the computation phase (when the second parameter is FALSE).
After the computation phase, the function is invoked once more to get the final result. In this invocation, the first
argument is NULL and the second argument is boolean TRUE. The third and fourth arguments hold the values they
held at the end of the last invocation. The value returned by the function in this invocation is used as the result of the
aggregate function computation in the invoking SQL statement. In SQL queries with GROUP BY, the call sequence
is repeated separately for each separate group.
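The invocation protocol can be illustrated outside the database in plain Java. The sketch below is not HyperSQL code and the names are illustrative: a hypothetical sum aggregate receives the row value, the flag, and two register slots (modelled here as one-element arrays, standing in for the INOUT parameters) on each call, and produces the result only on the final call, when the flag is true.

```java
import java.util.Arrays;
import java.util.List;

public class AggregateProtocol {
    // One invocation of the aggregate function: value and flag are the first
    // two arguments; total and count play the role of the third and fourth
    // (register) arguments, holding their values between invocations.
    static Integer udSum(Integer value, boolean flag, Integer[] total, Integer[] count) {
        if (flag) {
            return total[0];                       // final call: produce the result
        }
        if (value != null) {                       // null values are skipped, as in SQL aggregates
            total[0] = (total[0] == null ? 0 : total[0]) + value;
            count[0] = (count[0] == null ? 0 : count[0]) + 1;
        }
        return null;                               // ignored during the computation phase
    }

    public static void main(String[] args) {
        Integer[] total = { null };                // registers start as null
        Integer[] count = { null };
        List<Integer> rows = Arrays.asList(3, 5, null, 7);
        for (Integer v : rows) {
            udSum(v, false, total, count);         // once per row, flag FALSE
        }
        Integer result = udSum(null, true, total, count); // once more, flag TRUE
        System.out.println(result);                // prints 15
    }
}
```

In a GROUP BY query, HyperSQL runs one such sequence per group, with a fresh pair of registers for each group.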
The user-defined aggregate function is used in a select statement in the example below. Only the first parameter is
visible and utilised in the select statement.
SELECT udavg(id) FROM customers GROUP BY lastname;
In the example below, the function returns an array that contains all the values passed for the aggregated column. For
use with longer arrays, you can optimise the function by defining a larger array in the first iteration, and using the
TRIM_ARRAY function on the RETURN to cut the array to size. This function is similar to the built-in ARRAY_AGG
function.
CREATE AGGREGATE FUNCTION array_aggregate(IN val VARCHAR(100), IN flag boolean, INOUT buffer
VARCHAR(100) ARRAY, INOUT counter INT)
RETURNS VARCHAR(100) ARRAY
CONTAINS SQL
BEGIN ATOMIC
IF flag THEN
RETURN buffer;
ELSE
IF val IS NULL THEN RETURN NULL; END IF;
IF buffer IS NULL THEN SET buffer = ARRAY[]; END IF;
IF counter IS NULL THEN SET counter = 0; END IF;
SET counter = counter + 1;
SET buffer[counter] = val;
RETURN NULL;
END IF;
END
The tables and data for the select statement below are created with the DatabaseManager or DatabaseManagerSwing
GUI apps. (You can find the SQL in the TestSelf.txt file in the zip). Part of the output is shown. Each row of the output
includes an array containing the values for the invoices for each customer.
SELECT ID, FIRSTNAME, LASTNAME, ARRAY_AGGREGATE(CAST(INVOICE.TOTAL AS VARCHAR(100)))
  FROM customer JOIN INVOICE ON ID = CUSTOMERID
  GROUP BY ID, FIRSTNAME, LASTNAME
11 Susanne Karsen    ARRAY['3988.20']
12 John    Peterson  ARRAY['2903.10','4382.10','4139.70','3316.50']
13 Michael Clancy    ARRAY['6525.30']
14 James   King      ARRAY['3665.40','905.10','498.00']
18 Sylvia  Clancy    ARRAY['634.20','4883.10']
20 Bob     Clancy    ARRAY['3414.60','744.60']
In the example below, the function returns a string that contains the comma-separated list of all the values passed for
the aggregated column. This function is similar to the built-in GROUP_CONCAT function.
CREATE AGGREGATE FUNCTION group_concatenate
    (IN val VARCHAR(100), IN flag BOOLEAN, INOUT buffer VARCHAR(1000), INOUT counter INT)
  RETURNS VARCHAR(1000)
  CONTAINS SQL
  BEGIN ATOMIC
    IF flag THEN
      RETURN buffer;
    ELSE
      IF val IS NULL THEN RETURN NULL; END IF;
      IF buffer IS NULL THEN SET buffer = ''; END IF;
      IF counter IS NULL THEN SET counter = 0; END IF;
      IF counter > 0 THEN SET buffer = buffer || ','; END IF;
      SET buffer = buffer || val;
      SET counter = counter + 1;
      RETURN NULL;
    END IF;
  END
The same tables and data as for the previous example are used. Part of the output is shown. Each row of the output is
a comma-separated list of names.
SELECT group_concatenate(firstname || ' ' || lastname) FROM customer GROUP BY lastname
Laura Steel,John Steel,John Steel,Robert Steel
Robert King,Robert King,James King,George King,Julia King,George King
Robert Sommer,Janet Sommer
Michael Smith,Anne Smith,Andrew Smith
Bill Fuller,Anne Fuller
Laura White,Sylvia White
Susanne Clancy,Michael Clancy,Sylvia Clancy,Bob Clancy,Susanne Clancy,John Clancy
In the Java method, no argument is defined as a primitive or primitive array type. This allows nulls to be passed to the
function. The third and fourth arguments must be defined as arrays of the JDBC non-primitive types listed in the table
in the previous section.
In the example below, a user-defined aggregate function for geometric mean is defined.
CREATE AGGREGATE FUNCTION geometric_mean(IN val DOUBLE, IN flag BOOLEAN, INOUT register DOUBLE,
INOUT counter INT)
RETURNS DOUBLE
NO SQL
LANGUAGE JAVA
EXTERNAL NAME 'CLASSPATH:org.hsqldb.test.Test01.geometricMean'
In a select statement, the function is used exactly like the built-in aggregate functions:
SELECT geometric_mean(age) FROM customer
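The Java side of such a function follows the parameter mapping described above: IN arguments map to nullable wrapper types and INOUT arguments map to one-element arrays. The body of org.hsqldb.test.Test01.geometricMean is not shown in this guide, so the sketch below is only a plausible reconstruction, assuming log-sum accumulation.

```java
public class GeometricMeanDemo {
    // Plausible shape of the Java method behind geometric_mean (the real
    // org.hsqldb.test.Test01.geometricMean may differ). IN parameters map to
    // nullable wrapper types; INOUT parameters map to one-element arrays.
    public static Double geometricMean(Double in, Boolean flag,
                                       Double[] register, Integer[] counter) {
        if (Boolean.TRUE.equals(flag)) {            // final call: produce the result
            if (register[0] == null) {
                return null;
            }
            return Math.exp(register[0] / counter[0]);
        }
        if (in == null) {
            return null;                            // ignore SQL NULL values
        }
        if (register[0] == null) {
            register[0] = 0.0;
            counter[0] = 0;
        }
        register[0] += Math.log(in);                // accumulate logs to avoid overflow
        counter[0] += 1;
        return null;
    }

    public static void main(String[] args) {
        Double[] register = new Double[1];
        Integer[] counter = new Integer[1];
        for (double v : new double[] { 2.0, 8.0 }) {
            geometricMean(v, false, register, counter);
        }
        // the geometric mean of 2 and 8 is 4
        System.out.println(geometricMean(null, true, register, counter));
    }
}
```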
Chapter 9. Triggers
Fred Toussi, The HSQL Development Group
$Revision: 3042 $
Copyright 2010-2016 Fred Toussi. Permission is granted to distribute this document without any alteration
under the terms of the HSQLDB license. Additional permission is granted to the HSQL Development Group
to distribute this document with or without alterations under the terms of the HSQLDB license.
2016-05-15 15:57:21-0400
Overview
Trigger functionality first appeared in SQL:1999. Triggers embody the live database concept, where changes in SQL
data can be monitored and acted upon. This means each time a DELETE, UPDATE or INSERT is performed, additional
actions are taken by the declared triggers. SQL Standard triggers are imperative while the relational aspects of SQL
are declarative. Triggers allow performing an arbitrary transformation of data that is being updated or inserted,
preventing inserts, updates or deletes, or performing additional operations.
Some bad examples of SQL triggers in effect enforce an integrity constraint which would better be expressed as a
CHECK constraint. A trigger that causes an exception if the value inserted in a column is negative is such an example.
A check constraint that declares CHECK VALUE >= 0 (declarative) is a better way of expressing an integrity
constraint than a trigger that throws an exception if the same condition is false.
Usage constraints cannot always be expressed by SQL's integrity constraint statements. Triggers can enforce these
constraints. For example, it is not possible to use a check constraint to prevent data inserts or deletes on weekends. A
trigger can be used to enforce the time when each operation is allowed.
A trigger is declared to activate when an UPDATE, INSERT or DELETE action is performed on a table. These actions
may be direct or indirect. Indirect actions may arise from CASCADE actions of FOREIGN KEY constraints, or from
data change statements performed on a VIEW that is based on the table.
It is possible to declare multiple triggers on a single table. The triggers activate one by one according to the order in
which they were defined. HyperSQL supports an extension to the CREATE TRIGGER statement, which allows the
user to specify the execution order of the new trigger.
A row level trigger allows access to the deleted or inserted rows. For UPDATE actions there is both an old and new
version of each row. A trigger can be specified to activate before or after the action has been performed.
BEFORE Triggers
A trigger that is declared as BEFORE DELETE cannot modify the deleted row. In other words, it cannot decide to
delete a different row by changing the column values of the row. A trigger that is declared as BEFORE INSERT and
BEFORE UPDATE can modify the values that are inserted into the database. For example, a badly formatted string
can be cleaned up by a trigger before INSERT or UPDATE.
BEFORE triggers cannot modify other tables of the database. All BEFORE triggers can veto the action by throwing
an exception.
Because BEFORE triggers can modify the inserted or updated rows, all constraint checks are performed after the
execution of the BEFORE triggers. The checks include NOT NULL constraints, length of strings, CHECK constraints,
and FOREIGN key constraints.
AFTER Triggers
AFTER triggers can perform additional data changes, for example inserting an additional row into a different table for
data audits or logs. These triggers cannot modify the rows that have been modified by the INSERT or UPDATE action.
INSTEAD OF Triggers
A trigger that is declared on a VIEW is an INSTEAD OF trigger. This term means that when an INSERT, UPDATE or
DELETE statement is executed with the view as the target, the trigger action is all that is performed, and no further
data change takes place on the view. The trigger action can include all the statements that are necessary to change
the data in the tables that underlie the view, or even other tables, such as audit tables. With the use of INSTEAD OF
triggers a read-only view can effectively become updatable or insertable-into.
An example of an INSTEAD OF trigger is one that performs an INSERT, UPDATE or DELETE on multiple tables
that are used in the view.
Trigger Properties
A trigger is declared on a specific table or view. Various trigger properties determine when the trigger is executed
and how.
Trigger Event
The trigger event specifies the type of SQL statement that causes the trigger to execute. Each trigger is specified to
execute when an INSERT, DELETE or UPDATE takes place.
The event can be filtered by two separate means. For all triggers, the WHEN clause can specify a condition against
the rows that are the subject of the trigger, together with the data in the database. For example, a trigger can activate
when the size of a table becomes larger than a certain amount. Or it can activate when the values in the rows being
modified satisfy certain conditions.
An UPDATE trigger can be declared to execute only when certain columns are the subject of an update statement. For
example, a trigger declared as AFTER UPDATE OF (datecolumn) will activate only when the UPDATE statement
that is executed includes the column, datecolumn, as one of the columns specified in its SET statements.
Granularity
A statement level trigger is performed once for the executed SQL statement and is declared as FOR EACH
STATEMENT.
A row level trigger is performed once for each row that is modified during the execution of an SQL statement and is
declared as FOR EACH ROW. Note that an SQL statement can INSERT, UPDATE or DELETE zero or more rows.
If a statement does not apply to any row, then the trigger is not executed.
If FOR EACH ROW or FOR EACH STATEMENT is not specified, then the default is FOR EACH STATEMENT.
The granularity dictates whether the REFERENCING clause can specify OLD ROW, NEW ROW, or OLD TABLE,
NEW TABLE.
A trigger declared as FOR EACH STATEMENT can only be an AFTER trigger. These triggers are useful for logging
the event that was triggered.
References to Rows
If the old rows or new rows are referenced in the SQL statements in the trigger action, they must have names. The
REFERENCING clause is used to give names to the old and new rows. The clause, REFERENCING OLD | NEW
TABLE is used for statement level triggers. The clause, REFERENCING OLD | NEW ROW is used for row level
triggers. In the SQL statements, the columns of the old or new rows are qualified with the specified names.
Trigger Condition
The WHEN clause can specify a condition for the columns of the row that is being changed. Using this clause you can
avoid unnecessary trigger activation for rows that do not need it.
For an UPDATE trigger, you can specify a list of columns of the table. If a list of columns is specified and the UPDATE
statement does not change any of those columns with its SET clauses, then the trigger is not activated at all.
In the example below, the trigger code modifies the updated data if a condition is true. This type of trigger is useful
when the application does not perform the necessary checks and modifications to data. The statement block that starts
with BEGIN ATOMIC is similar to an SQL/PSM block and can contain all the SQL statements that are allowed in
an SQL/PSM block.
CREATE TRIGGER t BEFORE UPDATE ON customer
  REFERENCING NEW AS newrow FOR EACH ROW
  BEGIN ATOMIC
    IF LENGTH(newrow.firstname) > 10 THEN
      SET newrow.firstname = TRIM(newrow.firstname);
    END IF;
  END
The Java method for a synchronous trigger (see below) can modify the values in newRow2 in a BEFORE trigger.
Such modifications are reflected in the row that is being inserted or updated. Any other modifications are ignored
by the engine.
A Java trigger that uses an instance of org.hsqldb.Trigger has two forms: synchronous (immediate) or
asynchronous (queued). By default, or when QUEUE 0 is specified, the action is performed immediately by calling
the Java method. This is similar to SQL trigger actions.
When QUEUE n is specified with n larger than 0, the engine uses a separate thread to execute the Java method, using
a queue with the size n. For certain applications, such as real-time systems this allows asynchronous notifications to
be sent by the trigger event, without introducing delays in the engine. With asynchronous triggers, an extra parameter,
NOWAIT, can be used in the trigger definition. This overcomes the queue-full condition: old calls that are still in the
queue are discarded one by one and replaced with new calls.
Java row level triggers that are declared with BEFORE trigger action time can modify the row data. Triggers with
AFTER trigger action time can modify the database, e.g. insert new rows. If the trigger needs to access the database,
the same method as in Java Language Routines SQL/JRT can be used. The Java code should connect to the URL
"jdbc:default:connection" and use this connection to access the database.
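A minimal sketch of a synchronous BEFORE row trigger in Java is shown below. To keep the example compilable without the HSQLDB jar, a local stand-in interface mirrors the shape of the org.hsqldb.Trigger fire method; the real interface's exact signature varies between releases, so treat it as an assumption.

```java
// Stand-in for org.hsqldb.Trigger so this sketch compiles without the
// HSQLDB jar; the real 2.x interface declares a fire method of a similar shape.
interface TriggerStandIn {
    void fire(int type, String trigName, String tableName,
              Object[] oldRow, Object[] newRow);
}

public class CleanupTrigger implements TriggerStandIn {
    // In a synchronous BEFORE row trigger the engine passes the pending row
    // in newRow; assigning to its elements changes the inserted/updated row.
    public void fire(int type, String trigName, String tableName,
                     Object[] oldRow, Object[] newRow) {
        if (newRow != null && newRow[1] instanceof String) {
            newRow[1] = ((String) newRow[1]).trim();  // clean a badly formatted string
        }
    }

    public static void main(String[] args) {
        Object[] row = { Integer.valueOf(1), "  Laura  " };
        new CleanupTrigger().fire(0, "TRIG", "CUSTOMER", null, row);
        System.out.println("[" + row[1] + "]");       // prints [Laura]
    }
}
```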
For sample trigger classes and test code see org.hsqldb.sample.TriggerSample,
org.hsqldb.test.TestTriggers, org.hsqldb.test.TriggerClass and the associated text script
TestTriggers.txt in the /testrun/hsqldb/ directory.
In the example below, the trigger is activated only if the update statement includes SET clauses that modify any of
the specified columns (c1, c2, c3). Furthermore, the trigger is not activated if the c2 column in the updated row is null.
CREATE TRIGGER TRIGBUR BEFORE UPDATE OF c1, c2, c3 ON testtrig
referencing NEW ROW AS newrow
FOR EACH ROW WHEN (newrow.c2 IS NOT NULL)
CALL "org.hsqldb.test.TriggerClass"
Java functions can be called from an SQL trigger. So it is possible to define the Java function to perform any external
communications that are necessary for the trigger, and use SQL code for checks and alterations to data.
CREATE TRIGGER t BEFORE UPDATE ON customer
REFERENCING NEW AS newrow FOR EACH ROW
BEGIN ATOMIC
IF LENGTH(newrow.firstname) > 10 THEN
CALL my_java_function(newrow.firstname, newrow.lastname);
END IF;
END
Trigger Creation
CREATE TRIGGER
trigger definition
<trigger definition> ::= CREATE TRIGGER <trigger name> <trigger action time>
<trigger event> ON <table name> [BEFORE <other trigger name>] [ REFERENCING
<transition table or variable list> ] <triggered action>
<trigger action time> ::= BEFORE | AFTER | INSTEAD OF
<trigger event> ::= INSERT | DELETE | UPDATE [ OF <trigger column list> ]
<trigger column list> ::= <column name list>
<triggered action> ::= [ FOR EACH { ROW | STATEMENT } ] [ <triggered when
clause> ] <triggered SQL statement>
<triggered when clause> ::= WHEN <left paren> <search condition> <right paren>
<triggered SQL statement> ::= <SQL procedure statement> | BEGIN ATOMIC { <SQL
procedure statement> <semicolon> }... END | [QUEUE <integer literal>] [NOWAIT]
CALL <HSQLDB trigger class FQN>
<transition table or variable list> ::= <transition table or variable>...
<transition table or variable> ::= OLD [ ROW ] [ AS ] <old transition variable
name> | NEW [ ROW ] [ AS ] <new transition variable name> | OLD TABLE [ AS ]
<old transition table name> | NEW TABLE [ AS ] <new transition table name>
<old transition table name> ::= <transition table name>
<new transition table name> ::= <transition table name>
<transition table name> ::= <identifier>
<old transition variable name> ::= <correlation name>
<new transition variable name> ::= <correlation name>
Trigger definition is a relatively complex statement. The combination of <trigger action time> and
<trigger event> determines the type of the trigger. Examples include BEFORE DELETE, AFTER UPDATE,
INSTEAD OF INSERT. If the optional [ OF <trigger column list> ] is specified for an UPDATE trigger,
then the trigger is activated only if one of the columns that is in the <trigger column list> is specified in
the UPDATE statement that activates the trigger.
If a trigger is FOR EACH ROW, which is the default option, then the trigger is activated for each row of the table that is
affected by the execution of an SQL statement. Otherwise, it is activated once only per statement execution. For FOR
EACH ROW triggers, there is an OLD and NEW state for each row. For UPDATE triggers, both OLD and NEW states
exist, representing the row before the update, and after the update. For DELETE triggers, there is only an OLD state.
For INSERT triggers, there is only the NEW state. If a trigger is FOR EACH STATEMENT, then a transient table is
created containing all the rows for the OLD state and another transient table is created for the NEW state.
The [ REFERENCING <transition table or variable> ] is used to give a name to the OLD and NEW
data row or table. This name can be referenced in the <SQL procedure statement> to access the data.
The optional <triggered when clause> is a search condition, similar to the search condition of a DELETE or
UPDATE statement. If the search condition is not TRUE for a row, then the trigger is not activated for that row.
The <SQL procedure statement> is limited to INSERT, DELETE, UPDATE and MERGE statements.
The <HSQLDB trigger class FQN> is a delimited identifier that contains the fully qualified name of a Java
class that implements the org.hsqldb.Trigger interface.
Early releases of HyperSQL version 2.0 do not allow the use of OLD TABLE or NEW TABLE in statement level
triggers.
TRIGGERED SQL STATEMENT
triggered SQL statement
The <triggered SQL statement> has three forms.
The first form is a single SQL procedure statement. This statement can reference the OLD ROW and NEW ROW
variables. For example, it can reference these variables and insert a row into a separate table.
The second form is enclosed in a BEGIN ... END block and can include one or more SQL procedure statements. In
BEFORE triggers, you can include SET statements to modify the inserted or updated rows. In AFTER triggers, you
can include INSERT, DELETE and UPDATE statements to change the data in other database tables. SELECT and
CALL statements are allowed in BEFORE and AFTER triggers. CALL statements in BEFORE triggers should not
modify data.
The third form specifies a call to a Java method.
Two examples of a trigger with a block are given below. The block can include elements discussed in the SQL-Invoked
Routines chapter, including local variables, loops and conditionals. You can also raise an exception in such
blocks in order to terminate the execution of the SQL statement that caused the trigger to execute.
/* the trigger throws an exception if a customer with the given last name already exists */
CREATE TRIGGER trigone BEFORE INSERT ON customer
REFERENCING NEW ROW AS newrow
FOR EACH ROW WHEN (newrow.id > 100)
BEGIN ATOMIC
IF EXISTS (SELECT * FROM CUSTOMER WHERE CUSTOMER.LASTNAME = newrow.LASTNAME) THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'already exists';
END IF;
END
/* for each row inserted into the target, the trigger inserts a row into the table used for
logging */
CREATE TRIGGER trig AFTER INSERT ON testtrig
BEFORE othertrigger
REFERENCING NEW ROW AS newrow
FOR EACH ROW WHEN (newrow.id > 1)
BEGIN ATOMIC
INSERT INTO triglog VALUES (newrow.id, newrow.data, 'inserted');
/* more statements can be included */
END
Chapter 10. Built In Functions
Overview
HyperSQL supports a wide range of built-in functions and allows user-defined functions written in SQL and Java
languages. User-defined functions are covered in the SQL-Invoked Routines chapter. If a built-in function is not
available, you can write your own using procedural SQL or Java.
Built-in aggregate functions such as SUM, MAX, ARRAY_AGG, GROUP_CONCAT are covered in the Data Access
and Change chapter, which covers SQL in general. SQL expressions such as COALESCE, NULLIF and CAST are
also discussed there.
The built-in functions fall into three groups:
SQL Standard Functions
A wide range of functions defined by SQL/Foundation are supported. SQL/Foundation functions that have no
parameter are called without empty parentheses. Functions with multiple parameters often use keywords instead of
commas to separate the parameters. Many functions are overloaded. Among these, some have one or more optional
parameters that can be omitted, while the return type of some functions is dependent upon the type of one of the
parameters. The usage of SQL Standard Functions (where they can be used) is covered more extensively in the
Data Access and Change chapter.
JDBC Open Group CLI Functions
These functions were defined as an extension to the CLI standard, which is the basis for ODBC and JDBC and
supported by many database products. JDBC supports an escape mechanism to specify function calls in SQL
statements in a manner that is independent of the function names supported by the target database engine. For
example SELECT {fn DAYOFMONTH (dateColumn)} FROM myTable can be used in JDBC and is
translated to Standard SQL as SELECT EXTRACT (DAY_OF_MONTH FROM dateColumn) FROM myTable
if a database engine supports the Standard syntax. If a database engine does not support Standard SQL, then the
translation will be different. HyperSQL supports all the function names specified in the JDBC specifications as
native functions. Therefore, there is no need to use the {fn FUNC_NAME ( ... ) } escape with HyperSQL.
If a JDBC function is supported by the SQL Standard in a different form, the SQL Standard form is the preferred
form to use.
HyperSQL Built-In Functions
Many additional built-in functions are available for some useful operations. Some of these functions return the
current setting for the session and the database. The General Functions accept arguments of different types and
return values based on comparison between the arguments.
In the BNF specification used here, words in capital letters are actual tokens. Syntactic elements such as expressions
are enclosed in angle brackets. The <left paren> and <right paren> tokens are represented with the actual
symbol. Optional elements are enclosed with square brackets ( <left bracket> and <right bracket> ).
Multiple options for a required element are enclosed with braces ( <left brace> and <right brace> ).
Alternative tokens are separated with the vertical bar ( <vertical bar> ). At the end of each function definition,
the standard which specifies the function is noted in parentheses as JDBC or HyperSQL, or the SQL/Foundation part
of the SQL Standard.
CONCAT
CONCAT ( <char value expr 1>, <char value expr 2> [, ...] )
The arguments are character strings or binary strings. Returns a string formed by concatenation of the arguments.
Minimum number of arguments is 2. Equivalent to the SQL concatenation expression <value expr 1> ||
<value expr 2> [ || ...] .
Handling of null values in the CONCAT function depends on the database property sql.concat_nulls ( SET
DATABASE SQL SYNTAX CONCAT NULLS { TRUE | FALSE } ). By default, any null value will cause the
function to return null. If the property is set false, then NULL values are replaced with empty strings.
(JDBC)
CONCAT_WS
CONCAT_WS ( <char value separator>, <char value expr 1>, <char value expr 2>
[, ...] )
The arguments are character strings. Returns a string formed by concatenation of the arguments from the second
argument, using the separator from the first argument. Minimum number of arguments is 3. Equivalent to the SQL
concatenation expression <value expr 1> || <separator> || <value expr 2> [ || ...] . The
function ignores null values and returns an empty string if all values are null. It returns null only if the separator is null.
This function is similar to a MySQL function of the same name.
(HyperSQL)
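The null-handling rules of CONCAT_WS can be mirrored in a few lines of Java (an illustrative sketch, not HSQLDB code):

```java
import java.util.StringJoiner;

public class ConcatWsDemo {
    // Mirrors the rules above: null values are skipped, all-null values give
    // an empty string, and only a null separator gives a null result.
    static String concatWs(String sep, String... values) {
        if (sep == null) return null;
        StringJoiner joiner = new StringJoiner(sep);
        for (String v : values) {
            if (v != null) joiner.add(v);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatWs(",", "a", null, "b"));         // prints a,b
        System.out.println("[" + concatWs(",", null, null) + "]"); // prints []
    }
}
```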
DIFFERENCE
DIFFERENCE ( <char value expr 1>, <char value expr 2> )
The arguments are character strings. Converts the arguments into SOUNDEX codes, and returns an INTEGER between
0-4 which indicates how similar the two SOUNDEX values are. If the values are the same, it returns 4; if the values
have no similarity, it returns 0. In-between values are returned for partial similarity. (JDBC)
INSERT
INSERT ( <char value expr 1>, <offset>, <length>, <char value expr 2> )
Returns a character string based on <char value expr 1> in which <length> characters have been removed
from the <offset> position and in their place, the whole <char value expr 2> is copied. Equivalent to SQL/
Foundation OVERLAY( <char value expr1> PLACING < char value expr2> FROM <offset>
FOR <length> ) . (JDBC)
INSTR
INSTR ( <char value expr 1>, <char value expr 2> [ , <offset> ] )
Returns as a BIGINT value the starting position of the first occurrence of <char value expr 2> within <char
value expr 1>. If <offset> is specified, the search begins with the position indicated by <offset>. If the
search is not successful, 0 is returned. Similar to the LOCATE function but the order of the arguments is reversed.
(HyperSQL)
HEXTORAW
HEXTORAW( <char value expr> )
Returns a BINARY string formed by translation of hexadecimal digits and letters in the <char value expr>.
Each character of the <char value expr> must be a digit or a letter in the A | B | C | D | E | F set. Each byte of
the returned binary string is formed by translating two hex digits into one byte. (HyperSQL)
LCASE
LCASE ( <char value expr> )
Returns a character string that is the lower case version of the <char value expr>. Equivalent to SQL/Foundation
LOWER (<char value expr>). (JDBC)
LEFT
LEFT ( <char value expr>, <count> )
Returns a character string consisting of the first <count> characters of <char value expr>. Equivalent to SQL/
Foundation SUBSTRING(<char value expr> FROM 1 FOR <count>). (JDBC)
LENGTH
LENGTH ( <char value expr> )
Returns as a BIGINT value the number of characters in <char value expr>. Equivalent to SQL/Foundation
CHAR_LENGTH(<char value expr>). (JDBC)
LOCATE
LOCATE ( <char value expr 1>, <char value expr 2> [ , <offset> ] )
Returns as a BIGINT value the starting position of the first occurrence of <char value expr 1> within <char
value expr 2>. If <offset> is specified, the search begins with the position indicated by <offset>. If the
search is not successful, 0 is returned. Without the third argument, LOCATE is equivalent to the SQL Standard function
POSITION(<char value expr 1> IN <char value expr 2>). (JDBC)
LOWER
LOWER ( <char value expr> )
Returns a character string that is the lower case version of the <char value expr>. (Foundation)
LPAD
LPAD ( <char value expr 1>, <length> [, <char value expr 2> ] )
Returns a character string with the length of <length> characters. The string contains the characters of <char
value expr 1> padded to the left with spaces. If <length> is smaller than the length of the string argument,
the argument is truncated. If the optional <char value expr 2> is specified, this string is used for padding,
instead of spaces. (HyperSQL)
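The padding and truncation behaviour can be mirrored in Java (an illustrative sketch; which end of the string is kept on truncation is an assumption here):

```java
public class LpadDemo {
    // LPAD-style padding: pad on the left with the pad string up to the target
    // length; when the input is already longer, keep only the leading
    // characters (the truncation side is an assumption, see the lead-in).
    static String lpad(String s, int length, String pad) {
        if (s.length() >= length) {
            return s.substring(0, length);
        }
        StringBuilder padding = new StringBuilder();
        while (padding.length() < length - s.length()) {
            padding.append(pad);
        }
        return padding.substring(0, length - s.length()) + s;
    }

    public static void main(String[] args) {
        System.out.println(lpad("42", 5, "0"));    // prints 00042
        System.out.println(lpad("hello", 3, " ")); // prints hel
    }
}
```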
LTRIM
LTRIM ( <char value expr> )
Returns a character string based on <char value expr> with the leading space characters removed. Equivalent
to SQL/Foundation TRIM( LEADING ' ' FROM <char value expr> ). (JDBC)
OCTET_LENGTH
OCTET_LENGTH ( <string value expression> )
The OCTET_LENGTH function can be used with character or binary strings.
Returns a BIGINT value that measures the length of the string in octets. When used with character strings,
the octet count is returned, which is twice the normal length. (Foundation)
OVERLAY
OVERLAY ( <char value expr 1> PLACING <char value expr 2>
FROM <start position> [ FOR <string length> ] [ USING CHARACTERS ] )
OVERLAY ( <binary value expr 1> PLACING <binary value expr 2>
FROM <start position> [ FOR <string length> ] )
The character version of OVERLAY returns a character string based on <char value expr 1> in which <string
length> characters have been removed from the <start position> and in their place, the whole <char
value expr 2> is copied.
The binary version of OVERLAY returns a binary string formed in the same manner as the character version.
(Foundation)
POSITION
POSITION ( <char value expr 1> IN <char value expr 2> [ USING CHARACTERS ] )
POSITION ( <binary value expr 1> IN <binary value expr 2> )
The character and binary versions of POSITION search the string value of the second argument for the first occurrence
of the first argument string. If the search is successful, the position in the string is returned as a BIGINT. Otherwise
zero is returned. (Foundation)
RAWTOHEX
RAWTOHEX( <binary value expr> )
Returns a character string composed of hexadecimal digits representing the bytes in the <binary value expr>.
Each byte of the <binary value expr> is translated into two hex digits. (HyperSQL)
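The round trip between RAWTOHEX and HEXTORAW can be sketched in Java as follows (the helper names are mine; the output letter case and tolerance for lowercase input are assumptions):

```java
public class HexDemo {
    // RAWTOHEX-style encoding: each byte becomes two hexadecimal digits.
    static String rawToHex(byte[] raw) {
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) {
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }

    // HEXTORAW-style decoding: each pair of hex digits becomes one byte.
    static byte[] hexToRaw(String hex) {
        byte[] raw = new byte[hex.length() / 2];
        for (int i = 0; i < raw.length; i++) {
            raw[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return raw;
    }

    public static void main(String[] args) {
        String hex = rawToHex(new byte[] { 0x0A, (byte) 0xFF });
        System.out.println(hex);                  // prints 0AFF
        System.out.println(hexToRaw(hex).length); // prints 2
    }
}
```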
REGEXP_MATCHES
REGEXP_MATCHES ( <char value expr>, <regular expression> )
Returns true if the <char value expr> matches the <regular expression> as a whole. The <regular
expression> is defined according to Java language regular expression rules. (HyperSQL)
REGEXP_REPLACE
REGEXP_REPLACE ( <char value expr 1>, <regular expression>, <char value expr 3> )
Replaces <char value expr 1> regions that match the <regular expression> with <char value expr
3>. The <regular expression> is defined according to Java language regular expression rules. (HyperSQL)
REGEXP_SUBSTRING
REGEXP_SUBSTRING ( <char value expr>, <regular expression> )
Returns the first region in the <char value expr> that matches the <regular expression>. The <regular
expression> is defined according to Java language regular expression rules. (HyperSQL)
REGEXP_SUBSTRING_ARRAY
REGEXP_SUBSTRING_ARRAY ( <char value expr>, <regular expression> )
Returns all the regions in the <char value expr> that match the <regular expression>. The
<regular expression> is defined according to Java language regular expression rules. Returns an array
containing the matching regions. (HyperSQL)
REPEAT
REPEAT ( <char value expr>, <count> )
Returns a character string based on <char value expr>, repeated <count> times. (JDBC)
REPLACE
REPLACE ( <char value expr 1>, <char value expr 2> [, <char value expr 3> ] )
Returns a character string based on <char value expr 1> where each occurrence of <char value expr
2> has been replaced with a copy of <char value expr 3>. If the function is called with just two arguments, the
<char value expr 3> defaults to the empty string and calling the function simply removes the occurrences of
<char value expr 2> from the first string. (JDBC)
REVERSE
REVERSE ( <char value expr> )
Returns a character string based on <char value expr> with characters in the reverse order. (HyperSQL)
RIGHT
RIGHT ( <char value expr>, <count> )
Returns a character string consisting of the last <count> characters of <char value expr>. (JDBC)
RPAD
RPAD ( <char value expr 1>, <length> [, <char value expr 2> ] )
Returns a character string with the length of <length> characters. The string begins with the characters of <char
value expr 1> padded to the right with spaces. If <length> is smaller than the length of the string argument,
the argument is truncated. If the optional <char value expr 2> is specified, this string is used for padding,
instead of spaces. (HyperSQL)
RTRIM
RTRIM ( <char value expr> )
Returns a character string based on <char value expr> with the trailing space characters removed. Equivalent
to SQL/Foundation TRIM(TRAILING ' ' FROM <character string>). (JDBC)
SOUNDEX
SOUNDEX ( <char value expr> )
Returns a four character code representing the sound of <char value expr>. The US census algorithm is used.
For example the soundex value for Washington is W252. (JDBC)
SPACE
SPACE ( <count> )
Returns a character string consisting of <count> spaces. (JDBC)
SUBSTR
{ SUBSTR | SUBSTRING } ( <char value expr>, <offset>, <length> )
The JDBC version of SQL/Foundation SUBSTRING returns a character string that consists of <length> characters
from <char value expr> starting at the <offset> position. (JDBC)
SUBSTRING
SUBSTRING ( <char value expr> FROM <start position> [ FOR <string length> ]
[ USING CHARACTERS ] )
SUBSTRING ( <binary value expr> FROM <start position> [ FOR <string length> ] )
The character version of SUBSTRING returns a character string that consists of the characters of the <char value
expr> from <start position>. If the optional <string length> is specified, only <string length>
characters are returned.
The binary version of SUBSTRING returns a binary string in the same manner. (Foundation)
TRIM
TRIM ([ [ LEADING | TRAILING | BOTH ] [ <trim character> ] FROM ] <char value
expr> )
TRIM ([ [ LEADING | TRAILING | BOTH ] [ <trim octet> ] FROM ] <binary value expr> )
The character version of TRIM returns a character string based on <char value expr>. Consecutive instances
of <trim character> are removed from the beginning, the end or both ends of the <char value expr>
depending on the value of the optional first qualifier [ LEADING | TRAILING | BOTH ]. If no qualifier
is specified, BOTH is used as default. If [ <trim character> ] is not specified, the space character is used
as default.
The binary version of TRIM returns a binary string based on <binary value expr>. Consecutive instances of
<trim octet> are removed in the same manner as in the character version. If [ <trim octet> ] is not
specified, the 0 octet is used as default. (Foundation)
TRANSLATE
TRANSLATE( <char value expr1>, <char value expr2>, <char value expr3> )
Returns a character string based on <char value expr1> source. Each character of the source is checked against
the characters in <char value expr2>. If the character is not found, it is not modified. If the character is found,
then the character in the same position in <char value expr3> is used. If <char value expr2> is longer
than <char value expr3>, then those characters at the end that have no counterpart in <char value expr3>
are dropped from the result. (HyperSQL)
-- in this example any accented character in acolumn is replaced with one without an accent
TRANSLATE( acolumn, 'ÁÇÉÍÓÚÀÈÌÒÙÂÊÎÔÛÃÕËÜáçéíóúàèìòùâêîôûãõëü',
'ACEIOUAEIOUAEIOUAOEUaceiouaeiouaeiouaoeu');
188
Built In Functions
UCASE
UCASE ( <char value expr> )
Returns a character string that is the upper case version of the <char value expr>. Equivalent to SQL/Foundation
UPPER( <char value expr> ). (JDBC)
UPPER
UPPER ( <char value expr> )
Returns a character string that is the upper case version of the <char value expr>. (Foundation)
Numeric Functions
ABS
ABS ( <num value expr> | <interval value expr> )
Returns the absolute value of the argument as a value of the same type. (JDBC and Foundation)
ACOS
ACOS ( <num value expr> )
Returns the arc-cosine of the argument in radians as a value of DOUBLE type. (JDBC)
ASIN
ASIN ( <num value expr> )
Returns the arc-sine of the argument in radians as a value of DOUBLE type. (JDBC)
ATAN
ATAN ( <num value expr> )
Returns the arc-tangent of the argument in radians as a value of DOUBLE type. (JDBC)
ATAN2
ATAN2 ( <num value expr 1>, <num value expr 2> )
The <num value expr 1> and <num value expr 2> express the x and y coordinates of a point. Returns
the angle, in radians, representing the angle coordinate of the point in polar coordinates, as a value of DOUBLE type.
(JDBC)
CEILING
{ CEIL | CEILING } ( <num value expr> )
Returns the smallest integer greater than or equal to the argument. If the argument is exact numeric then the result is
exact numeric with a scale of 0. If the argument is approximate numeric, then the result is of DOUBLE type. (JDBC
and Foundation)
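For example:

CEILING ( 2.3 )
3
CEIL ( -2.3 )
-2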
BITAND
BITAND ( <num value expr 1>, <num value expr 2> )
ROUND
ROUND ( <num value expr>, <int value expr> )
The <num value expr> is of the DOUBLE type or DECIMAL type. The function returns a DOUBLE or
DECIMAL value which is the value of the argument rounded to <int value expr> places right of the decimal
point. If <int value expr> is negative, the first argument is rounded to <int value expr> places to the
left of the decimal point.
This function rounds values ending with .5 or larger away from zero for DECIMAL arguments and results. When
the value ends with .5 or larger and the argument and result are DOUBLE, it rounds the value towards the closest
even value.
The datetime version is discussed in the next section. (JDBC)
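For example:

ROUND ( 123.456, 2 )
123.46
ROUND ( 123.456, -1 )
120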
SIGN
SIGN ( <num value expr> )
Returns an INTEGER, indicating the sign of the argument. If the argument is negative then -1 is returned. If it is equal
to zero then 0 is returned. If the argument is positive then 1 is returned. (JDBC)
SIN
SIN ( <num value expr> )
Returns the sine of the argument (an angle expressed in radians) as a value of DOUBLE type. (JDBC)
SQRT
SQRT ( <num value expr> )
Returns the square root of the argument as a value of DOUBLE type. (JDBC and Foundation)
TAN
TAN ( <num value expr> )
Returns the tangent of the argument (an angle expressed in radians) as a value of DOUBLE type. (JDBC)
TO_NUMBER
TO_NUMBER ( <char value expr> )
Performs a cast from character to DECIMAL number. The character string must consist of digits and can have a
decimal point. Use the SQL Standard CAST expression instead of this non-standard function. (HyperSQL)
TRUNC
TRUNC ( <num value expr> [, <int value expr>] )
This is a similar to the TRUNCATE function when the first argument is numeric. If the second argument is omitted,
zero is used in its place.
The datetime version is discussed in the next section. (HyperSQL)
TRUNCATE
SESSIONTIMEZONE()
Same as SESSION_TIMEZONE. (HyperSQL)
DATABASE_TIMEZONE
DATABASE_TIMEZONE()
Returns the time zone for the database engine. This is based on where the database server process is located. Returns
an INTERVAL HOUR TO MINUTE value. (HyperSQL)
DBTIMEZONE
DBTIMEZONE()
Same as DATABASE_TIMEZONE. (HyperSQL)
CURTIME
CURTIME ()
This function is equivalent to LOCALTIME. (JDBC)
SYSDATE
SYSDATE
This function is equivalent to LOCALTIMESTAMP. (HyperSQL)
SYSTIMESTAMP
SYSTIMESTAMP
This no-arg function is equivalent to CURRENT_TIMESTAMP and is enabled in ORA syntax mode only. (HyperSQL)
TODAY
TODAY
This no-arg function is equivalent to CURRENT_DATE. (HyperSQL)
The <datetime value expr> is of DATE or TIMESTAMP type. This function returns the DAY number since
the first day of the calendar. The first day is numbered 1. (HyperSQL)
HOUR
HOUR ( <datetime value expr> )
This function is equivalent to EXTRACT ( HOUR FROM ... ) Returns an integer value in the range of 0-23.
(JDBC)
MINUTE
MINUTE ( <datetime value expr> )
This function is equivalent to EXTRACT ( MINUTE FROM ... ) Returns an integer value in the range of
0 - 59. (JDBC)
MONTH
MONTH ( <datetime value expr> )
This function is equivalent to EXTRACT ( MONTH FROM ... ) Returns an integer value in the range of 1-12.
(JDBC)
MONTHNAME
MONTHNAME ( <datetime value expr> )
This function is equivalent to EXTRACT ( NAME_OF_MONTH FROM ... ) Returns a string in the range of
January - December. (JDBC)
QUARTER
QUARTER ( <datetime value expr> )
This function is equivalent to EXTRACT ( QUARTER FROM ... ) Returns an integer in the range of 1 - 4. (JDBC)
SECOND
SECOND ( <datetime value expr> )
This function is equivalent to EXTRACT ( SECOND FROM ... ) Returns an integer or decimal in the range of
0 - 59, with the same precision as the <datetime value expr>. (JDBC)
SECONDS_SINCE_MIDNIGHT
SECONDS_SINCE_MIDNIGHT ( <datetime value expr> )
This function is equivalent to EXTRACT ( SECONDS_SINCE_MIDNIGHT FROM ... ) Returns an integer
in the range of 0 - 86399. (HyperSQL)
UNIX_MILLIS
UNIX_MILLIS ( [ <datetime value expression> ] )
This function returns a BIGINT value. With no parameter, it returns the number of milliseconds since 1970-01-01.
With a DATE or TIMESTAMP parameter, it converts the argument into number of milliseconds since 1970-01-01.
(HyperSQL)
UNIX_TIMESTAMP
UNIX_TIMESTAMP ( [ <datetime value expression> ] )
This function returns a BIGINT value. With no parameter, it returns the number of seconds since 1970-01-01.
With a DATE or TIMESTAMP parameter, it converts the argument into number of seconds since 1970-01-01. The
TIMESTAMP ( <num value expression> ) function returns a TIMESTAMP from a Unix timestamp. (HyperSQL)
WEEK
WEEK ( <datetime value expr> )
This function is equivalent to EXTRACT ( WEEK_OF_YEAR FROM ... ) Returns an integer in the range
of 1 - 54. (JDBC)
YEAR
YEAR ( <datetime value expr> )
This function is equivalent to EXTRACT ( YEAR FROM ... ) Returns an integer in the range of 1 - 9999. (JDBC)
EXTRACT
EXTRACT ( <extract field> FROM <extract source> )
<extract field> ::= YEAR | MONTH | DAY | HOUR | MINUTE | DAY_OF_WEEK | WEEK_OF_YEAR
| QUARTER | DAY_OF_YEAR | DAY_OF_MONTH |
TIMEZONE_HOUR | TIMEZONE_MINUTE | SECOND | SECONDS_SINCE_MIDNIGHT |
DAY_NAME | MONTH_NAME
<extract source> ::= <datetime value expr> | <interval value expr>
The EXTRACT function returns a field or element of the <extract source>. The <extract source> is a
datetime or interval expression. The type of the return value is BIGINT for most of the <extract field> options.
The exception is SECOND, where a DECIMAL value is returned which has the same precision as the datetime or
interval expression. The field values DAY_NAME or MONTH_NAME result in a character string. When MONTH_NAME
is specified, a string in the range January - December is returned. When DAY_NAME is specified, a string in the range
Sunday - Saturday is returned.
If the <extract source> is FROM <datetime value expr>, different groups of <extract field> can
be used depending on the data type of the expression. The TIMEZONE_HOUR | TIMEZONE_MINUTE options are
valid only for TIME WITH TIMEZONE and TIMESTAMP WITH TIMEZONE data types. The HOUR | MINUTE
| SECOND | SECONDS_SINCE_MIDNIGHT options are valid for TIME and TIMESTAMP types. The rest of the fields
are valid for DATE and TIMESTAMP types.
If the <extract source> is FROM <interval value expr>, the <extract field> must be one of the
fields of the INTERVAL type of the expression. The YEAR | MONTH options may be valid for INTERVAL types
based on months. The DAY | HOUR | MINUTE | SECOND | SECONDS_SINCE_MIDNIGHT options may be valid
for INTERVAL types based on seconds. For example, DAY | HOUR | MINUTE are the only valid fields for the
INTERVAL DAY TO MINUTE data type. (Foundation with HyperSQL extensions)
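For example:

EXTRACT ( MONTH FROM DATE '2008-11-22' )
11
EXTRACT ( SECOND FROM TIME '20:30:40' )
40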
LAST_DAY
LAST_DAY ( <datetime value expr> )
Returns the last day of the month for the given <datetime value expr>. The returned value preserves the year,
month, hour, minute and second fields of the timestamp. The type of the result is always TIMESTAMP(0). (HyperSQL)
VALUES LAST_DAY ( TIMESTAMP '2012-02-14 12:30:44')
C1
-------------------
2012-02-29 12:30:44
MONTHS_BETWEEN
MONTHS_BETWEEN ( <datetime value expr1> , <datetime value expr2> )
Returns a number (not an INTERVAL), possibly with a fraction, representing the number of months between two
dates. If both dates have the same day of month, or are on the last day of the month, the result is an exact numeric.
Otherwise, the fraction is calculated based on 31 days per month. You can cast the resulting value into INTERVAL
MONTH and use it for datetime arithmetic. (HyperSQL)
VALUES MONTHS_BETWEEN ( TIMESTAMP '2013-02-14 12:30:44', TIMESTAMP '2012-01-04 12:30:44')
C1
-----------------------------------
13.32258064516129000000000000000000
TIMESTAMPADD
TIMESTAMPADD ( <tsi datetime field>, <numeric value expression>, <datetime value
expr>)
TIMESTAMPDIFF
TIMESTAMPDIFF ( <tsi datetime field>, <datetime value expr 1>, <datetime value
expr 2>)
<tsi datetime field> ::= SQL_TSI_FRAC_SECOND | SQL_TSI_MILLI_SECOND |
SQL_TSI_SECOND | SQL_TSI_MINUTE | SQL_TSI_HOUR | SQL_TSI_DAY | SQL_TSI_WEEK |
SQL_TSI_MONTH | SQL_TSI_QUARTER | SQL_TSI_YEAR
HyperSQL supports full SQL Standard datetime features. It supports adding integers representing units of time directly
to datetime values using the arithmetic plus operator. It also supports subtracting one <datetime value expr>
from another in the given units of date or time using the minus operator. An example of <datetime value expr>
+ <numeric value expression> <datetime field> is LOCALTIMESTAMP + 5 DAY. An example
of ( <datetime value expr> - <datetime value expr> ) <datetime field> is
(CURRENT_DATE - DATE '2008-08-8') MONTH which returns the number of calendar months between
the two dates.
The two JDBC functions, TIMESTAMPADD and TIMESTAMPDIFF perform a similar function to the above SQL
expressions. The <tsi datetime field> names are keywords and are different from those used in the EXTRACT
functions. These names are valid for use only when calling these two functions. With TIMESTAMPDIFF, the names
indicate the unit of time used to compute the difference between two datetime fields. With TIMESTAMPADD they
represent the unit of time used for the <numeric value expression>. The unit of time for each name is self-explanatory.
In the case of SQL_TSI_FRAC_SECOND, the unit is nanosecond.
The return value for TIMESTAMPADD is of the same type as the datetime argument used. The return type
for TIMESTAMPDIFF is always BIGINT, regardless of the type of arguments. The two datetime arguments of
TIMESTAMPDIFF should be of the same type. The TIME type is not supported for the arguments to these functions.
TIMESTAMPDIFF is evaluated as <datetime value expr 2> - <datetime value expr 1>. (JDBC)
TIMESTAMPADD ( SQL_TSI_MONTH, 3, DATE '2008-11-22' )
TIMESTAMPDIFF ( SQL_TSI_HOUR, TIMESTAMP '2008-11-20 20:30:40', TIMESTAMP '2008-11-21 21:30:40' )
DATE_ADD
DATE_ADD ( <datetime value expr> , <interval value expr> )
DATE_SUB
DATE_SUB ( <datetime value expr> , <interval value expr> )
These functions are equivalent to arithmetic addition and subtraction, <datetime value expr> + <interval value expr>
and <datetime value expr> - <interval value expr>. The functions are provided for compatibility with other databases.
The supported interval units are the standard SQL interval unit listed in other chapters of this guide. The TIME type
is supported for the argument to these functions. (HyperSQL)
DATE_ADD ( DATE '2008-11-22', INTERVAL 3 MONTH )
DATE_SUB ( TIMESTAMP '2008-11-22 20:30:40', INTERVAL 20 HOUR )
DATEADD
DATEADD ( <field>, <numeric value expr>, <datetime value expr> )
DATEDIFF
DATEDIFF ( <field>, <datetime value expr 1>, <datetime value expr 2> )
<field> ::= 'yy' | 'year' | 'mm' | 'month' | 'dd' | 'day' | 'hh' | 'hour' |
'mi' | 'minute' | 'ss' | 'second' | 'ms' | 'millisecond'
The DATEADD and DATEDIFF functions are alternatives to TIMESTAMPADD and TIMESTAMPDIFF, with fewer
available field options. The field names are specified as strings, rather than keywords. The fields translate to YEAR,
MONTH, DAY, HOUR, MINUTE, SECOND and MILLISECOND. DATEDIFF is evaluated as <datetime value expr
2> - <datetime value expr 1>. (HyperSQL)
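For example:

DATEADD ( 'month', 3, DATE '2008-11-22' )
DATE '2009-02-22'
DATEDIFF ( 'hour', TIMESTAMP '2008-11-20 20:30:40', TIMESTAMP '2008-11-21 21:30:40' )
25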
ROUND
ROUND ( <datetime value expr> [ , <char value expr> ] )
The <datetime value expr> is of DATE, TIME or TIMESTAMP type. The <char value expr> is a
format string for YEAR, MONTH, WEEK OF YEAR, DAY, HOUR, MINUTE or SECOND as listed in the table for
TO_CHAR and TO_DATE format elements (see below). The datetime value is rounded up or down after the specified
field and the rest of the fields to the right are set to one for MONTH and DAY, or zero, for the rest of the fields.
For example rounding a timestamp value on the DAY field results in midnight the same date or midnight the next
day if the time is at or after 12 noon. If the second argument is omitted, the datetime value is rounded to the nearest
day. (HyperSQL)
TRUNC
TRUNC ( <datetime value expr> [ , <char value expr> ] )
Similar to the ROUND function, the <datetime value expr> is of DATE, TIME or TIMESTAMP type. The <char
value expr> is a format string (such as 'YY' or 'MM') for YEAR, MONTH, WEEK OF YEAR, DAY, HOUR,
MINUTE or SECOND as listed in the table for TO_CHAR and TO_DATE format elements (see below). The datetime
value is truncated after the specified field and the rest of the fields to the right are set to one for MONTH and DAY, or
zero, for the rest of the fields. For example applying TRUNC to a timestamp value on the DAY field results in midnight
the same date. Examples of ROUND and TRUNC functions are given below. If the second argument is omitted, the
datetime value is truncated to midnight the same date. (HyperSQL)
ROUND ( TIMESTAMP'2008-08-01 20:30:40', 'YYYY' )
TIMESTAMP '2009-01-01 00:00:00'
TRUNC ( TIMESTAMP'2008-08-01 20:30:40', 'YYYY' )
TIMESTAMP '2008-01-01 00:00:00'
When the single argument is a formatted date or timestamp string, it is translated to a TIMESTAMP.
When two arguments are used, the first argument is the date part and the second argument is the time part of the
returned TIMESTAMP value. An example, including the result, is given below:
TIMESTAMP ( '2008-11-22', '20:30:40' )
TIMESTAMP '2008-11-22 20:30:40.000000'
TIMESTAMP_WITH_ZONE
TIMESTAMP_WITH_ZONE ( <num value expr> )
TIMESTAMP_WITH_ZONE ( <char value expr> )
This function translates the arguments into a TIMESTAMP WITH TIME ZONE value.
When the single argument is a numeric value, it is interpreted as a Unix timestamp in seconds.
When the single argument is TIMESTAMP, it is converted to TIMESTAMP WITH TIME ZONE.
The time zone of the returned value is the local time zone at the time of the timestamp argument. This accounts for
daylight saving time. For example, if the local time zone was +4:00 at the time of the given Unix timestamp, the
returned value is the local timestamp at that time, with time zone +4:00.
TO_CHAR
TO_CHAR ( <datetime value expr>, <char value expr> )
This function formats a datetime or numeric value to the format given in the second argument. The format string can
contain pattern elements from the list given below, plus punctuation and space characters. An example, including the
result, is given below:
TO_CHAR ( TIMESTAMP'2008-02-01 20:30:40', 'YYYY BC MONTH, DAY HH' )
2008 AD February, Friday 8
TO_CHAR ( TIMESTAMP'2008-02-01 20:30:40', '"The Date is" YYYY BC MONTH, DAY HH' )
The Date is 2008 AD February, Friday 8
characters. The pattern should contain all the necessary fields to construct a date, including, year, month, day of month,
etc. The returned timestamp can then be cast into DATE or TIME types if necessary. An example, including the result,
is given below:
TO_TIMESTAMP ( '22/11/2008 20:30:40', 'DD/MM/YYYY HH:MI:SS' )
TIMESTAMP '2008-11-22 20:30:40.000000'
The format strings that can be used for TO_DATE and TO_TIMESTAMP are more restrictive than those used for
TO_CHAR, because the format string must contain the elements needed to build a full DATE or TIMESTAMP value.
For example, you cannot use the 'WW', 'W', 'HH' or 'HH12' format elements with TO_DATE or TO_TIMESTAMP.
The format is internally translated to a java.text.SimpleDateFormat format string. Unsupported format
strings should not be used. With TO_CHAR, you can include a string literal inside the format string by enclosing it
in double quotes. (HyperSQL)
The supported format components are all uppercase as follows:
RRRR
4-digit year
YYYY
4-digit year
IYYY
4-digit year, corresponding to ISO week of the year. The reported year for the last
few days of the calendar year may be the next year.
YY
2-digit year
IY
2-digit year, corresponding to ISO week of the year
MM
Month (01-12)
MON
Short name of month
MONTH
Name of month
WW
Week of year (1-53) where week 1 starts on the first day of the year and continues
to the seventh day of the year (not a calendar week).
W
Week of month (1-5) where week 1 starts on the first day of the month and ends
on the seventh (not a calendar week).
IW
Week of year (1-52 or 1-53) based on the ISO standard. Week starts on Monday.
The first week may start near the end of previous year.
DAY
Name of day.
DD
Day of month (01-31).
DDD
Day of year (1-366).
DY
Short name of day.
HH
Hour of day (01-12).
HH12
Hour of day (01-12).
HH24
Hour of day (00-23).
MI
Minute (00-59).
SS
Second (00-59).
FF
Fractional seconds.
Array Functions
Array functions are specialised functions with ARRAY parameters or return values. For the ARRAY_AGG aggregate
function, see the Data Access and Change chapter.
CARDINALITY
CARDINALITY( <array value expr> )
Returns the element count for the given array argument. (Foundation)
MAX_CARDINALITY
MAX_CARDINALITY( <array value expr> )
Returns the maximum allowed element count for the given array argument. (Foundation)
POSITION_ARRAY
POSITION_ARRAY( <value expression> IN <array value expr> [ FROM <int value
expr> ] )
Returns the position of the first match for the <value expression> in the array. By default the search starts from
the beginning of the array. The optional <int value expr> specifies the start position. Positions are counted
from 1. Returns zero if no match is found. (HyperSQL)
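For example:

POSITION_ARRAY ( 5 IN ARRAY[2,5,7,5] )
2
POSITION_ARRAY ( 5 IN ARRAY[2,5,7,5] FROM 3 )
4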
SORT_ARRAY
SORT_ARRAY( <array value expr> [ { ASC | DESC } ] [ NULLS { FIRST | LAST } ] )
Returns a sorted copy of the array. By default, sort is performed in ascending order and NULL elements are sorted
first. (HyperSQL)
TRIM_ARRAY
TRIM_ARRAY( <array value expr>, <num value expr> )
Returns a new array that contains the elements of the <array value expr> minus the number of elements
specified by the <num value expr>. Elements are discarded from the end of the array. (Foundation)
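For example:

TRIM_ARRAY ( ARRAY[1,2,3,4], 2 )
ARRAY[1,2]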
SEQUENCE_ARRAY
SEQUENCE_ARRAY( <value expr 1>, <value expr 2>, <value expr 3> )
Returns a new array that contains a sequence of values. The <value expr 1> is the lower bound of the range.
The <value expr 2> is the upper bound of the range. The <value expr 3> is the increment. The elements of
the array are within the inclusive range. The first element is <value expr 1> and each subsequent element is the
sum of the previous element and the increment. If the increment is zero, only the first element is returned. When the
increment is negative, the lower bound should be larger than the upper bound. The arguments can all be of number
types, or a datetime range with an interval for the third argument. (HyperSQL)
In the examples below, a number sequence and a date sequence are shown. The UNNEST table expression is used
to form a table from the array.
SEQUENCE_ARRAY(0, 100, 5)
ARRAY[0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100]
I
-
1
2
3
4
5
6
7
General Functions
General functions can take different types of arguments. Some General Functions accept a variable number of
arguments.
Also see the Data Access and Change chapter for SQL expressions that are similar to functions, for example CAST
and NULLIF.
CASEWHEN
CASEWHEN( <boolean value expr>, <value expr 2>, <value expr 3> )
If the <boolean value expr> is true, returns <value expr 2> otherwise returns <value expr 3>.
Use a CASE WHEN expression instead for more extensive capabilities and options.
CASE WHEN is documented in the Data Access and Change chapter. (HyperSQL)
COALESCE
COALESCE( <value expr 1>, <value expr 2> [, ...] )
Returns <value expr 1> if it is not null, otherwise returns <value expr 2> if not null, and so on. The
types of the arguments must be comparable. (Foundation)
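For example:

COALESCE ( NULL, NULL, 'fallback' )
'fallback'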
CONVERT
CONVERT ( <value expr> , <data type> )
<data type> ::= { SQL_BIGINT | SQL_BINARY | SQL_BIT |SQL_BLOB | SQL_BOOLEAN
| SQL_CHAR | SQL_CLOB | SQL_DATE | SQL_DECIMAL | SQL_DATALINK |SQL_DOUBLE |
SQL_FLOAT | SQL_INTEGER | SQL_LONGVARBINARY | SQL_LONGNVARCHAR | SQL_LONGVARCHAR
| SQL_NCHAR | SQL_NCLOB | SQL_NUMERIC | SQL_NVARCHAR | SQL_REAL | SQL_ROWID
| SQL_SQLXML | SQL_SMALLINT | SQL_TIME | SQL_TIMESTAMP | SQL_TINYINT |
SQL_VARBINARY | SQL_VARCHAR} [ ( <precision, length or scale parameters> ) ]
The CONVERT function is a JDBC escape function, equivalent to the SQL standard CAST expression. It converts
the <value expr> into the given <data type> and returns the value. The <data type> options are synthetic
names made by prefixing type names with SQL_. Some of the <data type> options represent valid SQL types,
but some are based on non-standard type names, namely { SQL_LONGNVARCHAR | SQL_LONGVARBINARY |
SQL_LONGVARCHAR | SQL_TINYINT }. None of the synthetic names can be used in any other context than
the CONVERT function.
The definition of CONVERT in the JDBC Standard does not allow the precision, scale or length to be specified. This
is required by the SQL standard for BINARY, BIT, BLOB, CHAR, CLOB, VARBINARY and VARCHAR types and
is often needed for DECIMAL and NUMERIC. Defaults are used for precision.
HyperSQL also allows the use of real type names (without the SQL_ prefix). In this usage, HyperSQL allows the use
of precision, scale or length for the type definition when they are valid for the type definition.
When MS SQL Server compatibility mode is on, the parameters of CONVERT are switched and only the real type
names with required precision, scale or length are allowed. (JDBC)
DECODE
DECODE( <value expr main>, <value expr match 1>, <value expr result 1> [...,]
[, <value expr default>] )
DECODE takes at least 3 arguments. The <value expr main> is compared with <value expr match 1>
and if it matches, <value expr result 1> is returned. If there are additional pairs of <value expr match
n> and <value expr result n>, comparison is repeated until a match is found and the result is returned. If no
match is found, the <value expr default> is returned if it is specified, otherwise NULL is returned. The type
of the return value is a combination of the types of the <value expr result ... > arguments. (HyperSQL)
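For example:

DECODE ( 2, 1, 'one', 2, 'two', 'other' )
'two'
DECODE ( 9, 1, 'one', 2, 'two', 'other' )
'other'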
GREATEST
GREATEST( <value expr 1>, [<value expr ...>, ...] )
The GREATEST function takes one or more arguments. It compares the arguments with each other and returns the
greatest argument. The return type is the combined type of the arguments. Arguments can be of any type, so long as
they are comparable. (HyperSQL)
IFNULL
ISNULL
IFNULL | ISNULL ( <value expr 1>, <value expr 2> )
Returns <value expr 1> if it is not null, otherwise returns <value expr 2>. The type of the return value is
the type of <value expr 1>. Almost equivalent to SQL Standard COALESCE(<value expr 1>, <value
expr 2>) function, but without type modification. (JDBC)
LEAST
LEAST( <value expr 1>, [<value expr ...>, ...] )
The LEAST function takes one or more arguments. It compares the arguments with each other and returns the smallest
argument. The return type is the combined type of the arguments. Arguments can be of any type, so long as they are
comparable. (HyperSQL)
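For example:

GREATEST ( 21, 3, 10 )
21
LEAST ( 'acd', 'abc', 'bcd' )
'abc'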
LOAD_FILE
LOAD_FILE ( <char value expr 1> [, <char value expr 2>] )
Returns a BLOB or CLOB containing the URL or file path specified in the first argument. If used with a single
argument, the function returns a BLOB. If used with two arguments, the function returns a CLOB and the second
argument is the character encoding of the file.
The file path is interpreted the same way as a TEXT TABLE source file location. The hsqldb.allow_full_path
system property must be set true in order to access files outside the directory structure of the database files.
(HyperSQL)
NULLIF
NULLIF( <value expr 1>, <value expr 2> )
Returns <value expr 1> if it is not equal to <value expr 2>, otherwise returns null. The type of both
arguments must be the same. This function is a shorthand for a specific CASE expression. (Foundation)
NVL
NVL( <value expr 1>, <value expr 2> )
Returns <value expr 1> if it is not null, otherwise returns <value expr 2>. The type of the return value is
the type of <value expr 1>. For example, if <value expr 1> is an INTEGER column and <value expr
2> is a DOUBLE constant, the return type is cast into INTEGER. This function is similar to IFNULL. (HyperSQL)
NVL2
NVL2( <value expr 1>, <value expr 2>, <value expr 3> )
If <value expr 1> is not null, returns <value expr 2>, otherwise returns <value expr 3>. The type of
the return value is the type of <value expr 2> unless it is null. (HyperSQL)
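For example (acolumn is a hypothetical nullable column):

NVL2 ( 'x', 1, 2 )
1
-- with a nullable column:
NVL2 ( acolumn, 'has value', 'no value' )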
UUID
UUID ( [ { <char value expr> | <binary value expr> } ] )
With no parameter, this function returns a new UUID value as a 16 byte binary value. With a UUID hexadecimal string
argument, it returns the 16 byte binary value of the UUID. With a 16 byte binary argument, it returns the formatted
UUID character representation. (HyperSQL)
System Functions
CRYPT_KEY
CRYPT_KEY( <value expr 1>, <value expr 2> )
Returns a binary string representation of a cryptography key for the given cipher and cryptography provider. The
cipher specification is specified by <value expr 1> and the provider by <value expr 2>. To use the default
provider, specify null for <value expr 2>. (HyperSQL)
DIAGNOSTICS
DIAGNOSTICS ( ROW_COUNT )
This is a convenience function for use instead of the GET DIAGNOSTICS ... statement. The argument specifies
the name of the diagnostics variable. Currently the only supported variable is the ROW_COUNT variable. The function
returns the row count returned by the last executed statement. The return value is 0 after most statements. Calling this
function immediately after executing an INSERT, UPDATE, DELETE or MERGE statement returns the row count
for the last statement, as it is returned by the JDBC statement. (HyperSQL)
IDENTITY
IDENTITY ()
Returns the last IDENTITY value inserted into a row by the current session. The statement, CALL IDENTITY() can be
made after an INSERT statement that inserts a row into a table with an IDENTITY column. The CALL IDENTITY()
statement returns the last IDENTITY value that was inserted into a table by the current session. Each session manages
this function call separately and is not affected by inserts in other sessions. The statement can be executed as a direct
statement or a prepared statement. (HyperSQL)
DATABASE
DATABASE ()
Returns the file name (without directory information) of the database. (JDBC)
DATABASE_NAME
DATABASE_NAME ()
Returns the database name. This name is a 16 character, uppercase string. It is generated as a string based on the
timestamp of the creation of the database, for example HSQLDB32438AEAFB. The name can be redefined by an
admin user but the new name must be all uppercase and 16 characters long. This name is used in log messages with
external logging frameworks. (HyperSQL)
DATABASE_VERSION
DATABASE_VERSION ()
Returns the full version string for the database engine. For example, 2.0.1. (JDBC)
USER
USER ()
Equivalent to the SQL function CURRENT_USER. (JDBC)
CURRENT_USER
CURRENT_USER
CURRENT_ROLE
CURRENT_ROLE
SESSION_USER
SESSION_USER
SYSTEM_USER
SYSTEM_USER
CURRENT_SCHEMA
CURRENT_SCHEMA
CURRENT_CATALOG
CURRENT_CATALOG
These functions return the named current session attribute. They are all SQL Standard functions.
The CURRENT_USER is the user that connected to the database, or a user subsequently set by the SET
AUTHORIZATION statement.
SESSION_USER is the same as CURRENT_USER.
SYSTEM_USER is the user that connected to the database. It is not changed with any command until the session
is closed.
CURRENT_SCHEMA is the default schema of the user, or a schema subsequently set by the SET SCHEMA command.
CURRENT_CATALOG is always the same within a given HyperSQL database and indicates the name of the catalog.
IS_AUTOCOMMIT
IS_AUTOCOMMIT()
Returns TRUE if the session is in autocommit mode. (HyperSQL)
IS_READONLY_SESSION
IS_READONLY_SESSION()
Returns TRUE if the session is in read only mode. (HyperSQL)
IS_READONLY_DATABASE
IS_READONLY_DATABASE()
Returns TRUE if the database is a read only database. (HyperSQL)
IS_READONLY_DATABASE_FILES
IS_READONLY_DATABASE_FILES()
Returns TRUE if the database is a read-only files database. In this kind of database, it is possible to modify the data,
but the changes are not persisted to the database files. (HyperSQL)
ISOLATION_LEVEL
ISOLATION_LEVEL()
Returns the current transaction isolation level for the session. Returns either READ COMMITTED or
SERIALIZABLE as a string. (HyperSQL)
SESSION_ID
SESSION_ID()
Returns the ID of the session as a BIGINT value. Each session ID is unique during the operational lifetime of the
database. IDs are reset after a shutdown and restart. (HyperSQL)
SESSION_ISOLATION_LEVEL
SESSION_ISOLATION_LEVEL()
Returns the default transaction isolation level for the current session. Returns either READ COMMITTED or
SERIALIZABLE as a string. (HyperSQL)
DATABASE_ISOLATION_LEVEL
DATABASE_ISOLATION_LEVEL()
Returns the default transaction isolation level for the database. Returns either READ COMMITTED or
SERIALIZABLE as a string. (HyperSQL)
TRANSACTION_SIZE
TRANSACTION_SIZE()
Returns the row change count for the current transaction. Each row change represents a row INSERT or a row DELETE
operation. There will be a pair of row change operations for each row that is updated. (HyperSQL)
TRANSACTION_ID
TRANSACTION_ID()
Returns the current transaction ID for the session as a BIGINT value. The database maintains a global incremental
id which is allocated to new transactions and new actions (statement executions) in different sessions. This value is
unique to the current transaction. (HyperSQL)
ACTION_ID
ACTION_ID()
Returns the current action ID for the session as a BIGINT value. The database maintains a global incremental id which
is allocated to new transactions and new actions (statement executions) in different sessions. This value is unique to
the current action. (HyperSQL)
TRANSACTION_CONTROL
TRANSACTION_CONTROL()
Returns the current transaction model for the database. Returns LOCKS, MVLOCKS or MVCC as a string.
(HyperSQL)
LOB_ID
LOB_ID( <column reference> )
Returns the internal ID of a lob as a BIGINT value. Lob IDs are unique and never reused. The <column reference> is the
name of the column (or variable, or argument) which is a CLOB or BLOB. Returns null if the value is null. (HyperSQL)
ROWNUM
ROWNUM()
ROW_NUMBER
ROW_NUMBER() OVER()
Returns the current row number (from 1) being processed in a select statement. This has the same semantics as the
ROWNUM pseudo-column in Oracle syntax mode, but can be used in any syntax mode. The function is used in a
SELECT or DELETE statement. The ROWNUM of a row is incremented as the rows are added to the result set. It is
therefore possible to use a condition such as WHERE ROWNUM() < 10, but not ROWNUM() > 10 or ROWNUM
= 10. The ROW_NUMBER() OVER() alternative performs the same function and is included for compatibility with
other database engines. (HyperSQL)
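As an illustration of the condition rule above, assuming a hypothetical customer table:

```sql
-- returns at most the first 9 rows of the result, numbered from 1
SELECT ROWNUM(), name FROM customer WHERE ROWNUM() < 10;
```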
Mode of Operation
The decision to run HyperSQL as a separate server process or as an in-process database should be based on the
following:
When HyperSQL is run as a server on a separate machine, it is isolated from hardware failures and crashes on the
hosts running the application.
When HyperSQL is run as a server on the same machine, it is isolated from application crashes and memory leaks.
Server connections are slower than in-process connections due to the overhead of streaming the data for each JDBC
call.
You can reduce client/server traffic by using SQL stored procedures, which cut the number of JDBC execute calls.
During development, it is better to use a Server with server.silent=false, which displays the statements sent to the
server on the console window.
To improve speed of execution for statements that are executed repeatedly, reuse a parameterized PreparedStatement
for the lifetime of the connection.
Database Types
There are three types of database: mem:, file: and res:. The mem: type is held entirely in memory and not persisted
to file. The file: type is persisted to file. The res: type is also based on files, but the files are loaded from
the classpath, similar to resource and class files. Changes to the data in file: databases are persisted, unless the
database is readonly, or files_readonly (using optional property settings). Changes to res: databases are
not persisted.
Readonly Databases
A file: catalog can be made readonly permanently, or it can be opened as readonly. To make the database permanently
readonly, the property/value pair readonly=true can be added to the .properties file of the database. The SHUTDOWN
command must be used to close the database before making this change.
It is also possible to open a normal database as readonly. For this, the property can be included in the URL of the
first connection to the database.
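For example, a connection URL of this form opens an existing database as readonly (mydb is an assumed database name):

```
jdbc:hsqldb:file:mydb;readonly=true
```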
System Management
With readonly databases, it is still possible to insert and delete rows in TEMP tables.
Tables
TEXT tables are designed for special applications where the data has to be in an interchangeable format, such as CSV
(comma separated values). TEXT tables should not be used for routine storage of data that changes a lot.
MEMORY tables and CACHED tables are generally used for data storage. The difference between the two is as
follows:
The data for all MEMORY tables is read from the *.script file when the database is started and stored in memory.
In contrast the data for cached tables is not read into memory until the table is accessed. Furthermore, only part of
the data for each CACHED table is held in memory, allowing tables with more data than can be held in memory.
When the database is shutdown in the normal way, all the data for MEMORY tables is written out to the disk. In
comparison, the data in CACHED tables that has changed is written out during operation and at shutdown.
The size and capacity of the data cache for all the CACHED tables is configurable. This makes it possible to allow
all the data in CACHED tables to be cached in memory. In this case, speed of access is good, but slightly slower
than MEMORY tables.
For normal applications it is recommended that MEMORY tables are used for small amounts of data, leaving
CACHED tables for large data sets. For special applications in which speed is paramount and a large amount of free
memory is available, MEMORY tables can be used for large tables as well.
You can change the type of the table with the SET TABLE <table name> TYPE { CACHED | MEMORY } statement.
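As a sketch of the two storage types, with assumed table and column names:

```sql
-- small lookup data, held fully in memory
CREATE MEMORY TABLE lookup_codes (code INT PRIMARY KEY, name VARCHAR(32));
-- large data set, only partially cached in memory
CREATE CACHED TABLE invoice (id BIGINT PRIMARY KEY, total DECIMAL(10,2));
-- convert an existing table to the CACHED type
SET TABLE lookup_codes TYPE CACHED;
```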
Large Objects
HyperSQL 2.0 supports dedicated storage and access to BLOB and CLOB objects. These objects can have huge sizes.
BLOB or CLOB is specified as the type of a column of the table. Afterwards, rows can be inserted into the table using
a PreparedStatement for efficient transfer of large LOB data to the database. In mem: catalogs, CLOB and BLOB data
is stored in memory. In file: catalogs, this data is stored in a single separate file which has the extension *.lobs. The
size of this file can grow to huge, terabyte figures. By default, a minimum of 32 KB is allocated to each LOB. You can
reduce this if your LOBs are generally smaller.
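As a sketch (the table and column names are assumptions):

```sql
-- table with LOB columns; rows are then inserted via a PreparedStatement
CREATE TABLE document (id BIGINT PRIMARY KEY, title VARCHAR(100), body CLOB, attachment BLOB);
-- allocate LOB space in 4 KB units instead of the default 32 KB
SET FILES LOB SCALE 4;
```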
LOB data should be stored in the database using a JDBC PreparedStatement object. The streaming methods send the
LOB to the database in one operation as a binary or character stream. Inside the database, the disk space is allocated
as needed and the data is saved as it is being received. LOB data should be retrieved from the database using a JDBC
ResultSet method. When a streaming method is used to retrieve a LOB, it is retrieved in large chunks in a transparent
manner. LOB data can also be retrieved as String or byte[], but these methods use more memory and may not be
practical for large objects.
LOB data is not duplicated in the database when a lob is copied from one table to another. The disk space is reused
when a LOB is deleted and is no longer contained in any table. This happens only at the time of a CHECKPOINT.
By using a dedicated LOB store, HyperSQL achieves consistently high speeds (usually over 20 MB/s) for both storage
and retrieval of LOBs.
There is an internal LOBS schema in the database to store the ids, sizes and addresses of the LOBs (but not the actual
LOB data) in a few system tables. This schema is stored in the database as MEMORY tables. Therefore the amount of
JVM memory should be increased when more than tens of thousands of LOBs are stored in the database. If your
database contains more than a few hundreds of thousands of LOBs and memory use becomes an issue, you can change
one or all LOB schema tables to CACHED tables, using a SET TABLE <table name> TYPE CACHED statement for each.
Deployment context
The files used for storing HyperSQL database data are all in the same directory. New files are always created and
deleted by the database engine. Two simple principles must be observed:
The Java process running HyperSQL must have full privileges on the directory where the files are stored. This
includes create and delete privileges.
The file system must have enough spare room both for the 'permanent' and 'temporary' files. The default maximum
size of the *.log file is 50MB. The *.data file can grow to up to 64GB (more if the default has been increased).
The .backup file can be up to the size of the *.data file. The *.lobs file can grow to several terabytes. The temporary
files created at the time of a SHUTDOWN can be equal in size to the *.script file and the .data file.
In desktop deployments, virus checker programs may interfere with the creation and modification of database files.
You should exclude the directory containing the database files from virus checking.
System Operations
A database is opened when the first connection is successfully made. It remains open until the SHUTDOWN command
is issued. If the connection property shutdown=true is used for the first connection to the database, the database is
shutdown when the last connection is closed. Otherwise the database remains open and will accept the next connection
attempt.
The SHUTDOWN command shuts down the database properly and allows the database to be reopened quickly. This
command may take some seconds as it saves all the modified data in the .script and .data files. Variants of
SHUTDOWN such as SHUTDOWN COMPACT and SHUTDOWN SCRIPT can be used from time to time to reduce the
overall size of the database files. Another variant is SHUTDOWN IMMEDIATELY which ensures all changes to data
are stored in the .log file but does not save the changes in .script and .data files. The shutdown is performed
quickly but the database will take much longer to reopen.
During the lifetime of the database the checkpoint operation may be performed from time to time. The SET FILES
LOG SIZE <value> setting and its equivalent URL property determine the frequency of automatic checkpoints.
An online backup also performs a checkpoint when the backup is not a hot backup. A checkpoint can be performed by
the user at any time using the CHECKPOINT statement. The main purpose of checkpoints is to reduce the total size of
database files and to allow a quick restart in case the database is closed without a proper shutdown. The CHECKPOINT
DEFRAG variant compacts the .data file in a similar way to SHUTDOWN COMPACT. Obviously this variant
takes much longer than a normal CHECKPOINT. A database setting allows a CHECKPOINT DEFRAG to be performed
automatically when wasted space in the .data file exceeds the specified percentage.
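For example, with assumed values:

```sql
-- perform an automatic checkpoint each time the .log file exceeds 100 MB
SET FILES LOG SIZE 100;
-- perform CHECKPOINT DEFRAG automatically when wasted space exceeds 30 percent
SET FILES DEFRAG 30;
```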
In a multi-user application, automatic or user-initiated checkpoints are delayed until all other sessions have
committed or rolled back. During a checkpoint, other sessions cannot access the database tables but can access the
INFORMATION_SCHEMA system tables.
The directory name must end with a slash to distinguish it as a directory, and the whole string must be in single quotes
like so: 'subdir/nesteddir/'.
Normal backup may take a long time with very large databases. Hot backup may be used in those situations. This type
of backup does not perform a checkpoint and allows access to the database while backup is in progress.
BACKUP DATABASE TO <directory name> NOT BLOCKING [ AS FILES ]
If you add AS FILES to the statements, the database files are backed up as separate files in the directory, without any
gzip compression or tar archiving.
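Two sketches of the statement, using an assumed backup/ directory relative to the database files:

```sql
-- blocking backup to an automatically named .tar.gz archive in the backup/ directory
BACKUP DATABASE TO 'backup/' BLOCKING;
-- hot backup of a very large database, copied as separate files without archiving
BACKUP DATABASE TO 'backup/' NOT BLOCKING AS FILES;
```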
See the next section under Statements for details about the command and its options. See the sections below about
restoring a backup.
where tardir/backup.tar is a file path to the *.tar or *.tar.gz file to be created in your file system, and
dbdir/dbname is the file path to the catalog file base name (in same fashion as in server.database.* settings
and JDBC URLs with catalog type file:).
Examining Backups
You can list the contents of backup tar files with DbBackup on your operating system command line, or with any
pax-compliant tar or pax client (this includes GNU tar). You can also give regular expressions at the end of the
command line if you are only interested in some of the file entries in the backup. Note that these are real regular
expressions, not shell globbing patterns, so you would use .+script to match entries ending in "script", not *script.
You can examine the contents of the backup in their entirety by restoring the backup, as explained in the following
section, to a temporary directory.
Restoring a Backup
You use DbBackup on your operating system command line to restore a catalog from a backup.
where tardir/backup.tar is a file path to the *.tar or *.tar.gz file to be read, and dbdir is the target directory
to extract the catalog files into. Note that dbdir specifies a directory path, without the catalog file base name. The
files will be created with the names stored in the tar file (and which you can see as described in the preceding section).
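A restore command of this form can be used (the path to the HyperSQL jar is an assumption):

```
java -cp lib/hsqldb.jar org.hsqldb.lib.tar.DbBackupMain --extract tardir/backup.tar dbdir
```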
After restoring the database, you can connect to it as usual.
Encrypted Databases
HyperSQL supports encrypted databases. Encryption services use the Java Cryptography Extensions (JCE) and the
ciphers installed with the JRE. HyperSQL itself does not contain any cryptography code.
Three elements are involved in specifying the encryption method and key. A cipher, together with its configuration is
identified by a string which includes the name of the cipher and optional parameters. A provider is the fully qualified
class name of the cipher provider. A key is represented as a hexadecimal string.
If the default provider (the built-in JVM ciphers) is used, null should be specified as the provider. The CRYPT_KEY
function returns a hexadecimal key. The function call can be made in any HyperSQL database, so long as the provider
class is on the classpath. This key can be used to create a new encrypted database. Calls to this function always return
different keys, based on generated random values.
As an example, a call to CRYPT_KEY('Blowfish', null) returned the string, '604a6105889da65326bf35790a923932'.
To create a new database, the URL below is used:
jdbc:hsqldb:file:<database
path>;crypt_key=604a6105889da65326bf35790a923932;crypt_type=blowfish
The third property name is crypt_provider. This is specified only when the provider is not the default provider.
HyperSQL works with any symmetric cipher that may be available from the JVM.
The files that are encrypted include the .script, .data, .backup and .log files. The .lobs file is not encrypted by default.
The property crypt_lobs=true must be specified to encrypt the .lobs file. When this property is used, the blobs and
clobs are both compressed and encrypted.
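Using the example key above, a URL that also encrypts the .lobs file would look like this:

```
jdbc:hsqldb:file:<database path>;crypt_key=604a6105889da65326bf35790a923932;crypt_type=blowfish;crypt_lobs=true
```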
Speed Considerations
General operations on an encrypted database are performed the same as with any database. However, some operations
are significantly slower than with the equivalent cleartext database. With MEMORY tables, there is no difference
to the speed of SELECT statements, but data change statements are slower. With CACHED tables, the speed of all
statements is slower.
Security Considerations
Security considerations for encrypted databases have been discussed at length in HSQLDB discussion groups.
Development team members have commented that encryption is not a panacea for all security needs. The following
issues should be taken into account:
Encrypted files are relatively safe in transport, but because databases contain many repeated values and words,
especially known tokens such as CREATE, INSERT, etc., breaking the encryption of a database may be simpler
than breaking that of an unknown file.
Only the files are encrypted, not the memory image. Poking into computer memory, while the database is open,
will expose the contents of the database.
HyperSQL is open source. Someone who has the key, can compile and use a modified version of the program that
saves a full cleartext dump of an encrypted database.
Therefore encryption is generally effective only when the users who have access to the crypt key are trusted.
Early versions of JAMon were developed with HSQLDB and had to be integrated into HSQLDB at code level. The
latest versions can be added on as a proxy in a much simpler fashion.
Database Security
HyperSQL has extensive security features which are implemented at different levels and covered in different chapters
of this guide.
1. The server can use SSL and IP address access control lists. See the HyperSQL Network Listeners (Servers) chapter.
2. You can define a system property to stop the database engine accessing the Java static functions that are on the
classpath, apart from a limited set that you allow. See Securing Access to Classes in the SQL-Invoked Routines
chapter.
3. You can define a system property to allow access to files on the file system outside the database directory and its
children. This access is only necessary if you use TEXT tables. See the Text Tables chapter.
4. The database files can be encrypted. Discussed in this chapter.
5. Within the database, the DBA privileges are required for system and maintenance jobs.
6. You can define users and roles and grant them access on different database objects. Each user has a password and
is granted a set of privileges. See the Access Control chapter.
7. You can define a password complexity check function for new and changed passwords. This is covered below
under Authentication Settings.
8. You can use external authentication instead of internally stored password to authenticate users for each database.
This is covered below under Authentication Settings.
HyperSQL security is multi-layered and avoids any loopholes to circumvent security. It is however the user's
responsibility to enable the required level of security.
Security Defaults
The default settings are generally adequate for most embedded use of the database or for servers on the host that are
accessed from the same machine. For servers accessed within a network, and especially for those accessed from outside
the network, additional security settings must be used.
The default settings for server and web server do not use SSL or IP access control lists. These features are enabled
programmatically, or with the properties used to start the server.
The default settings allow a database user with the DBA role or with schema creation role to access static functions on
the classpath. You can disable this feature or limit it to specific classes and methods. This can be done programmatically
or by setting a system property when you start a server.
If access to specific static functions is granted, then these functions must be considered as part of the database program
and checked for any security flaws before inclusion in the classpath.
The default settings do not allow a user to access files outside the database directory. This access is for TEXT table
source files. You can override this programmatically or with a system property when you start a server.
The encryption of database files does not utilise any user-supplied information for encryption keys. This level of security
is outside the realm of users and passwords.
The first user for a new database has the DBA role. This user name was always SA in older versions of HSQLDB,
but not in the latest versions. The name of the first DBA user and its password can be specified when the database is
created by the first connection to the database. These settings are then stored in the database.
The initial user with the DBA role should be used for admin purposes only. At least one additional role should be
created for normal database use in the application and at least one additional user should be created and granted this
role. The new role should not be given the DBA role. It can be given the CREATE_SCHEMA role, which allows it to
create and access multiple schemas. Alternatively, the user with the DBA role can create the schemas and their objects
and then grant specific privileges on the objects to the non-DBA role.
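A sketch of such a setup, with assumed role, user and password names:

```sql
-- role for normal application use, without the DBA role
CREATE ROLE datamanager;
GRANT CREATE_SCHEMA TO datamanager;
-- a non-DBA user granted the new role
CREATE USER appuser PASSWORD 'app_secret';
GRANT datamanager TO appuser;
```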
Authentication Control
Authentication is the mechanism that determines if a user can access the database at all. Once authentication is
performed, the authorization mechanism is used to determine which database objects the particular user can access.
The default authentication mechanism is password authentication. Each user is created with a password, which is
stored in the database and checked each time a new database connection is created.
Password Complexity Check
HyperSQL allows you to define a function that checks the quality of the passwords defined in the database. The
passwords are stored in the database. Each time a user connects, the user's name and password are checked against the
stored list of users and passwords. The connection attempt is rejected if there is no match.
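As a sketch, assuming the SET DATABASE PASSWORD CHECK FUNCTION statement covered under Authentication Settings, with PASSWORD referring to the password being checked:

```sql
-- reject passwords shorter than 8 characters
SET DATABASE PASSWORD CHECK FUNCTION
  BEGIN ATOMIC
    IF CHAR_LENGTH(PASSWORD) < 8 THEN
      RETURN FALSE;
    ELSE
      RETURN TRUE;
    END IF;
  END
```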
External Authentication
You can use an external authentication mechanism instead of the internal authentication mechanism. HyperSQL allows
you to define a function that checks the combination of database unique name, user name, and password for each
connection attempt. The function can use external resources to authenticate the user. For example, a directory server
may be used. The password may be ignored if the external resource can verify the user's credential without it.
You can override external authentication for a user with the ALTER USER statement. See the Access Control chapter.
Statements
System level statements are listed in this section. Statements that begin with SET DATABASE or SET FILES are for
properties that have an effect on the normal operation of HyperSQL. The effects of these statements are also discussed
in different chapters.
System Operations
These statements perform a system level action.
SHUTDOWN
shutdown statement
<shutdown statement> ::= SHUTDOWN [IMMEDIATELY | COMPACT | SCRIPT]
Shuts down the database. If the optional qualifier is not used, a normal SHUTDOWN is performed. A normal
SHUTDOWN ensures all data is saved correctly and the database opens without delay on next use.
SHUTDOWN
Normal shutdown saves all the database files, then deletes the .log file (and the .backup
file in the default mode). This does the same thing as CHECKPOINT, but closes the
database when it completes. The database opens without delay on next use.
SHUTDOWN
IMMEDIATELY
Saves the *.log file and closes the database files. This is the quickest form of
shutdown. This command should not be used as the routine method of closing the
database, because when the database is accessed next time, it may take a long time
to start.
SHUTDOWN COMPACT
This is similar to normal SHUTDOWN, but reduces the *.data file to its minimum
size. It can take much longer than normal SHUTDOWN. This should not be used
routinely.
SHUTDOWN SCRIPT
This is similar to SHUTDOWN COMPACT, but it does not rewrite the *.data
and text table files. After SHUTDOWN SCRIPT, only the *.script and
*.properties files remain. At the next startup, these files are processed and the
*.data file is created if there are cached tables. This command in effect performs
part of the job of SHUTDOWN COMPACT, leaving the other part to be performed
automatically at the next startup.
This command produces a full script of the database which can be edited for special
purposes prior to the next startup.
Only a user with the DBA role can execute this statement.
BACKUP DATABASE
backup database statement
<backup database statement> ::= BACKUP DATABASE TO <file path> [SCRIPT] {[NOT]
COMPRESSED} {[NOT] BLOCKING} [AS FILES]
Backs up the database to the specified <file path> for archiving purposes.
The <file path> can be in two forms. If the <file path> ends with a forward slash, it specifies a directory. In
this case, an automatic name for the archive is generated that includes the date, time and the base name of the database.
The database is backed up to this archive file in the specified directory. The archive is in .tar.gz or .tar format
depending on whether it is compressed or not.
If the <file path> does not end with a forward slash, it specifies a user-defined file name for the backup archive.
The file extension must be either .tar.gz or .tar and this must match the compression option.
The default set of options is COMPRESSED BLOCKING.
If SCRIPT is specified, the backup will contain a *.script file, which contains all the data and settings of the
database. This type of backup is suitable for smaller databases. With larger databases, this takes a long time. When
the SCRIPT option is not used, the backup set will consist of the current snapshot of all database files.
If NOT COMPRESSED is specified, the backup is a tar file, without compression. Otherwise, it is in gzip format.
The qualifier, BLOCKING, means all database operations are suspended during backup. During backup, a
CHECKPOINT command is silently executed. This mode is always used when SCRIPT is specified.
Hot backup is performed if NOT BLOCKING is specified. In this mode, the database can be used during backup. This
mode should only be used with very large databases. A hot backup set is less compact and takes longer to restore and
use than a normal backup set produced with the BLOCKING option. You can perform a CHECKPOINT just before
a hot backup in order to reduce the size of the backup set.
If AS FILES is specified, the database files are copied to a directory specified by <file path> without any compression.
The file path must be a directory. If the directory does not exist, it is created. The file path may be absolute or relative.
If it is relative, it is interpreted as relative to the location of database files. When AS FILES is specified, SCRIPT or
COMPRESSED options are not available. The backup can be performed as BLOCKING or NOT BLOCKING.
The HyperSQL jar also contains a program that creates an archive of an offline database. It also contains a program
to expand an archive into database files. These programs are documented in this chapter under Backing up Database
Catalogs.
Only a user with the DBA role can execute this statement.
CHECKPOINT
checkpoint statement
<checkpoint statement> ::= CHECKPOINT [DEFRAG]
Closes the database files, rewrites the script file, deletes the log file and reopens the database.
If DEFRAG is specified, also shrinks the *.data file to its minimum size. CHECKPOINT DEFRAG time depends on
the size of the database and can take a long time with huge databases.
A checkpoint on a multi-user database waits until all other sessions have committed or rolled back. While the
checkpoint is in progress other sessions are kept waiting. Checkpoint does not close any sessions.
Only a user with the DBA role can execute this statement.
SCRIPT
script statement
<script statement> ::= SCRIPT [<file name>]
Returns a script containing SQL statements that define the database, its users, and its schema objects. If <file name>
is not specified, the statements are returned in a ResultSet, with each row containing an SQL statement. No data
statements are included in this form. The optional file name is a single-quoted string. If <file name> is specified,
then the script is written to the named file. In this case, all the data in all tables of the database is included in the script
as INSERT statements.
Only a user with the DBA role can execute this statement.
Database Settings
These statements change the database settings.
SET DATABASE COLLATION
set database collation statement
<set database collation statement> ::= SET DATABASE COLLATION <collation name>
[ NO PAD | PAD SPACE ]
Each database can have its own default collation. Sets the collation from the set of collations supported by HyperSQL.
Once this command has been issued, the database can be opened in any JVM and will retain its collation.
All collations pad the shorter string with spaces when two strings are compared. If NO PAD is specified, comparison
is performed without padding. The default system collation is named SQL_TEXT. To use the default without padding
use SET DATABASE COLLATION SQL_TEXT NO PAD.
After you change the collation for a database that contains collated data, you must execute SHUTDOWN COMPACT
or SHUTDOWN SCRIPT in order to recreate the indexes.
Only a user with the DBA role can execute this statement.
Collations are discussed in the Schemas and Database Objects chapter. Some examples of setting the database collation
follow:
-- this collation is an ascii collation with Upper Case Comparison (converts strings to uppercase
for comparison)
SET DATABASE COLLATION SQL_TEXT_UCC
-- this collation is case-insensitive English
SET DATABASE COLLATION "English 1"
-- this collation is case-sensitive French
SET DATABASE COLLATION "French 2"
This setting, SET DATABASE DEFAULT RESULT MEMORY ROWS, limits the number of result set rows held in
memory and applies to all sessions. Individual sessions can change the value with the SET SESSION RESULT
MEMORY ROWS statement. The default is 0, meaning all result sets are held in memory.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.result_max_memory_rows.
SET DATABASE DEFAULT TABLE TYPE
set database default table type statement
<set database default table type> ::= SET DATABASE DEFAULT TABLE TYPE { CACHED
| MEMORY }
Sets the type of table created when the next CREATE TABLE statement is executed. The default is MEMORY.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.default_table_type.
SET DATABASE EVENT LOG LEVEL
set database event log level statement
<set database event log level> ::= SET DATABASE EVENT LOG [ SQL ] LEVEL { 0
| 1 | 2 | 3 }
When the SQL option is not used, this statement sets the amount of information logged in the internal, database-specific
event log. Level 0 means no log. Level 1 means only important (error) events. Level 2 means more events,
including both important and less important (normal) events. Level 3 includes even more details. For readonly and
mem: databases, if the level is set above 0, the log messages are directed to stderr.
The events are logged in a file with the extension .app.log alongside the main database files.
This is equivalent to the connection property hsqldb.applog.
When the SQL option is used, this statement logs the SQL statements as they are executed. Each log line contains the
timestamp and the session number, followed by the SQL statement and JDBC arguments if any.
Levels 1, 2 and 3 are supported. Level 1 logs only commits and rollbacks, while Levels 2 and 3 log all statements. Level
2 truncates long statements, while Level 3 reports the full statement and parameter values.
The logged lines are stored in a file with the extension .sql.log alongside the main database files.
This is equivalent to the connection property hsqldb.sqllog.
Only a user with the DBA role can execute this statement.
In version 2.3, the equivalent URL properties, hsqldb.app_log and hsqldb.sql_log, can be used not only
for a new database, but also when opening an existing file database to change the event log level.
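For example:

```sql
-- log errors and normal events to the .app.log file
SET DATABASE EVENT LOG LEVEL 2;
-- log all statements with full parameter values to the .sql.log file
SET DATABASE EVENT LOG SQL LEVEL 3;
```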
In an .sql.log file created with log Level 3, each line begins with a timestamp, followed by the session number, then
the SQL statement; the values of prepared statement parameters are shown in parentheses at the end of the statement.
SET DATABASE GC
set database gc statement
<set database gc statement> ::= SET DATABASE GC <unsigned integer literal>
An optional property which forces calls to System.gc() after the specified number of row operations. The default
value for this property is 0, which means no System.gc() calls. Usual values for this property start from 10000,
depending on the system and the memory allocation. This property may be useful in some in-process deployments,
especially with older JVM implementations.
Only a user with the DBA role can execute this statement.
SET DATABASE TEXT TABLE DEFAULTS
set database text table defaults statement
<set database text table defaults statement> ::= SET DATABASE TEXT TABLE DEFAULTS
<character literal>
An optional property to override default text table settings. The string literal has the same format as the string used for
setting the data source of a text table, but without the file name. See the Text Tables chapter.
Only a user with the DBA role can execute this statement.
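For example (the chosen separator and encoding are assumptions):

```sql
-- use | as the field separator and UTF-8 encoding for all new text tables
SET DATABASE TEXT TABLE DEFAULTS 'fs=|;encoding=UTF-8';
```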
SET DATABASE TRANSACTION CONTROL
set database transaction control statement
<set database transaction control statement> ::= SET DATABASE TRANSACTION CONTROL
{ LOCKS | MVLOCKS | MVCC }
Set the concurrency control system for the database. It can be issued only when all sessions have been committed or
rolled back. This command and its modes are discussed in the Sessions and Transactions chapter.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.tx.
SET DATABASE TRANSACTION ROLLBACK ON CONFLICT
set database transaction rollback on conflict statement
<set database transaction rollback on conflict statement> ::= SET DATABASE
TRANSACTION ROLLBACK ON CONFLICT { TRUE | FALSE }
When a transaction deadlock or conflict is about to happen, the current transaction is rolled back and an exception is
raised. When this property is set to FALSE, the transaction is not rolled back. Only the latest statement that would cause the
conflict is undone and an exception is raised. The property should not be changed unless the application can quickly
perform an alternative statement and complete the transaction. It is provided for compatibility with other database
engines which do not roll back the transaction upon deadlock. This command is also discussed in the Sessions and
Transactions chapter.
223
System Management
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.tx_conflict_rollback.
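A sketch of switching the property off, suitable only for an application that is prepared to retry after a conflict:

```sql
-- on conflict, the session now receives an exception
-- but its transaction is kept open
SET DATABASE TRANSACTION ROLLBACK ON CONFLICT FALSE
```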
SET DATABASE DEFAULT ISOLATION LEVEL
set database default isolation level statement
<set database default isolation level> ::= SET DATABASE DEFAULT ISOLATION LEVEL
{ READ COMMITTED | SERIALIZABLE }
Sets the transaction isolation level for new sessions. The default is READ COMMITTED. Each session can also set
its isolation level.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.tx_level.
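For example, the database default can be raised while an individual session still chooses its own level:

```sql
-- new sessions start in SERIALIZABLE
SET DATABASE DEFAULT ISOLATION LEVEL SERIALIZABLE

-- a session can still override its own isolation level
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED
```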
SET DATABASE UNIQUE NAME
set database unique name
<set database unique name statement> ::= SET DATABASE UNIQUE NAME <identifier>
Each HyperSQL catalog (database) has an engine-generated internal name. This name is a 16-character string,
beginning with HSQLDB and based on the time of creation of the database. The name is used for the log events that
are sent to external logging frameworks. The new name must be exactly 16 characters long with no spaces.
Only a user with the DBA role can execute this statement.
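For example, with an arbitrary name of exactly 16 characters:

```sql
-- exactly 16 characters, no spaces
SET DATABASE UNIQUE NAME MYDATABASE000001
```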
SET DATABASE SQL NAMES
set database sql names statement
<set database sql names statement> ::= SET DATABASE SQL NAMES { TRUE | FALSE }
Enable or disable full enforcement of the rule that prevents SQL keywords from being used for database object names such
as columns and tables. The default is FALSE, meaning disabled.
SQL Standard requires enforcement. It is better to enable this check, in order to improve the quality and correctness
of SQL statements.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.enforce_names.
SET DATABASE SQL REGULAR NAMES
set database sql regular names statement
<set database sql regular names statement> ::= SET DATABASE SQL REGULAR NAMES
{ TRUE | FALSE }
Enable or disable the use of the underscore character at the beginning of a name, or the dollar character anywhere in
database object names such as columns and tables. The default is TRUE, which means such characters are not allowed
in names.
SQL Standard does not allow the underscore character at the start of names, and does not allow the dollar character
anywhere in a name. This setting can be changed to FALSE for compatibility with an existing database, or for porting
databases which include names that do not conform to the Standard.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.regular_names.
SET DATABASE SQL REFERENCES
set database sql references statement
<set database sql references statement> ::= SET DATABASE SQL REFERENCES { TRUE
| FALSE }
This command can enable or disable full enforcement of the rule that prevents ambiguous column references in SQL
statements (usually SELECT statements). A column reference is ambiguous when it is not qualified by a table name
or table alias and can refer to more than one column in a JOIN list.
The property is FALSE by default.
SQL Standard requires enforcement. It is better to enable this check, in order to improve the quality and correctness
of SQL statements. When false, the first matching table is used to resolve the column reference.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.enforce_refs.
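A sketch with two hypothetical tables that both contain a column named ID:

```sql
SET DATABASE SQL REFERENCES TRUE

-- with tables a(id, x) and b(id, y), an unqualified reference
-- is now rejected as ambiguous:
--   SELECT id FROM a JOIN b ON a.x = b.y
-- qualifying the column resolves it:
--   SELECT a.id FROM a JOIN b ON a.x = b.y
```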
SET DATABASE SQL TYPES
set database sql types statement
<set database sql types statement> ::= SET DATABASE SQL TYPES { TRUE | FALSE }
This command can enable or disable full enforcement of the rules that prevent illegal type conversions and the use of
parameters or nulls without a type in SQL statements (usually SELECT statements). For example, an INTEGER column
or a DATE column cannot be compared to a character string or searched with a LIKE expression when the property is TRUE.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.enforce_types.
SET DATABASE SQL TTI TYPES
set database sql tti types statement
<set database sql tti types statement> ::= SET DATABASE SQL TTI TYPES { TRUE |
FALSE }
The JDBC Specification up to version 4.1 does not support some SQL Standard built-in types, therefore these types
must be translated to a supported type when accessed through JDBC ResultSet and PreparedStatement methods.
If the property is TRUE, the TIME / TIMESTAMP WITH TIME ZONE types and INTERVAL types are represented in
JDBC methods of ResultSetMetaData and DatabaseMetaData as JDBC datetime types without time zone
and the VARCHAR type respectively. The original type names are preserved.
The property is TRUE by default. If set to FALSE, the type codes for WITH TIME ZONE types will be SQL type
codes as opposed to JDBC type codes.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property jdbc.translate_tti_types.
SET DATABASE SQL CHARACTER LITERAL
set database sql character literal
<set database sql character literal statement> ::= SET DATABASE SQL CHARACTER
LITERAL { TRUE | FALSE }
When the property is TRUE, the data type of character literal strings is CHARACTER. When the property is FALSE
the data type is VARCHAR.
Setting this property FALSE results in strings not padded with spaces in CASE WHEN expressions that have multiple
literal alternatives.
SQL Standard requires the CHARACTER type.
The property is TRUE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.char_literal.
SET DATABASE SQL CONCAT NULLS
set database sql concat nulls statement
<set database sql concat nulls statement> ::= SET DATABASE SQL CONCAT NULLS
{ TRUE | FALSE }
When the property is TRUE, concatenation of a null value with a not-null value results in a null value. When the
property is FALSE, this type of concatenation results in the not-null value.
Setting this property FALSE results in concatenation behaviour similar to Oracle or MS SQL Server.
SQL Standard requires a NULL result.
The property is TRUE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.concat_nulls.
SET DATABASE SQL UNIQUE NULLS
set database sql unique nulls statement
<set database sql unique nulls statement> ::= SET DATABASE SQL UNIQUE NULLS
{ TRUE | FALSE }
When the property is TRUE, with multi-column UNIQUE constraints, it is possible to insert multiple rows for which
one or more of the values for the constraint columns is NULL. When the property is FALSE, if there is any not-null
value in the columns, then the set of values is compared to the existing rows and, if there is a match, an exception is
thrown. The setting FALSE makes the behaviour more restrictive. For example, inserting (1, null) twice is possible
by default, but not possible when the property is FALSE.
Setting this property FALSE results in UNIQUE constraint behaviour similar to Oracle.
SQL Standard requires the default (TRUE) behaviour.
The property is TRUE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.unique_nulls.
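The example above can be sketched as follows (table and constraint names are illustrative):

```sql
CREATE TABLE t (a INT, b INT, CONSTRAINT u UNIQUE (a, b));

INSERT INTO t VALUES (1, NULL);
INSERT INTO t VALUES (1, NULL);  -- accepted with the default TRUE setting

SET DATABASE SQL UNIQUE NULLS FALSE
-- the second INSERT above would now raise a constraint violation
```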
SET DATABASE SQL CONVERT TRUNCATE
set database sql convert truncate
<set database sql convert truncate statement> ::= SET DATABASE SQL CONVERT
TRUNCATE { TRUE | FALSE }
When the property is TRUE, conversion from a floating point value (a DOUBLE value) to an integral type always
truncates the fractional part. When the property is FALSE, rounding takes place instead of truncation. For example,
assigning the value 123456E-2 to an integer column will result in 1234 by default, but 1235 when the property is
FALSE.
Standard SQL considers this behaviour implementation dependent.
The property is TRUE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.convert_trunc.
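The example above can be sketched as follows (the table name is illustrative):

```sql
CREATE TABLE n (i INTEGER);

-- 123456E-2 is the DOUBLE value 1234.56
INSERT INTO n VALUES (123456E-2);  -- stores 1234 with the default setting

SET DATABASE SQL CONVERT TRUNCATE FALSE
INSERT INTO n VALUES (123456E-2);  -- the same assignment now stores 1235
```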
SET DATABASE SQL AVG SCALE
set database sql avg scale
<set database sql avg scale> ::= SET DATABASE SQL AVG SCALE <numeric value>
By default, the result of division and of the AVG and MEDIAN aggregate functions has the same type, including the
scale, as the type of the values. The scale specified with this property is used if it is larger than the scale of
the operation. For example, the average of 5 and 10 is 7 by default, but 7.50 if the scale is specified as 2. The result
of 7/3 is 2 by default, but 2.33 if the scale is specified as 2.
Standard SQL considers this behaviour implementation dependent. Some databases use a default scale larger than zero.
The property is 0 by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.avg_scale.
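The AVG example above can be sketched as follows (the table name is illustrative):

```sql
CREATE TABLE v (i INTEGER);
INSERT INTO v VALUES (5), (10);

SELECT AVG(i) FROM v;   -- 7 with the default scale of 0

SET DATABASE SQL AVG SCALE 2
SELECT AVG(i) FROM v;   -- 7.50 with scale 2
```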
SET DATABASE SQL IGNORECASE
set database sql ignorecase statement
<set database sql ignorecase statement> ::= SET DATABASE SQL IGNORECASE { TRUE
| FALSE }
When the property is TRUE, all declarations of VARCHAR columns in tables or other database objects are converted to
VARCHAR_IGNORECASE. This has a global effect on the database, unlike the SET IGNORECASE statement, which
applies only to the current session.
The property is FALSE by default and should only be used in special circumstances where compatibility with a
different database is required.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.ignore_case.
SET DATABASE SQL LIVE OBJECT
set database sql live object
<set database sql live object> ::= SET DATABASE SQL LIVE OBJECT { TRUE | FALSE }
This property is FALSE by default and can only be used in mem: databases.
When the property is FALSE, all Java objects stored in a column of type OTHER are serialized. When the property
is TRUE, objects are not serialized at all.
This is equivalent to the connection property sql.live_object.
SET DATABASE SQL SYNTAX DB2
set database sql syntax DB2
<set database sql syntax DB2 statement> ::= SET DATABASE SQL SYNTAX DB2 { TRUE
| FALSE }
This property, when set TRUE, enables support for some elements of DB2 syntax. Single-row SELECT statements
(SELECT <expression list> without the FROM clause) are supported and treated as the SQL Standard
equivalent, VALUES <expression list>. The DUAL table is supported, as well as the ROWNUM pseudocolumn. BINARY type definitions such as VARCHAR(L) FOR BIT DATA are supported. Empty DEFAULT clauses
in column definitions are supported.
The property is FALSE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.syntax_db2.
SET DATABASE SQL SYNTAX MSS
set database sql syntax MSS
<set database sql syntax MSS statement> ::= SET DATABASE SQL SYNTAX MSS { TRUE
| FALSE }
This property, when set TRUE, enables support for some elements of SQL Server syntax. Single-row SELECT
statements (SELECT <expression list> without the FROM clause) are supported and treated as the SQL
Standard equivalent, VALUES <expression list>. The parameters of the CONVERT() function are switched in
this mode.
The property is FALSE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.syntax_mss.
SET DATABASE SQL SYNTAX MYS
set database sql syntax MYS
<set database sql syntax MYS statement> ::= SET DATABASE SQL SYNTAX MYS { TRUE
| FALSE }
This property, when set TRUE, enables support for some elements of MySQL syntax. The TEXT data type is translated
to LONGVARCHAR.
In CREATE TABLE statements, [NOT NULL | NULL] can be used immediately after the column type name and
before the DEFAULT clause. AUTO_INCREMENT is translated to the GENERATED BY DEFAULT AS IDENTITY
clause.
Single-row SELECT statements (SELECT <expression list> without the FROM clause) are supported and
treated as the SQL Standard equivalent, VALUES <expression list>.
The property is FALSE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.syntax_mys.
SET DATABASE SQL SYNTAX ORA
set database sql syntax ORA
<set database sql syntax ORA statement> ::= SET DATABASE SQL SYNTAX ORA { TRUE
| FALSE }
This property, when set TRUE, enables support for some elements of Oracle syntax. The DUAL table is supported,
together with ROWNUM, NEXTVAL and CURRVAL syntax and semantics.
The non-standard types are translated to supported standard types. BINARY_DOUBLE and BINARY_FLOAT are
translated to DOUBLE. LONG RAW and RAW are translated to VARBINARY with long or medium length limits.
LONG and VARCHAR2 are translated to VARCHAR with long or medium length limits. NUMBER is translated to
DECIMAL. Some extra type conversions and no-arg functions are also allowed in this mode.
The property is FALSE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.syntax_ora.
SET DATABASE SQL SYNTAX PGS
set database sql syntax PGS
<set database sql syntax PGS statement> ::= SET DATABASE SQL SYNTAX PGS { TRUE
| FALSE }
This property, when set TRUE, enables support for some elements of PostgreSQL syntax. The TEXT data type is
translated to LONGVARCHAR, while the SERIAL data type is translated to BIGINT together with GENERATED
BY DEFAULT AS IDENTITY.
Single-row SELECT statements (SELECT <expression list> without the FROM clause) are supported and
treated as the SQL Standard equivalent, VALUES <expression list>.
The functions NEXTVAL(<sequence name string>), CURRVAL(<sequence name string>) and
LASTVAL() are supported in this compatibility mode.
The property is FALSE by default.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property sql.syntax_pgs.
SET DATABASE REFERENTIAL INTEGRITY
set database referential integrity statement
<set database referential integrity statement> ::= SET DATABASE REFERENTIAL
INTEGRITY { TRUE | FALSE }
This command enables or disables the enforcement of referential integrity constraints (foreign key constraints) and
check constraints (apart from NOT NULL), as well as the execution of triggers. By default, all constraints are checked.
The only legitimate use of this statement is before importing large amounts of external data into tables that have
existing FOREIGN KEY constraints. After import, the statement must be used again to enable constraint enforcement.
If you are not sure the data conforms to the constraints, run queries to verify all rows conform to the FOREIGN KEY
constraints and take appropriate actions for the rows that do not conform.
A query example to return the rows in a foreign key table that have no parent is given below:
Example 11.7. Finding foreign key rows with no parents after a bulk import
SELECT * FROM foreign_key_table LEFT OUTER JOIN primary_key_table
ON foreign_key_table.fk_col = primary_key_table.pk_col WHERE primary_key_table.pk_col IS NULL
Only a user with the DBA role can execute this statement.
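The bulk-import pattern described above can be sketched as:

```sql
SET DATABASE REFERENTIAL INTEGRITY FALSE

-- perform the bulk import here, e.g. with a sequence of INSERT statements

SET DATABASE REFERENTIAL INTEGRITY TRUE
-- then verify, as in Example 11.7, that every imported row has a parent
```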
SET FILES BACKUP INCREMENT
set files backup increment statement
<set files backup increment statement> ::= SET FILES BACKUP INCREMENT { TRUE |
FALSE }
Warning: The old, non-incremental setting, FALSE, should not be used at all when the data file is larger than 4GB. If
it is used, the data file is not fully backed up and the backup can be corrupted. The zip compression method is used in
this mode and it is limited to 4GB size.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.inc_backup.
SET FILES CACHE ROWS
set files cache rows statement
<set files cache rows statement> ::= SET FILES CACHE ROWS <unsigned integer
literal>
Sets the maximum number of rows (of CACHED tables) held in the memory cache. The default is 50000 rows.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.cache_rows.
SET FILES CACHE SIZE
set files cache size statement
<set files cache size statement> ::= SET FILES CACHE SIZE <unsigned integer
literal>
Sets the maximum amount of data (of CACHED tables) in kilobytes held in the memory cache. The default is 10000
kilobytes. Note the actual amount of memory used is larger than this value, as this setting does not account for Java
object size overheads.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.cache_size.
SET FILES DEFRAG
set files defrag statement
<set files defrag statement> ::= SET FILES DEFRAG <unsigned integer literal>
Sets the threshold for performing a DEFRAG during a checkpoint. The <unsigned integer literal> is the
percentage of abandoned space in the *.data file. When a CHECKPOINT is performed either as a result of the .log
file reaching the limit set by SET FILES LOG SIZE m, or by the user issuing a CHECKPOINT command, the
amount of space abandoned since the database was opened is checked and if it is larger than the specified percentage, a
CHECKPOINT DEFRAG is performed instead of a CHECKPOINT. As the DEFRAG operation uses a lot of memory
and takes a long time with large databases, setting the threshold well above zero is suitable for databases that are
around 500 MB or larger.
The default is 0, which indicates no DEFRAG. Useful values are between 30 and 60.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.defrag_limit.
SET FILES LOG
set files log statement
<set files log statement> ::= SET FILES LOG { TRUE | FALSE }
Sets logging of database operations on or off. Only a user with the DBA role can execute this statement. This is
equivalent to the connection property hsqldb.log_data.
SET FILES WRITE DELAY
set files write delay statement
<set files write delay statement> ::= SET FILES WRITE DELAY { { TRUE | FALSE }
| <seconds value> | <milliseconds value> MILLIS }
Set the WRITE DELAY property of the database. The WRITE DELAY controls the frequency of file sync for the
log file. When WRITE_DELAY is set to FALSE or 0, the sync takes place immediately at each COMMIT. WRITE
DELAY TRUE performs the sync once every 0.5 seconds (which is the default). A numeric value can be specified
instead.
The purpose of this command is to control the amount of data loss in case of a total system crash. A delay of 1 second
means at most the data written to disk during the last second before the crash is lost. All data written prior to this has
been synced and should be recoverable.
A write delay of 0 impacts performance in high-load situations, as the engine has to wait for the file system to catch up.
To avoid this, you can set the write delay as low as 10 milliseconds.
Each time the SET FILES WRITE DELAY statement is executed with any value, a sync is immediately performed.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection properties hsqldb.write_delay and hsqldb.write_delay_millis.
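For example:

```sql
-- sync the log at most every 10 milliseconds
SET FILES WRITE DELAY 10 MILLIS

-- or sync at each commit
SET FILES WRITE DELAY FALSE
```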
SET FILES SCALE
set files scale
<set files scale statement> ::= SET FILES SCALE <scale value>
Changes the scale factor for the .data file. The default scale is 32 and allows 64GB of data storage capacity. The scale
can be increased in order to increase the maximum data storage capacity. The scale values 8, 16, 32, 64, 128, 256, 512,
1024 are allowed. Scale value 1024 allows a maximum capacity of 2 TB.
This command can be used only when there is no data in CACHED tables.
The scale factor indicates the size of the unit of storage of data in bytes. For example, with a scale factor of 128, a row
containing a small amount of data will use 128 bytes. Larger rows may use multiple units of 128 bytes.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.cache_file_scale.
SET FILES LOB SCALE
set files lob scale
<set files lob scale statement> ::= SET FILES LOB SCALE <scale value>
Changes the scale factor for the .lobs file. The scale is interpreted in kilobytes. The default scale is 32 and allows
64TB of lob data storage capacity. The scale can be reduced in order to improve storage efficiency. If the lobs are a lot
smaller than 32 kilobytes, reducing the scale will reduce wasted space. The scale values 1, 2, 4, 8, 16, 32 are allowed.
For example if the average size of lobs is 4 kilobytes, the default scale of 32 will result in 28KB wasted space for each
lob. Reducing the lob scale to 2 will result in average 1KB wasted space for each lob.
This command can be used only when there is no lob in the database.
Only a user with the DBA role can execute this statement.
This is equivalent to the connection property hsqldb.lob_file_scale.
SET FILES LOB COMPRESSED
set files lob compressed statement
<set files lob compressed statement> ::= SET FILES LOB COMPRESSED { TRUE | FALSE }
Authentication Settings
Two settings are available for authentication control.
When the default password authentication is used, the passwords can be checked for complexity according to
administrative rules.
SET DATABASE PASSWORD CHECK FUNCTION
set database password check function
<set database password check function statement> ::= SET DATABASE PASSWORD CHECK
FUNCTION { <routine body> | NONE }
The routine body is the body of a function that has a VARCHAR parameter and returns a BOOLEAN. This function
checks the PASSWORD submitted as parameter and returns TRUE if it conforms to complexity checks, or FALSE,
if it does not.
The <routine body> can be an SQL block or an external Java function reference. This is covered in the SQL-Invoked Routines chapter.
To disable this mechanism, the token NONE can be specified instead of the <routine body>.
Only a user with the DBA role can execute this statement.
In the examples below, an SQL function and a Java function are used.
SET DATABASE PASSWORD CHECK FUNCTION
BEGIN ATOMIC
IF CHAR_LENGTH(PASSWORD) > 6 THEN
RETURN TRUE;
ELSE
RETURN FALSE;
END IF;
END
SET DATABASE PASSWORD CHECK FUNCTION EXTERNAL NAME
'CLASSPATH:org.anorg.access.AccessClass.accessMethod'
// the Java method is defined like this
public static boolean accessMethod(String param) {
    return param != null && param.length() > 6;
}
It is possible to replace the default password authentication completely with a function that uses external authentication
servers, such as LDAP. This function is called each time a user connects to the database.
SET DATABASE AUTHENTICATION FUNCTION
set database authentication function
<set database authentication function statement> ::= SET DATABASE AUTHENTICATION
FUNCTION { <external body reference> | NONE }
The routine body is an external Java function reference. This function has three String parameters. The first parameter
is the unique name of the database, the second parameter the user name, and the third parameter the password.
External authentication can be used in two different patterns. In the first pattern, user names must be stored in the
database. In the second pattern, user names shouldn't be stored in the database and any names that are stored in the
database are ignored.
In both patterns, the username and password are checked by the authentication function. If the function throws a
runtime exception then authentication fails.
In the first pattern, the function always returns null if authentication is successful.
In the second pattern, the function returns a list of role names that have been granted to the user. These roles must
match the ROLE objects that have been defined in the database.
The Java function should return an instance of org.hsqldb.jdbc.JDBCArrayBasic constructed with a String[] argument
that contains the role names.
Only a user with the DBA role can execute this statement.
SET DATABASE AUTHENTICATION FUNCTION EXTERNAL NAME
'CLASSPATH:org.anorg.access.AccessClass.accessExternalMethod'
// the Java method is defined like this
public static java.sql.Array accessExternalMethod(String database, String user, String password) {
    if (externalCheck(database, user, password)) {
        return null;
    }
    throw new RuntimeException("failed to authenticate");
}
Compatibility Overview
HyperSQL is used more than any other database engine for application testing and development targeted at other
databases. Over the years, this usage has resulted in developers finding and reporting many obscure bugs and
opportunities for enhancement in HyperSQL. The bugs have all been fixed shortly after each report, and the
enhancements were added in later versions.
HyperSQL 2.x has been written to the SQL Standard and avoids the traps caused by superficial imitation of the Standard
by some other RDBMS. The SQL Standard has existed since 1989 and has been expanded over the years in several
revisions. HyperSQL follows SQL:2011. Also, the X/Open specification has defined a number of SQL functions which
are implemented by most RDBMS.
HyperSQL has many property settings that relax conformance to the Standard in order to allow compatibility with other
RDBMS, without breaking the core integrity of the database. These properties are modified with SET DATABASE
SQL statements described in the SQL Conformance Settings section of Management chapter.
HyperSQL is very flexible and provides some other properties which define a preference among various valid choices.
For example the ability to set the transaction model of the database, or the ability to define the scale of the data type
of the result of integer division or average calculation (SET DATABASE SQL AVG SCALE).
Each major RDBMS supports additional functions that are not covered by the standard. Some RDBMS use non-standard
syntax for some operations. Fortunately, most popular RDBMS products have introduced better compatibility
with the Standard in their recent versions, but there are still some portability issues. HyperSQL overcomes the portability
issues using these strategies:
An extensive set of functions covers the SQL Standard, X/Open, and most of the useful functions that other RDBMS
support.
Database properties, which can be specified on the URL or as SQL statements, relax conformance to the Standard
in order to allow non-standard comparisons and assignments allowed by other RDBMS.
Specific SQL syntax compatibility modes allow syntax and type names that are supported by some popular RDBMS.
User-defined types and functions, including aggregate functions, allow any type or function that is supported by
some RDBMS to be defined and used.
Support for compatibility with other RDBMS has been extended with each version of HyperSQL.
PostgreSQL Compatibility
PostgreSQL is fairly compatible with the Standard, but uses some non-standard features.
Use SET DATABASE SQL SYNTAX PGS TRUE or the equivalent URL property sql.syntax_pgs=true to
enable PostgreSQL's non-standard features. References to SERIAL, BIGSERIAL, TEXT and UUID data types,
as well as sequence functions, are translated into HyperSQL equivalents.
Use SET DATABASE TRANSACTION CONTROL MVCC if your application is multi-user.
PostgreSQL functions are generally supported.
For identity columns, PostgreSQL uses a non-standard linkage with an external identity sequence. In most cases,
this can be converted to GENERATED BY DEFAULT AS IDENTITY. In those cases where the identity sequence
needs to be shared by multiple tables, you can use a new HyperSQL 2.x feature GENERATED BY DEFAULT AS
SEQUENCE <sequence name>, which is the equivalent of the PostgreSQL implementation.
In CREATE TABLE statements, the SERIAL and BIGSERIAL types are translated into INTEGER or BIGINT,
with GENERATED BY DEFAULT AS IDENTITY. Usage of DEFAULT NEXTVAL(<sequence name>)
is supported so long as the <sequence name> refers to an existing sequence. This usage is translated into
GENERATED BY DEFAULT AS SEQUENCE <sequence name>.
In SELECT and other statements, the NEXTVAL(<sequence name>) and LASTVAL() functions are supported
and translated into HyperSQL's NEXT VALUE FOR <sequence name> and IDENTITY() expressions.
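The sequence translation described above can be sketched as follows (the sequence, table and column names are illustrative):

```sql
SET DATABASE SQL SYNTAX PGS TRUE

CREATE SEQUENCE customer_seq;
-- DEFAULT NEXTVAL is translated to
-- GENERATED BY DEFAULT AS SEQUENCE customer_seq
CREATE TABLE customer (
    id   BIGINT DEFAULT NEXTVAL('customer_seq'),
    name VARCHAR(40)
);

INSERT INTO customer (name) VALUES ('Aardvark');
-- LASTVAL() is translated to the IDENTITY() expression
SELECT LASTVAL();
```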
PostgreSQL uses a non-standard expression, SELECT 'A Test String' to return a single row table. The
standard form is VALUES('A Test String'). In PGS syntax mode, this type of SELECT is supported.
HyperSQL supports SQL Standard ARRAY types. PostgreSQL also supports this, but not entirely according to the
Standard.
SQL routines are portable, but some syntax elements are different and require changes.
You may need to use SET DATABASE SQL TDC { DELETE | UPDATE } FALSE statements, as PostgreSQL
does not enforce the subtle rules of the Standard for foreign key cascading deletes and updates. PostgreSQL allows
cascading operations to update a field value multiple times with different values; the Standard disallows this.
MySQL Compatibility
The latest versions of MySQL have introduced better Standard compatibility but some of these features have to be
turned on via properties. You should therefore check the current Standard compatibility settings of your MySQL
database and use the available HyperSQL properties to achieve closer results. If you avoid the few anti-Standard
features of MySQL, you can port your databases to HyperSQL.
Using HyperSQL during development and testing of MySQL apps helps to avoid data integrity issues that MySQL
may ignore.
HyperSQL does not have the following non-standard limitations of MySQL.
With HyperSQL, an UPDATE statement can update UNIQUE and PRIMARY KEY columns of a table without
causing an exception due to temporary violation of constraints. These constraints are checked at the end of execution,
therefore there is no need for an ORDER BY clause in an UPDATE statement.
MySQL foreign key constraints are not enforced by the default MyISAM engine. Be aware of the possibility of data
being rejected by HyperSQL due to these constraints.
With HyperSQL INSERT or UPDATE statements either succeed or fail due to constraint violation. MySQL has the
non-standard IGNORE override to ignore violations and alter the data, which is not accepted by HyperSQL.
Unlike MySQL, HyperSQL allows you to modify a table with an INSERT, UPDATE or DELETE statement which
selects from the same table in a subquery.
Follow the guidelines below for converting MySQL databases and applications.
Use SET DATABASE SQL SYNTAX MYS TRUE or the equivalent URL property sql.syntax_mys=true
to enable support for AUTO_INCREMENT and TEXT data types and several other types. These type definitions
are translated into HyperSQL equivalents.
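The translation described above can be sketched as follows (the table and column names are illustrative):

```sql
SET DATABASE SQL SYNTAX MYS TRUE

-- AUTO_INCREMENT becomes GENERATED BY DEFAULT AS IDENTITY,
-- TEXT becomes LONGVARCHAR
CREATE TABLE article (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT NOT NULL
);
```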
Use MVCC with SET DATABASE TRANSACTION CONTROL MVCC if your application is multi-user.
Avoid storing invalid values, for example invalid dates such as '0000-00-00' or '2001-00-00' which are rejected by
HyperSQL.
Avoid the MySQL feature that trims spaces at the end of CHAR values.
In MySQL, a database is the same as a schema. In HyperSQL, several schemas can exist in the same database and can
be accessed transparently. In addition, a HyperSQL server supports multiple separate databases.
In MySQL, older, non-standard forms of database object name case-sensitivity make it difficult to port applications.
Use the latest form, which encloses case-sensitive names in double quotes.
MySQL functions are generally supported, including GROUP_CONCAT.
For fine control over type conversion, check the setting SET DATABASE SQL CONVERT TRUNCATE FALSE.
If you use concatenation of possibly NULL values in your select statements, you may need to change the setting
with SET DATABASE SQL CONCAT NULLS FALSE.
If your application relies on MySQL behaviour for ordering of nulls in SELECT statements with ORDER BY, use
both SET DATABASE SQL NULLS FIRST FALSE and SET DATABASE SQL NULLS ORDER FALSE
to change the defaults.
to change the defaults.
MySQL supports most SQL Standard types (except INTERVAL types), as well as non-standard types, which are also
supported by HyperSQL. Supported types include SMALLINT, INT, BIGINT, DOUBLE, FLOAT, DECIMAL,
NUMERIC, VARCHAR, CHAR, BINARY, VARBINARY, BLOB, DATE, TIMESTAMP (all Standard SQL) and
TINYINT, DATETIME (non Standard). UNSIGNED types are converted to signed.
MySQL uses a non-standard expression, SELECT 'A Test String' to return a single row table. The standard
form is VALUES('A Test String'). In MYS syntax mode, this type of SELECT is supported.
Indexes defined inside CREATE TABLE statements are accepted and created. The index names must be unique
within the schema.
HyperSQL supports ON UPDATE CURRENT_TIMESTAMP for column definitions.
HyperSQL translates INSERT IGNORE, REPLACE and ON DUPLICATE KEY UPDATE variations of INSERT
into predictable and error-free operations.
When INSERT IGNORE is used, if any of the inserted rows would violate a PRIMARY KEY or UNIQUE constraint,
that row is not inserted. The rest of the rows are then inserted only if there is no other violation such as long strings
or type mismatch, otherwise the appropriate error is returned.
When REPLACE or ON DUPLICATE KEY UPDATE is used, the rows that need replacing or updating are updated
with the given values. This works exactly like an UPDATE statement for those rows. Referential constraints and
other integrity checks are enforced and update triggers are activated. The row count returned is simply the total
number of rows inserted and updated.
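For illustration, a sketch of these INSERT variations in MYS syntax mode (the table and values are hypothetical):

```sql
CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(40));
INSERT INTO customer VALUES (1, 'Alice');

-- The row with id = 1 violates the PRIMARY KEY and is skipped;
-- the row with id = 2 is still inserted
INSERT IGNORE INTO customer VALUES (1, 'Bob'), (2, 'Carol');

-- The existing row with id = 1 is updated, exactly as with an UPDATE statement
INSERT INTO customer VALUES (1, 'Dave')
ON DUPLICATE KEY UPDATE name = 'Dave';
```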
MySQL user-defined function and procedure syntax is very similar to SQL Standard syntax. A few changes may
still be required.
Firebird Compatibility
Firebird generally follows the SQL Standard. Applications can be ported to HyperSQL without difficulty.
Oracle Compatibility
Recent versions of Oracle support Standard SQL syntax for outer joins and many other operations. In addition,
HyperSQL features a setting to support Oracle syntax and semantics for the most widely used non-standard features.
Use SET DATABASE SQL SYNTAX ORA TRUE or the equivalent URL property sql.syntax_ora=true
to enable support for some non-standard syntax of Oracle.
Use MVCC with SET DATABASE TRANSACTION CONTROL MVCC if your application is multi-user.
Fine control over MVCC deadlock avoidance is provided by the SET DATABASE TRANSACTION ROLLBACK
ON CONFLICT FALSE statement and the corresponding hsqldb.tx_conflict_rollback connection property.
If your application relies on Oracle behaviour for nulls in multi-column UNIQUE constraints, use SET DATABASE
SQL UNIQUE NULLS FALSE to change the default.
If your application relies on Oracle behaviour for ordering of nulls in SELECT statements with ORDER BY, but
without NULLS FIRST or NULLS LAST, use both SET DATABASE SQL NULLS FIRST FALSE and SET
DATABASE SQL NULLS ORDER FALSE to change the defaults.
If you use the non-standard concatenation of possibly NULL values in your select statements, you may need to
change the setting for SET DATABASE SQL CONCAT NULLS FALSE.
Many Oracle functions are supported, including no-arg functions such as SYSDATE and SYSTIMESTAMP and
more complex ones such as TO_DATE and TO_CHAR.
Non-standard data type definitions such as NUMBER, VARCHAR2, NVARCHAR2, BINARY_DOUBLE,
BINARY_FLOAT, LONG, RAW are translated into the closest HyperSQL / SQL Standard equivalent in ORA
mode.
Non-standard DEFAULT definitions for columns, such as the use of DUAL with a SEQUENCE function are
supported and translated in ORA syntax mode.
The DATE type is interpreted as TIMESTAMP(0) in ORA syntax mode.
The DUAL table and the expressions, ROWNUM, CURRVAL, NEXTVAL are supported in ORA syntax mode.
HyperSQL natively supports operations involving datetime and interval values. These features are based on the
SQL Standard.
Many subtle automatic type conversions, syntax refinements and other common features are supported.
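A short sketch of these features in ORA syntax mode (the sequence and table names are hypothetical):

```sql
SET DATABASE SQL SYNTAX ORA TRUE

CREATE SEQUENCE seq1;

-- DUAL, SYSDATE and sequence expressions are accepted in ORA mode
SELECT SYSDATE FROM DUAL;
SELECT seq1.NEXTVAL FROM DUAL;

-- ROWNUM can be used to limit the number of rows returned
SELECT * FROM invoice WHERE ROWNUM <= 10;
```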
SQL routines are generally portable, but some changes are required.
DB2 Compatibility
DB2 is highly compatible with the SQL Standard (except for its lack of support for the INFORMATION_SCHEMA).
Applications can be ported to HyperSQL without difficulty.
Use SET DATABASE SQL SYNTAX DB2 TRUE or the equivalent URL property sql.syntax_db2=true
to enable support for some non-standard syntax of DB2.
Use MVCC with SET DATABASE TRANSACTION CONTROL MVCC if your application is multi-user.
HyperSQL supports almost the entire syntax of DB2 together with many of the functions. Even local temporary
tables using the SESSION pseudo schema are supported.
The DB2 binary type definition FOR BIT DATA, as well as empty definition of column default values are supported
in DB2 syntax mode.
Many DB2 functions are supported.
The DUAL table and the expressions, ROWNUM, CURRVAL, NEXTVAL are supported in DB2 syntax mode.
SQL routines are highly portable with minimal change.
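As an illustration of the features listed above (the table and column names are hypothetical), after enabling DB2 mode:

```sql
SET DATABASE SQL SYNTAX DB2 TRUE

-- The FOR BIT DATA binary definition and an empty DEFAULT are accepted
CREATE TABLE t1 (
  id INT,
  payload CHAR(16) FOR BIT DATA,
  created TIMESTAMP DEFAULT
);
```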
Connection URL
The normal method of accessing a HyperSQL catalog is via the JDBC Connection interface. An introduction to
different methods of providing database services and accessing them can be found in the SQL Language chapter.
Details and examples of how to connect via JDBC are provided in our JavaDoc for JDBCConnection.
A uniform method is used to distinguish between different types of connection. The common driver identifier is
jdbc:hsqldb: followed by a protocol identifier (mem: file: res: hsql: http: hsqls: https:), then
by host and port identifiers in the case of servers, then by the database identifier. Additional property /
value pairs can be appended to the end of the URL, separated with semicolons.
Database URL prefix: jdbc:hsqldb:mem:
Host and port: not available
Example database: accounts
A lowercase, single-word identifier creates the in-memory database when the first connection is made. Subsequent
use of the same Connection URL connects to the existing DB.
The old form for the URL, jdbc:hsqldb:. creates or connects to the same database as the new form for the
URL, jdbc:hsqldb:mem:.
Database URL prefix: jdbc:hsqldb:file:
Host and port: not available
Example databases: accounts, /opt/db/accounts, C:/data/mydb
The file path specifies the database files. It should consist of a relative or absolute path to the directory containing
the database files, followed by a '/' and the database name. In the above examples, the first refers to a set of
accounts.* files in the directory where the java command for running the application was issued. The second and
third examples refer to absolute paths on the host machine: for example, files named accounts.* in the directory /
opt/db for the accounts database.
Database URL prefix: jdbc:hsqldb:res:
Host and port: not available
Example database: /adirectory/dbname
Database files can be loaded from one of the jars specified as part of the Java command the same way as resource
files are accessed in Java programs. The /adirectory above stands for a directory in one of the jars.
Database URL prefixes: jdbc:hsqldb:hsql: jdbc:hsqldb:hsqls: jdbc:hsqldb:http: jdbc:hsqldb:https:
Example host, port and database alias combinations:
//localhost/an_alias
//192.0.0.10:9500/enrolments
//dbserver.somedomain.com/quickdb
The host and port specify the IP address or host name of the server and an optional port number. The database to
connect to is specified by an alias. This alias is a lowercase string defined in the server.properties file to
refer to an actual database on the file system of the server, or a transient, in-memory database on the server. The
following example lines in server.properties or webserver.properties define the database aliases
listed above, making them accessible to clients and referring to different file and in-memory databases.
The old form for the server URL, e.g., jdbc:hsqldb:hsql://localhost, connects to the same database as
the new form for the URL, jdbc:hsqldb:hsql://localhost/, where the alias is a zero-length string.
If the database URL contains a string in the form ${propname}, the sequence of characters is replaced with
the value of the system property with the given name. For example, you can use this in the URL of a database that is used in a web
application and define the system property "propname" in the web application properties. In the example below, the
string ${mydbpath} is replaced with the value of the property mydbpath:
jdbc:hsqldb:file:${mydbpath}
user (default: SA): user name
password
For compatibility with other engines, a non-standard form of specifying user and password is also supported. In
this form, user name and password appear at the end of the URL string, prefixed respectively with the question
mark and the ampersand:
jdbc:hsqldb:file:enrolments;create=false?user=aUserName&password=3xLVz
close_result (default: false)
This property is used for compatibility with the JDBC specification. When true (as the JDBC specification requires), a
ResultSet that was previously returned by executing a Statement or PreparedStatement is closed as
soon as the Statement is executed again.
The default is false, as previous versions of HSQLDB did not close the old result set. The user application should close
old result sets when they are no longer needed and should not rely on the auto-closing side effect of executing the
Statement.
Example below:
jdbc:hsqldb:hsql://localhost/enrolments;close_result=true
When a ResultSet is used inside a user-defined stored procedure, the default, false, is always used for this
property.
get_column_name (default: true)
This property is used for compatibility with other JDBC driver implementations. When true (the default),
ResultSet.getColumnName(int c) returns the underlying column name. This property can be specified
differently for different connections to the same database.
The default is true. When the property is false, the above method returns the same value as
ResultSet.getColumnLabel(int column). Example below:
jdbc:hsqldb:hsql://localhost/enrolments;get_column_name=false
When a ResultSet is used inside a user-defined stored procedure, the default, true, is always used for this
property.
allow_empty_batch (default: false)
This property is used for compatibility with other JDBC driver implementations such as the PostgreSQL driver. By
default PreparedStatement.executeBatch() throws an exception if addBatch() has not been called at
all. Setting this property to true ignores the empty batch and returns an empty int[]. This property can be specified
differently for different connections to the same database.
The default is false. Example below:
jdbc:hsqldb:hsql://localhost/enrolments;allow_empty_batch=true
When a PreparedStatement is used inside a user-defined stored procedure, the default, false, is always used
for this property.
ifexists (default: false)
This property has an effect only with mem: and file: databases. When true, a new database will not be created if one
does not already exist for the URL.
When the property is false (the default), a new mem: or file: database will be created if it does not exist.
Setting the property to true is useful when troubleshooting as no database is created if the URL is malformed.
Example below:
jdbc:hsqldb:file:enrolments;ifexists=true
create (default: true)
shutdown (default: false)
If this property is true, when the last connection to a database is closed, the database is automatically shut down.
The property takes effect only when the first connection is made to the database. This means the connection that
opens the database. It has no effect if used with subsequent connections.
This property has two uses. One is for test suites, where connections to the database are made from one JVM
context, immediately followed by another context. The other use is for applications where it is not easy to
configure the environment to shut down the database. Examples reported by users include web application servers,
where the closing of the last connection coincides with the web app being shut down.
jdbc:hsqldb:file:enrolments;shutdown=true
In addition, when the first connection to an in-process file: or mem: database creates a new database, all the user-defined database properties can be specified as URL properties. See the next section for details.
The table below lists database properties that can be used as part of the URL or in connection properties. For
each property that can also be set with an SQL statement, the statement is given. These statements are described
more extensively in the System Management chapter.
check_props (default: false)
If the property is true, every database property that is specified on the URL or in connection properties is checked
and if it is not used correctly, an error is returned.
this property cannot be set with an SQL statement
sql.enforce_names (default: false)
This property, when set true, prevents SQL keywords from being used for database object names such as columns and
tables.
SET DATABASE SQL NAMES { TRUE | FALSE }
Table 13.13. SQL Keyword Starting with the Underscore or Containing Dollar Characters
sql.regular_names (default: true)
This property, when set true, prevents database object names such as columns and tables beginning with the
underscore or containing the dollar character.
SET DATABASE SQL REGULAR NAMES { TRUE | FALSE }
sql.enforce_refs (default: false)
This property, when set true, causes an error when an SQL statement (usually a SELECT statement) contains column
references that can be resolved by more than one table name or alias. In effect, it forces such column references to
have a table name or table alias qualifier.
SET DATABASE SQL REFERENCES { TRUE | FALSE }
sql.enforce_size (default: true)
Conforms to SQL standards for size and precision of data types. When true, all VARCHAR column type
declarations require a size. When the property is false and there is no size in the declaration, a default size is used.
Note that all other types accept a declaration without a size, which is interpreted as a default size.
SET DATABASE SQL SIZE { TRUE | FALSE }
sql.enforce_types (default: false)
This property, when set true, causes an error when an SQL statement contains comparisons or assignments that are
non-standard due to type mismatch. Most illegal comparisons and assignments will cause an exception regardless
of this setting. This setting applies to a small number of comparisons and assignments that are possible, but not
standard conformant, and were allowed in previous versions of HSQLDB.
sql.enforce_tdc_delete (default: true)
The ON DELETE and ON UPDATE clauses of constraints cause data changes in rows in different tables or the
same table. When there are multiple constraints, a row may be updated by one constraint and deleted by another
constraint in the same operation. This is not allowed by default. Changing this property to false allows such
violations of the Standard to pass without an exception. Used for porting from database engines that do not enforce
the constraints.
SET DATABASE SQL TDC DELETE { TRUE | FALSE }
sql.enforce_tdc_update (default: true)
The ON DELETE and ON UPDATE clauses of foreign key constraints cause data changes in rows in different
tables or the same table. With multiple constraints, a field may be updated by two constraints and set to different
values. This is not allowed by default. Changing this property to false allows such violations of the Standard to
pass without an exception. Used for porting from database engines that do not enforce the constraints properly.
SET DATABASE SQL TDC UPDATE { TRUE | FALSE }
sql.longvar_is_lob (default: false)
This property, when set true, causes type declarations using LONGVARCHAR and LONGVARBINARY to be
translated to CLOB and BLOB respectively. By default, they are translated to VARCHAR and VARBINARY.
SET DATABASE SQL LONGVAR IS LOB { TRUE | FALSE }
sql.char_literal (default: true)
This property, when set false, sets the type of all string literals to VARCHAR, as opposed to CHARACTER. This
results in strings not being padded with spaces by CASE WHEN expressions.
SET DATABASE SQL CHARACTER LITERAL { TRUE | FALSE }
sql.concat_nulls (default: true)
This property, when set false, causes the concatenation of a null and a not null value to return the not null value. By
default, it returns null.
SET DATABASE SQL CONCAT NULLS { TRUE | FALSE }
sql.unique_nulls (default: true)
This property, when set false, causes multi-column unique constraints to be more restrictive for value sets that
contain a mix of null and not-null values.
SET DATABASE SQL UNIQUE NULLS { TRUE | FALSE }
sql.convert_trunc (default: true)
This property, when set false, causes type conversions from DOUBLE to any integral type to use rounding. By
default truncation is used.
SET DATABASE SQL CONVERT TRUNCATE { TRUE | FALSE }
sql.avg_scale (default: 0)
By default, the result of a division or an AVG or MEDIAN aggregate has the same type and scale as the aggregated
value. For INTEGER types, the scale is 0. When this property is set to a value other than the default 0, that
scale is used if it is greater than the scale of the divisor or aggregated value. This property does not affect
DOUBLE values. Values between 0 and 10 can be used for this property.
SET DATABASE SQL AVG SCALE <numeric value>
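For example, with a hypothetical table of INTEGER values:

```sql
SET DATABASE SQL AVG SCALE 2

-- With scale 2, AVG over INTEGER values can return a value such as 20.33
-- instead of the truncated integer result 20
SELECT AVG(age) FROM person;
```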
sql.double_nan (default: true)
This property, when set false, causes division of DOUBLE values by zero to return a Double.NaN value. By
default, an exception is thrown.
SET DATABASE SQL DOUBLE NAN { TRUE | FALSE }
sql.nulls_first (default: true)
By default, nulls appear before not-null values when a result set is ordered without specifying NULLS FIRST or
NULLS LAST. This property, when set false, causes nulls to appear by default after not-null values in result sets
with ORDER BY.
SET DATABASE SQL NULLS FIRST { TRUE | FALSE }
sql.nulls_order (default: true)
By default, when an ORDER BY clause that does not specify NULLS FIRST or NULLS LAST is used, nulls are
ordered according to the sql.nulls_first setting even when DESC is used after ORDER BY. This property,
when set false, causes nulls to appear in the opposite position when DESC is used.
SET DATABASE SQL NULLS ORDER { TRUE | FALSE }
sql.pad_space (default: true)
By default, when two strings are compared, the shorter string is padded with spaces before comparison. When this
property is set false, no padding takes place before comparison. Without padding, the shorter string is never equal
to the longer one.
Before version 2.0, HSQLDB used NO PAD comparison. If you need the old behaviour, use this property when
opening an older database.
SET DEFAULT COLLATION <collation name> [ NO PAD | PAD SPACE ]
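For example, to restore the pre-2.0 NO PAD comparison when opening an older database (SQL_TEXT is the name of the default collation):

```sql
SET DEFAULT COLLATION SQL_TEXT NO PAD
```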
sql.ignore_case (default: false): case-insensitive VARCHAR
When this property is set true, all VARCHAR declarations in CREATE TABLE and other statements are assigned
an Upper Case Comparison collation, SQL_TEXT_UCC. This is designed for compatibility with some databases
that use case-insensitive comparison. It is better to specify the collation selectively for specific columns that require
it.
SET DATABASE COLLATION SQL_TEXT_UCC
sql.live_object (default: false)
By default, when Java objects are stored in a column of type OTHER, the objects are serialized. Setting this
property to true results in the object being stored without serialization. This option is available in mem: databases
only.
SET DATABASE LIVE OBJECT
sql.syntax_db2 (default: false)
This property, when set true, allows compatibility with some aspects of this dialect.
SET DATABASE SQL SYNTAX DB2 { TRUE | FALSE }
sql.syntax_mss (default: false)
This property, when set true, switches the arguments of the CONVERT function and also allows compatibility with
some other aspects of this dialect.
SET DATABASE SQL SYNTAX MSS { TRUE | FALSE }
sql.syntax_mys (default: false)
This property, when set true, enables support for TEXT and AUTO_INCREMENT types and also allows
compatibility with some other aspects of this dialect.
SET DATABASE SQL SYNTAX MYS { TRUE | FALSE }
sql.syntax_ora (default: false)
This property, when set true, enables support for non-standard types. It also enables DUAL, ROWNUM,
NEXTVAL and CURRVAL syntax and allows compatibility with some other aspects of this dialect.
SET DATABASE SQL SYNTAX ORA { TRUE | FALSE }
sql.syntax_pgs (default: false)
This property, when set true, enables support for TEXT and SERIAL types. It also enables NEXTVAL,
CURRVAL and LASTVAL syntax and allows compatibility with some other aspects of this dialect.
SET DATABASE SQL SYNTAX PGS { TRUE | FALSE }
hsqldb.default_table_type (default: memory)
The CREATE TABLE command results in a MEMORY table by default. Setting the value cached for this property
will result in a cached table by default. The qualified forms such as CREATE MEMORY TABLE or CREATE
CACHED TABLE are not affected at all by this property.
hsqldb.tx (default: locks)
Indicates the transaction control mode for the database. The values, locks, mvlocks and mvcc are allowed.
SET DATABASE TRANSACTION CONTROL { LOCKS | MVLOCKS | MVCC }
hsqldb.tx_level (default: read_committed): database default transaction isolation level
Indicates the default transaction isolation level for each new session. The values, read_committed and serializable
are allowed. Individual sessions can change their isolation level.
SET DATABASE DEFAULT ISOLATION LEVEL { READ COMMITTED | SERIALIZABLE }
hsqldb.tx_conflict_rollback (default: true)
When a transaction deadlock or other unresolvable conflict is about to happen, the current transaction is rolled
back and an exception is raised. When this property is set false, the transaction is not rolled back. Only the latest
action that would cause the conflict is undone and an error is returned. The property should not be changed unless
the application can quickly perform an alternative statement and complete the transaction. It is provided for
compatibility with other database engines which do not roll back the transaction upon deadlock.
SET DATABASE TRANSACTION ROLLBACK ON CONFLICT { TRUE | FALSE }
hsqldb.translate_tti_types (default: true)
If the property is true, the TIME / TIMESTAMP WITH TIME ZONE types and INTERVAL types are represented
in JDBC methods of ResultSetMetaData and DatabaseMetaData as JDBC datetime types without time
zone and the VARCHAR type respectively. The original type names are preserved.
SET DATABASE SQL TRANSLATE TTI TYPES { TRUE | FALSE }
readonly (default: false)
This is a special property that can be added manually to the .properties file, or included in the URL or
connection properties. When this property is true, the database becomes readonly. This can be used with an existing
database to open it for readonly operation.
this property cannot be set with an SQL statement - it can be used in the .properties file
files_readonly (default: false)
This property is used similarly to the hsqldb.readonly property. When this property is true, CACHED and TEXT
tables are readonly but memory tables are not. Any change to the data is not persisted to database files.
this property cannot be set with an SQL statement - it can be used in the .properties file
hsqldb.large_data (default: false)
By default, up to 2 billion rows can be stored in disk-based CACHED tables. Setting this property to true increases
the limit to 256 billion rows. This property is used as a connection property.
this property cannot be set with an SQL statement - it can be used as a connection property for
the connection that opens the database
hsqldb.applog (default: 0)
The default level 0 indicates no logging. Level 1 results in minimal logging, including any failures. Level 2
indicates all events, including ordinary events. Level 3 adds details of some of the normal operations. The events
are logged in a file ending with ".app.log".
SET DATABASE EVENT LOG LEVEL { 0 | 1 | 2 | 3}
hsqldb.sqllog (default: 0)
The default level 0 indicates no logging. Level 1 logs only commits and rollbacks. Level 2 logs all the SQL
statements executed, together with their parameter values. Long statements and parameter values are truncated.
Level 3 is similar to Level 2 but does not truncate long statements and values. The events are logged in a file
ending with ".sql.log". This property applies to existing file: databases as well as new databases.
SET DATABASE EVENT LOG SQL LEVEL { 0 | 1 | 2 | 3}
hsqldb.result_max_memory_rows (default: 0)
This property can be set to specify how many rows of each result set or temporary table are stored in memory before
the table is written to disk. The default is zero, which means data is always stored in memory. If this setting is used, it
should be set above 1000.
SET DATABASE DEFAULT RESULT MEMORY ROWS <numeric value>
hsqldb.cache_free_count (default: 512)
The default indicates 512 unused spaces are kept for later use. The value can range between 0 - 8096.
When rows are deleted, the space is recovered and kept for reuse for new rows. If too many rows are deleted, the
smaller recovered spaces are lost and the largest ones are retained for later use. Normally there is no need to set this
property.
this property cannot be set with an SQL statement
hsqldb.cache_rows (default: 50000)
Indicates the maximum number of rows of cached tables that are held in memory.
The value can range between 100 and 4 billion. If the value is set via SET FILES CACHE ROWS, it becomes
effective after the next database SHUTDOWN.
SET FILES CACHE ROWS <numeric value>
hsqldb.cache_size (default: 10000)
Indicates the total size (in kilobytes) of rows in the memory cache used with cached tables. This size is calculated
as the binary size of the rows, for example an INTEGER is 4 bytes. The actual memory size used by the objects is
2 to 4 times this value. This depends on the types of objects in database rows, for example with binary objects the
factor is less than 2, with character strings, the factor is just over 2 and with date and timestamp objects the factor is
over 3.
The value can range between 100 KB - 4 GB. The default is 10,000, representing 10,000 kilobytes. If the value is
set via SET FILES then it becomes effective after the next database SHUTDOWN or CHECKPOINT.
SET FILES CACHE SIZE <numeric value>
hsqldb.cache_file_scale (default: 32)
The default value corresponds to a maximum size of 64 GB for the .data file. This can be increased to 64, 128,
256, 512, or 1024, resulting in up to 2 TB of storage. Settings below 32 in older databases are preserved until a
SHUTDOWN COMPACT.
SET FILES SCALE <numeric value>
hsqldb.lob_file_scale (default: 32)
The default value represents units of 32KB. When the average size of individual lobs in the database is smaller, a
smaller unit can be used to reduce the overall size of the .lobs file. Values 1, 2, 4, 8, 16, 32 can be used.
SET FILES LOB SCALE <numeric value>
hsqldb.lob_compressed (default: false)
The default value is false, indicating no compression. When the value is true at the time of creation of a new
database, blobs and clobs are stored as compressed parts.
SET FILES LOB COMPRESSED { TRUE | FALSE }
hsqldb.inc_backup (default: true)
During updates, the contents of the .data file are modified. When this property is true, the modified contents are
backed up gradually. This causes a marginal slowdown in operations, but allows fast checkpoint and shutdown.
When the property is false, the .data file is backed up entirely at the time of checkpoint and shutdown. Up to
version 1.8, HSQLDB supported only full backup.
SET FILES BACKUP INCREMENT { TRUE | FALSE }
hsqldb.lock_file (default: true)
By default, a lock file is created for each file database that is opened for read and write. This property can be
specified with the value false to prevent the lock file from being created. This usage is not recommended but
may be desirable when flash type storage is used. This property applies to existing file: databases as well as new
databases.
this property cannot be set with an SQL statement
hsqldb.log_data (default: true)
This property can be set to false when database recovery in the event of an unexpected crash is not necessary.
A database that is used as a temporary cache is an example. Regardless of the value of this property, if there is
a proper shutdown of the database, all the changed data is stored. A checkpoint or shutdown still rewrites the
.script file and saves the .backup file according to the other settings.
SET FILES LOG { TRUE | FALSE }
hsqldb.log_size (default: 50)
The value is the size (in megabytes) that the .log file can reach before an automatic checkpoint occurs. A
checkpoint rewrites the .script file and clears the .log file.
SET FILES LOG SIZE <numeric value>
hsqldb.defrag_limit (default: 0)
When a checkpoint is performed, the percentage of wasted space in the .data file is calculated. If the wasted
space is above the specified limit, a defrag operation is performed. The default is 0, which means no automatic
defrag is performed. The numeric value must be between 0 and 100 and is interpreted as a percentage of the current size of
the .data file.
SET FILES DEFRAG <numeric value>
hsqldb.script_format (default: 0)
If the property is set with the value 3, the .script file is stored in compressed format. This is useful for large script
files. The .script file is no longer readable when hsqldb.script_format=3 has been used.
This property cannot be set with an SQL statement
hsqldb.write_delay (default: true)
If the property is true, the default WRITE DELAY property of the database is used, which is 500 milliseconds. If
the property is false, the WRITE DELAY is set to 0 seconds. The log is written to file regardless of this property.
The property controls the fsync that forces the written log to be persisted to disk. The SQL command for this
property allows more precise control over the property.
SET FILES WRITE DELAY {{ TRUE | FALSE } | <seconds value> | <milliseconds value> MILLIS }
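Instances of this statement (the values are illustrative):

```sql
-- Equivalent to the default: a delay of 500 milliseconds
SET FILES WRITE DELAY TRUE

-- Force the log to disk after each commit
SET FILES WRITE DELAY FALSE

-- Explicit delays in seconds or milliseconds
SET FILES WRITE DELAY 2
SET FILES WRITE DELAY 100 MILLIS
```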
hsqldb.write_delay_millis (default: 500)
If the property is used, the WRITE DELAY property of the database is set to the given value in milliseconds. The
property controls the fsync that forces the written log to be persisted to disk. The SQL command for this property
allows the same level of control over the property.
SET FILES WRITE DELAY {{ TRUE | FALSE } | <seconds value> | <milliseconds value> MILLIS }
hsqldb.nio_data_file (default: true)
Setting this property to false will avoid the use of nio access methods, resulting in somewhat reduced speed.
If the data file is larger than hsqldb.nio_max_size (default 256MB) when it is first opened (or when its size is
increased), nio access methods are not used. Also, if the file gets larger than the amount of available computer
memory that needs to be allocated for nio access, non-nio access methods are used.
SET FILES NIO { TRUE | FALSE }
hsqldb.nio_max_size (default: 256)
The maximum size of the .data file, in megabytes, that can use the nio access method. When the file gets larger than
this limit, non-nio access methods are used. Values 64, 128, 256, 512, 1024, and larger multiples of 512 can be
used. The default is 256MB.
SET FILES NIO SIZE <numeric value>
hsqldb.full_log_replay (default: false)
The .log file is processed during recovery after a forced shutdown. Out-of-memory conditions always abort the
startup. Any other exception stops the processing of the .log file and, by default, the startup process continues.
If this property is true, the startup process is stopped if any exception occurs. Exceptions are usually caused by
incomplete lines of SQL statements near the end of the .log file, which were not fully synced to disk when an
abnormal shutdown occurred.
This property cannot be set with an SQL statement
textdb.*
Properties that override the database engine defaults for newly created text tables. Settings in the text table SET
<tablename> SOURCE <source string> command override both the engine defaults and the database
properties defaults. Individual textdb.* properties are listed in the Text Tables chapter.
runtime.gc_interval (default: 0)
This setting forces garbage collection each time a set number of result set row or cache row objects are created. The
default, "0" means no garbage collection is forced by the program.
SET DATABASE GC <numeric value>
Crypt Properties
Table 13.65. Crypt Property For LOBs
Name
Default
Description
crypt_lobs
false
encryption of lobs
If the property is true, the contents of the .lobs file is encrypted as well.
this property cannot be set with an SQL statement
crypt_key (default: none): encryption
crypt_provider (default: none): encryption
The fully-qualified class name of the cryptography provider. This property is not used for the default security
provider.
this property cannot be set with an SQL statement
crypt_type
Default: none
The cipher type, for example AES.
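Taken together, the crypt properties are normally supplied as connection properties appended to the database URL when the database is first created. A sketch follows; the key value and file path are illustrative placeholders, not a working key:

```
jdbc:hsqldb:file:mydb;crypt_key=604a6105889da65326bf35790a923932;crypt_type=AES;crypt_lobs=true
```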
When a connection to an in-process database creates a new database, or opens an existing database (i.e. it is the first
connection made to the database by the application), all the user-defined database properties listed in this section can
be specified as URL properties.
When HSQLDB is used with OpenOffice.org as an external database, the property "default_schema=true" must be set
on the URL, otherwise the program will not operate correctly as it does with its built-in hsqldb instance.
System Properties
A few system properties are used by HyperSQL. These are set on the Java command line or by calling
System.setProperty() from the user's program. They are not valid as URL or connection properties.
hsqldb.reconfig_logging
Default: true
Setting this system property to false avoids reconfiguring the framework logging system, such as Log4J or
java.util.logging. If the property does not exist or is true, reconfiguration takes place.
textdb.allow_full_path
Default: false
Setting this system property to true allows text table sources and files to be opened on all available paths. It also
allows pure mem: databases to open such files. By default, only the database directory and its subdirectories are
allowed. See the Text Tables chapter.
hsqldb.method_class_names
Default: none
This property needs to be set with the names (including wildcards) of Java classes that can be used for routines
based on Java static methods. See the SQL Invoked Routines chapter.
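The system properties above can be set with -D switches on the java command line, or programmatically before the first database is opened. A minimal sketch (the class name is illustrative; the property names come from the entries above):

```java
public class HsqldbSystemProps {
    public static void main(String[] args) {
        // Equivalent to: java -Dtextdb.allow_full_path=true -Dhsqldb.reconfig_logging=false ...
        // These calls must run before the first HyperSQL database is opened in this JVM.
        System.setProperty("textdb.allow_full_path", "true");
        System.setProperty("hsqldb.reconfig_logging", "false");
        System.out.println(System.getProperty("textdb.allow_full_path")); // prints "true"
    }
}
```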
Listeners
As described in the Running and Using HyperSQL chapter, network listeners (servers) provide connectivity to catalogs
from different JVM processes. The HyperSQL listeners support both ipv4 and ipv6 network addressing.
HyperSQL Server
This is the preferred way of running a database server and the fastest one. This mode uses the proprietary hsql:
communications protocol. The following example of the command for starting the server starts the server with one
(default) database with files named "mydb.*" and the public name (alias) of "xdb". Note the database property to set
the transaction mode to MVCC is appended to the database file path.
java -cp ../lib/hsqldb.jar org.hsqldb.server.Server --database.0 file:mydb;hsqldb.tx=mvcc --dbname.0 xdb
Alternatively, a server.properties file can be used for passing the arguments to the server. This file must be located
in the directory where the command is issued.
java -cp ../lib/hsqldb.jar org.hsqldb.server.Server
Alternatively, you can specify the path of the server.properties file on the command line. In this case, the properties
file can have any name or extension, but it should be a valid properties file.
java -cp ../lib/hsqldb.jar org.hsqldb.server.Server --props myserver.props
server.database.0
Default: file:test
The catalog type, path and file name of the first database file to use.

server.dbname.0
Default: "" (empty string)

server.database.n
No default.
The catalog type, path and file name of the n'th database file in use.

server.dbname.n
No default.

server.silent
Default: true

server.trace
Default: false

server.address
No default.
IP address of server.

server.tls
Default: false

server.daemon
Default: false
server.remote_open
Default: false
In HyperSQL version 2.0, each server can serve an unlimited number of databases simultaneously. The
server.database.0 property defines the filename / path whereas the server.dbname.0 defines the lowercase alias used
by clients to connect to that database. The digit 0 is incremented for the second database and so on. Values for
the server.database.n property can use the mem:, file: or res: prefixes and connection properties as discussed under
CONNECTIONS. For example,
database.0=mem:temp;sql.enforce_strict_size=true;
server.port
Default: 9001 (normal) or 554 (if TLS encrypted)

server.no_system_exit
Default: true
server.port

server.default_page
Default: index.html

server.root
Default: ./

.<extension>
No default.
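A server.properties file matching the description below can be sketched as follows; the three aliases come from the text, while the catalog paths are illustrative:

```
# Illustrative server.properties serving three catalogs
server.database.0=file:/opt/db/accounts
server.dbname.0=accounts
server.database.1=file:/opt/db/enrolments
server.dbname.1=enrolments
server.database.2=mem:adatabase
server.dbname.2=quickdb
```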
In the above example, the server.properties file indicates that the server provides access to 3 different
databases. Two of the databases are file-based, while the third is all-in-memory. The aliases for the databases that the
users connect to are accounts, enrolments and quickdb.
All the above properties and their values can be specified on the command line to start the server by omitting the
server. prefix. If a property/value pair is specified on the command line, it overrides the property value specified
in the server.properties or webserver.properties file.
Note
Upgrading: If you have existing custom properties files, change the values to the new naming convention.
Note the use of digits at the end of server.database.n and server.dbname.n properties.
The Server object has several alternative methods for setting databases and their public names. The server should be
shutdown using the shutdown() method.
The specified properties apply only to a new database. They have no effect on an existing database apart from a few
properties such as readonly listed in the Properties chapter.
TLS Encryption
Listener TLS Support (a. k. a. SSL)
Blaine Simpson, The HSQL Development Group
$Revision: 5639 $
2016-05-15 15:57:21-0400
This section explains how to encrypt the stream between JDBC network clients and HyperSQL Listeners. If you are
running an in-process (non-Listener) setup, this chapter does not apply to you.
Requirements
Hsqldb TLS Support Requirements
Java 4 and greater versions support JSSE.
A JKS keystore containing a private key, in order to run a Listener.
If you are running the listener side, then you'll need to run a HSQLDB Server or WebServer Listener instance. It
doesn't matter if the underlying database catalogs are new, and it doesn't matter if you are making a new Listener
configuration or encrypting an existing Listener configuration. (You can turn encryption on and off at will).
You need a HSQLDB jar file that was built with JSSE present. If you obtained your HSQLDB distribution from us,
you are all set, because we build with Java 1.4 or later (which contains JSSE).
Client-Side
Just use one of the following protocol prefixes.
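The TLS-capable prefixes are the secure variants of the normal client URLs, hsqls: in place of hsql: and https: in place of http:, along these lines:

```
jdbc:hsqldb:hsqls://hostname[:port]/dbalias
jdbc:hsqldb:https://hostname[:port]/dbalias
```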
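The export step that produces the server.cer file mentioned next can be sketched with keytool; the keystore path and alias here are illustrative:

```shell
# Export the Listener's certificate from its keystore
keytool -export -keystore server.store -alias existing_alias -file server.cer
```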
In this example, server.cer is the X509 certificate that you need for the next step.
Now, you need to add this cert to one of the system trust keystores or to a keystore of your own. See the
Customizing Stores section in JSSERefGuide.html [http://java.sun.com/javase/6/docs/technotes/guides/security/jsse/
JSSERefGuide.html#CustomizingStores] to see where your system trust keystores are. You can put private keystores
anywhere you want to. The following command will add the cert to an existing keystore, or create a new keystore if
client.store doesn't exist.
If you are making a new keystore, you probably want to start with a copy of your system default keystore which you
can find somewhere under your JAVA_HOME directory (typically jre/lib/security/cacerts for a JDK, but
I forget exactly where it is for a JRE).
Unless your OS can't stop other people from writing to your files, you probably do not want to set a password on
the trust keystore.
If you added the cert to a system trust store, then you are finished. Otherwise you will need to specify your
custom trust keystore to your client program. The generic way to set the trust keystore is to set the system property
javax.net.ssl.trustStore every time that you run your client program. For example
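A client launch along these lines illustrates the idea; the truststore path is a placeholder, and the trailing arguments are whatever your client program normally takes:

```shell
java -Djavax.net.ssl.trustStore=/path/to/client.store -jar /path/to/sqltool.jar tls
```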
This example runs the program SqlTool. SqlTool has built-in TLS support, however, so for SqlTool you can set
truststore on a per-urlid basis in the SqlTool configuration file.
Server-Side (Listener-Side)
Get yourself a JKS keystore containing a private key. Then set the properties server.tls,
system.javax.net.ssl.keyStore and system.javax.net.ssl.keyStorePassword in your
server.properties or webserver.properties file. Set server.tls to true,
system.javax.net.ssl.keyStore to the path of the private key JKS keystore, and
system.javax.net.ssl.keyStorePassword to the password of both the keystore and the private key
record (they must be the same). If you specify relative file path values, they will be resolved relative to
${user.dir} when the JRE is started.
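Putting the three settings together, the relevant server.properties fragment looks like this (path and password are placeholders):

```
server.tls=true
system.javax.net.ssl.keyStore=/path/to/server.store
system.javax.net.ssl.keyStorePassword=storepassword
```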
Caution
If you set any password in a .properties (or any other) file, you need to restrict access to the file. On a
good operating system, you can do this like so:
chmod 600 path/to/server.properties
The values and behavior of the system.* settings above match the usage documented for
javax.net.ssl.keyStorePassword and javax.net.ssl.keyStore in the JSSE docs.
Note
Before version 2.0, HyperSQL depended on directly setting the corresponding JSSE properties. The new
idiom is more secure and easier to manage. If you have an old password in a UNIX init script config
file, you should remove it.
CA-Signed Cert
I'm not going to tell you how to get a CA-signed SSL certificate. That is well documented at many other places.
Assuming that you have a standard pem-style private key certificate, here's how you can use openssl [http://
www.openssl.org] and the program DERImport to get it into a JKS keystore.
Because I have spent a lot of time on this document already, I am just giving you an example.
Important
Make sure to set the password of the key exactly the same as the password for the keystore!
You need the program DERImport.class of course. Do some internet searches to find DERImport.java or
DERImport.class and download it.
If DERImport has become difficult to obtain, I can write a program to do the same thing-- just let me know.
Non-CA-Signed Cert
Run man keytool or see the Creating a Keystore section of JSSERefGuide.html [http://java.sun.com/javase/6/
docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore].
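For testing, a self-signed keystore can be generated along these lines; the alias, store name, and validity period are illustrative, and keytool will prompt for the distinguished-name fields and for the passwords (which must match):

```shell
keytool -genkey -keyalg RSA -alias db -keystore server.store -validity 365
```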
allow 2001:db8::/32
You put your file wherever it is convenient for you, and specify that path with the property server.acl or
webserver.acl in your server.properties or webserver.properties file (depending on whether
your listener instance is a Server or WebServer). You can specify the ACL file path with an absolute or relative
path. If you use a relative path, it must be relative to the .properties file. It's often convenient to name the ACL
file acl.txt, in the same directory as your .properties file and specify the property value as just acl.txt.
This file name is intuitive, and things will continue to work as expected if you move or copy the entire directory.
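An ACL file consists of allow and deny rules, one per line; addresses that match no rule fall through to the implicit default deny rules mentioned below. A sketch, with illustrative addresses:

```
# Illustrative acl.txt: block one host, allow its subnet, allow an ipv6 range
deny  192.168.100.7
allow 192.168.100.0/24
allow 2001:db8::/32
```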
Warning
If your Server or WebServer was started with a *.acl property, changes afterwards to the ACL
file will be picked up immediately by your listener instance. You are advised to use the procedure below
to prevent partial edits or mistakes from crippling your running server.
When you edit your ACL file, it is both more convenient and more secure to test it as explained here before activating
it. You could, of course, test an ACL file by editing it in-place, then trying to connect to your listener with JDBC
clients from various source addresses. Besides being mightily laborious and boring, with this method it is very easy to
accidentally open access to all source addresses or to deny access to all users until you fix incorrect ACL entries.
The suggested method of creating or changing ACLs is to work with an inactive file (for new ACL files, just don't
enable the *.acl property yet; for changing an existing file, just copy it to a temporary file and edit the temporary
file). Then use the ServerAcl class to test it.
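The validation run can be sketched as follows; the jar path and candidate file name are illustrative:

```shell
java -cp path/to/hsqldb.jar org.hsqldb.server.ServerAcl /path/to/acl.txt.candidate
```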
If the specified ACL file fails validation, you will be given details about the problem. Otherwise, the validated rules
will be displayed (including the implicit, default deny rules). You then type in host names and addresses, one-per-line.
Each name or address is tested as if it were a HyperSQL network client address, using the same exact method that the
HyperSQL listener will use. (HyperSQL listeners use this same ServerAcl class to test incoming source addresses).
ServerAcl will report the rule which matches and whether access is denied or allowed to that address.
If you have edited a copy of an existing ACL file (as suggested above), then overwrite your live ACL file with your
new, validated ACL file. I.e., copy your temp file over top of your live ACL file.
ServerAcl can be run in the same exact way described above, to troubleshoot runtime access issues. If you use an
ACL file and a user or application can't get a connection to the database, you can run ServerAcl to quickly and
definitively find if the client is being prohibited by an ACL rule.
Purpose
This chapter explains how to quickly install, run, and use a HyperSQL Listener (aka Server) on UNIX.
Note that, unlike a traditional database server, there are many use cases where it makes sense to run HyperSQL without
any listener. This type of setup is called in-process, and is not covered here, since there is no UNIX-specific setup
in that case.
I intend to cover what I think is the most common UNIX setup: To run a multi-user, externally-accessible catalog with
permanent data persistence. (By the latter I mean that data is stored to disk so that the catalog data will persist across
process shutdowns and startups). I also cover how to run the Listener as a system daemon.
When I give sample shell commands below, I use commands which will work in Bourne-compatible shells, including
Bash and Korn. Users who insist on using the inferior C-shells will need to convert.
Installation
Go to http://sourceforge.net/projects/hsqldb and click on the "files" link. You want the current version. I can't be more
specific because SourceForge/Geeknet are likely to continue changing their interface. See if there's a distribution for
the current HSQLDB version in the format that you want.
If you want a binary package and we either don't provide it, or you prefer somebody else's build, you should still find out
the current version of HyperSQL available at SourceForge. It's very likely that you can find a binary package for your
UNIX variant with your OS distributor, http://www.jpackage.org/ , http://sunfreeware.com/ , etc. Nowadays, most
UNIXes have software package management systems which check Internet repositories. Just search the repositories for
"hsqldb" and "hypersql". The challenge is to find an up-to-date package. You will get better features and support if you
work with the current stable release of HyperSQL. (In particular, HyperSQL version 2.0.0 added tons of new features).
Pay attention to what JVM versions your binary package supports. Our builds (version 2.0 and later) document the
Java version it was built with in the file doc/index.html, but you can't depend on this if somebody else assembled
your distribution. Java jar files are generally compatible with the same or greater major versions. For example,if your
hsqldb.jar was built with Java 1.3.6-11, then it is compatible with Java versions 1.3.* and greater.
Note
It could very well happen that some of the file formats which I discuss below are not in fact offered. If
so, then we have not gotten around to building them.
Binary installation depends on the package format that you downloaded.
Installing from a .pkg.Z file
This package is only for use by a Solaris super-user. It's a System V package.
Download then uncompress the package with uncompress or gunzip
uncompress filename.pkg.Z
HyperSQL on UNIX
You're on your own. I find everything much easier when I install software to
BSD without their package management systems.
Just skip this section if you know how to install an RPM. If you found the RPM
using a software management system, then just have it install it. The remainder
of this item explains a generic command-line method which should work with any
Linux variant. After you download the rpm, you can read about it by running
rpm -qip /path/to/file.rpm
as root. Suse users may want to keep Yast aware of installed packages by
running rpm through Yast: yast2 -i /path/to/file.rpm.
Installing from a .zip file
Extract the zip file in an ancestor directory of the new HSQLDB home. You
don't need to create the HSQLDB_HOME directory because the extraction will
create a version-labelled directory, and the subdirectory "hsqldb". This "hsqldb"
directory is your HSQLDB_HOME, and you can move it to wherever you wish.
If you will be upgrading or maintaining multiple versions of HyperSQL, you
will want to retain the version number in the directory tree somehow.
cd ancestor/of/new/hsqldb/home
unzip /path/to/file.zip
All the files in the zip archive will be extracted to underneath a new subdirectory
named like hsqldb-2.0.2a/hsqldb.
Take a look at the files you installed. (Under hsqldb for zip file installations. Otherwise, use the utilities for
your packaging system). The most important file of the HyperSQL system is hsqldb.jar, which resides in the
subdirectory lib. Depending on who built your distribution, your file name may have a version label in it, like
hsqldb-1.2.3.4.jar.
Important
For the purposes of this chapter, I define HSQLDB_HOME to be the parent directory of the lib directory that
contains hsqldb.jar. E.g., if your path to hsqldb.jar is /a/b/hsqldb/lib/hsqldb.jar,
then your HSQLDB_HOME is /a/b/hsqldb.
Furthermore, unless I state otherwise, all local file paths that I give are relative to the HSQLDB_HOME.
If the description of your distribution says that the hsqldb.jar file will work for your Java version, then you are
finished with installation. Otherwise you need to build a new hsqldb.jar file.
If you followed the instructions above and you still don't know what Java version your hsqldb.jar supports, then
try reading documentation files like readme.txt, README.TXT, INSTALL.txt etc. (As I said above, our newer
distributions always document the Java version for the build, in the file doc/index.html). If that still doesn't help,
then you can just try your hsqldb.jar and see if it works, or build your own.
To use the supplied hsqldb.jar, just skip to the next section. Otherwise, build your own
hsqldb.jar.
If you don't already have Ant, download the latest stable binary version from http://ant.apache.org . cd to where
you want Ant to live, and extract from the archive with
unzip /path/to/file.zip
or
tar -xzf /path/to/file.tar.gz
or
bunzip2 -c /path/to/file.tar.bz2 | tar -xzf -
Everything will be installed into a new subdirectory named apache-ant-<version>. You can rename the
directory after the extraction if you wish.
2. Set the environmental variable JAVA_HOME to the base directory of your Java JRE or SDK, like
export JAVA_HOME; JAVA_HOME=/usr/java/j2sdk1.4.0
The location is entirely dependent upon your variety of UNIX. Sun's rpm distributions of Java normally install
to /usr/java/something. Sun's System V package distributions of Java (including those that come with
Solaris) normally install to /usr/something, with a sym-link from /usr/java to the default version (so
for Solaris you will usually set JAVA_HOME to /usr/java).
3. cd to HSQLDB_HOME/build. Make sure that the bin directory under your Ant home is in your search path.
4. Run the following command.
ant hsqldb
Select a UNIX user to run the database process (JVM) as. If this database is for the use of multiple users, or is a
production system (or to emulate a production system), you should dedicate a UNIX user for this purpose. In my
examples, I use the user name hsqldb. In this chapter, I refer to this user as the HSQLDB_OWNER, since that
user will own the database catalog files and the JVM processes.
If the account doesn't exist, then create it. On all system-5 UNIXes and most hybrids (including Linux), you can
run (as root) something like
useradd -c 'HSQLDB Database Owner' -s /bin/bash -m hsqldb
server.database.0=file:db0/db0
# I suggest that, for every file: catalog you define, you add the
# connection property "ifexists=true" after the database instance
# is created (which happens simply by starting the Server one time).
# Just append ";ifexists=true" to the file: URL, like so:
# server.database.0=file:db0/db0;ifexists=true
Since the value of the first database (server.database.0) begins with file:, the catalog will be persisted to a set
of files in the specified directory with names beginning with the specified name. Set the path to whatever you
want (relative paths will be relative to the directory containing the properties file). You can read about how to
specify other catalogs of various types, and how to make settings for the listen port and many other things in
other chapters of this guide.
3. Set and export the environmental variable CLASSPATH to the value of HSQLDB_HOME (as described above)
plus "/lib/hsqldb.jar", like
export CLASSPATH; CLASSPATH=/path/to/hsqldb/lib/hsqldb.jar
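With CLASSPATH exported as above, the startup described next can be sketched as follows; the directory path is illustrative and should be wherever your server.properties resides:

```shell
cd /path/to/server/home        # directory containing server.properties
nohup java org.hsqldb.server.Server &
```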
This will start the Listener process in the background, and will create your new database catalog "db0". Continue
on when you see the message containing HSQLDB server... is online. nohup just makes sure that the
command will not quit when you exit the current shell (omit it if that's what you want to do).
Copy the file sample/sqltool.rc to the HSQLDB_OWNER's home directory. Use chmod to make the file
readable and writable only to HSQLDB_OWNER.
# $Id: sqltool.rc 5288 2013-09-29 18:35:42Z unsaved $
# This is a sample RC configuration file used by SqlTool, DatabaseManager,
# and any other program that uses the org.hsqldb.lib.RCData class.
# See the documentation for SqlTool for various ways to use this file.
# If you have the least concerns about security, then secure access to
# your RC file.
# You can run SqlTool right now by copying this file to your home directory
# and running
#     java -jar /path/to/sqltool.jar mem
# This will access the first urlid definition below in order to use a
# personal Memory-Only database.
# "url" values may, of course, contain JDBC connection properties, delimited
# with semicolons.
# As of revision 3347 of SqlFile, you can also connect to datasources defined
# here from within an SqlTool session/file with the command "\j urlid".
# You can use Java system property values in this file like this:
#     ${user.home}
# You could use the thick driver instead of the thin, but I know of no reason
# why any Java app should.
#urlid cardiff2
#url jdbc:oracle:thin:@aegir.admc.com:1521:TRAFFIC_SID
# Thin SID URLs must specify both port and SID, there are no defaults.
# Oracle listens to 1521 by default, so that's what you will usually specify.
# But can alternatively use global service name (not tnsnames.ora service
# alias, in which case the port does default to 1521):
#url jdbc:oracle:thin:@centos.admc.com/tstsid.admc
#username blaine
#password secretpassword
#driver oracle.jdbc.OracleDriver
#
#
#
#
#
#
#
#urlid tls
#url jdbc:hsqldb:hsqls://db.admc.com:9001/lm2
#username BLAINE
#password asecret
#truststore ${user.home}/ca/db/db-trust.store
#username myuser
#password hiddenpwd
# Template for Microsoft SQL Server database using the JTDS Driver
# http://jtds.sourceforge.net Jar file has name like "jtds-1.2.5.jar".
# Port defaults to 1433.
# MSDN implies instances are port-specific, so can specify port or instname.
#urlid nlyte
#username myuser
#password hiddenpwd
#url jdbc:jtds:sqlserver://myhost/nlyte;instance=MSSQLSERVER
# Where database is 'nlyte' and instance is 'MSSQLSERVER'.
# N.b. this is diff. from MS tools and JDBC driver where (depending on which
# document you read), instance or database X are specified like HOSTNAME\X.
#driver net.sourceforge.jtds.jdbc.Driver
# Template for a Sybase database
#urlid sybase
#url jdbc:sybase:Tds:hostname:4100/dbname
#username blaine
#password hiddenpwd
# This is for the jConnect driver (requires jconn3.jar).
#driver com.sybase.jdbc3.jdbc.SybDriver
# Template for Embedded Derby / Java DB.
#urlid derby1
#url jdbc:derby:path/to/derby/directory;create=true
#username ${user.name}
#password any_noauthbydefault
#driver org.apache.derby.jdbc.EmbeddedDriver
# The embedded Derby driver requires derby.jar.
# There's also the org.apache.derby.jdbc.ClientDriver driver with URL
# like jdbc:derby://<server>[:<port>]/databaseName, which requires
# derbyclient.jar.
# You can use \= to commit, since the Derby team decided (why???)
# not to implement the SQL standard statement "commit"!!
# Note that SqlTool can not shut down an embedded Derby database properly,
# since that requires an additional SQL connection just for that purpose.
# However, I've never lost data by not shutting it down properly.
# Other than not supporting this quirk of Derby, SqlTool is miles ahead of ij.
We will be using the "localhost-sa" sample urlid definition from the config file. The JDBC URL for this urlid is
jdbc:hsqldb:hsql://localhost. That is the URL for the default catalog of a HyperSQL Listener running
on the default port of the local host. You can read about URLs to connect to other catalogs with and without listeners
in other chapters of this guide.
Run SqlTool.
java -jar path/to/sqltool.jar localhost-sa
If you get a prompt, then all is well. If security is of any concern to you at all, then you should change the privileged
password in the database. Use the SET PASSWORD command to change SA's password.
SET PASSWORD 'newpassword';
Note
If, like most UNIX System Administrators, you often need to make up strong passwords, I highly suggest
the great little program pwgen [https://sourceforge.net/projects/pwgen/]. You
can probably get it where you get your other OS packages. The command pwgen -1 is usually all
you need.
Note that with SQL-conformant databases like HyperSQL 2.0, user names and passwords are case sensitive. If you don't
quote the name, it will be interpreted as upper-case, like any named SQL object. (Only for backwards compatibility,
we do make an exception for the special user name SA, but you should always use upper-case "SA" nevertheless).
When you're finished playing, exit with the command \q.
If you changed the SA password, then you need to update the password in the sqltool.rc file accordingly.
You can, of course, also access the database with any JDBC client program. You will need to modify your classpath
to include hsqldb.jar as well as your client class(es). You can also use the other HSQLDB client programs, such
as org.hsqldb.util.DatabaseManagerSwing, a graphical client with a similar purpose to SqlTool.
You can use any normal UNIX account to run the JDBC clients, including SqlTool, as long as the account has
read access to the sqltool.jar file and to an sqltool.rc file. See the Utilities Guide about where to put
sqltool.rc, how to execute sql files, and other SqlTool features.
Shutdown
Do a clean database shutdown when you are finished with the database catalog. You need to connect up as SA or some
other Admin user, of course. With SqlTool, you can run
java -jar path/to/sqltool.jar --sql 'shutdown;' localhost-sa
You don't have to worry about stopping the Listener because it shuts down automatically when all served database
catalogs are shut down.
The main purpose of the init script is to start up a Listener for the database catalogs specified in your
server.properties file; and to gracefully shut down these same catalogs. For each catalog defined by
a server.database.X setting in your .properties file, you must define an administrative "urlid" in your
sqltool.rc (these are used to access the catalogs for validation and shutdown purposes). Finally, you list the urlid
names in your init script config file. If, due to firewall issues, you want to run a WebServer instead of a Server, then
make sure you have a healthy WebServer with a webserver.properties set up, adjust your URLs in sqltool.rc, and
set TARGET_CLASS in the config file.
By following the commented examples in the config file, you can start up any number of Server and/or WebServer
listener instances with or without TLS encryption, and each listener instance can serve any number of HyperSQL
catalogs (independent data sets), all with optimal efficiency from a single JVM process. There are instructions in
the init script itself about how to run multiple, independently-configured JVM processes. Most UNIX installations,
however, will run a single JVM with a single Listener instance which serves multiple catalogs, for easier management
and more efficient resource usage.
After you have the init script set up, root can use it anytime to start or stop HSQLDB. (I.e., not just at system bootup
or shutdown).
2. View your server.properties file. Make a note of every catalog defined by a server.database.X
setting. A couple of steps down, you will need to set up administrative access for each of these catalogs. If you are
using our sample server.properties file, you will just need to set up access for the catalog specified
with file:db0/db0.
Note
Pre-2.0 versions of the hsqldb init script required use of .properties settings of the
form server.urlid.X. These settings are obsolete and should be removed.
3. Either copy HSQLDB_OWNER's sqltool.rc file into root's home directory, or set the value of AUTH_FILE
to the absolute path of HSQLDB_OWNER's sqltool.rc file. This file is read directly by root, even if you run
hsqldb as non-root (by setting HSQLDB_OWNER in the config file). If you copy the file, make sure to use chmod
to restrict permissions on the new copy. The init script will abort with an appropriate exhortation if you have the
permissions set incorrectly.
You need to set up a urlid stanza in your sqltool.rc file for network access (i.e. JDBC URL with hsql:, hsqls:,
http:, or https:) for each catalog in your server.properties file. For our example, you need to define a
stanza for the file:db0/db0 catalog. You must supply, for this catalog, an hsql: JDBC URL, an administrative
user name, and the password.
4. Look at the comment towards the top of the init script which lists recommended locations for the configuration
file for various UNIX platforms. Copy the sample config file sample/hsqldb.cfg to one of the listed
locations (your choice). Edit the config file according to the instructions in it. For our example, you will set the
value of URLIDS to localhostdb1, since that is the urlid name that we used in the sqltool.rc file.
# $Id: hsqldb.cfg 3583 2010-05-16 01:49:52Z unsaved $
# Sample configuration file for HyperSQL Server Listener.
# See the "HyperSQL on UNIX" chapter of the HyperSQL User Guide.
# N.b.!!!! You must place this in the right location for your type of UNIX.
# See the init script "hsqldb" to see where this must be placed and
# what it should be renamed to.
# This file is "sourced" by a Bourne shell, so use Bourne shell syntax.
# This file WILL NOT WORK until you set (at least) the non-commented
# variables to the appropriate values for your system.
# Life will be easier if you avoid all filepaths with spaces or any other
# funny characters. Don't ask for support if you ignore this advice.
# The URLIDS setting below is new and REQUIRED. This setting replaces the
# server.urlid.X settings which used to be needed in your Server's
# properties file.
# -- Blaine (blaine dot simpson at admc dot com)
JAVA_EXECUTABLE=/usr/bin/java
# Unless you copied the jar files from another system, this typically
# resides at $HSQLDB_HOME/lib/sqltool.jar, where $HSQLDB_HOME is your HSQLDB
# software base directory.
# The file name may actually have a version label in it, like
# sqltool-1.2.3.jar (in which case, you must specify the full name here).
# A 'hsqldb.jar' file (with or without version label) must reside in the same
# directory as the specified sqltool.jar file.
SQLTOOL_JAR_PATH=/opt/hsqldb-2.0.0/hsqldb/lib/sqltool.jar
# For the sample value above, there must also exist a file
# /opt/hsqldb-2.0.0/hsqldb/lib/hsqldb*.jar.
# Where the file "server.properties" or "webserver.properties" resides.
SERVER_HOME=/opt/hsqldb-2.0.0/hsqldb/data
# What UNIX user the server will run as.
# (The shutdown client is always run as root or the invoker of the init script).
# Runs as root by default, but you should take the time to set database file
# ownerships to another user and set that user name here.
HSQLDB_OWNER=hsqldb
HyperSQL on UNIX
For startup or shutdown failures, you can save a lot of debugging time by
temporarily adjusting down MAX_START_SECS and MAX_TERMINATE_SECS to a
little over what it should take for successful startup and shutdown on
your system.
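For example, if a healthy startup takes about 10 seconds and shutdown about 5 on your machine, temporary values like these (numbers illustrative) keep a failed attempt from stalling the script:

```
MAX_START_SECS=15
MAX_TERMINATE_SECS=10
```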
# parameters if you run multiple instances of this class, since you can use the
# server/webserver.properties file for a single instance.
# Every additional class (in addition to the TARGET_CLASS)
# must be preceded with an empty string, so that MainInvoker will know
# you are giving a class name. MainInvoker will invoke the normal
# static main(String[]) method of each such class.
# By default, MainInvoker will just run TARGET_CLASS with no args.
# Example that runs just the TARGET_CLASS with the specified arguments:
#INVOC_ADDL_ARGS='-silent false'
#but use server.properties property instead!
# Example that runs the TARGET_CLASS plus a WebServer:
#INVOC_ADDL_ARGS='"" org.hsqldb.server.WebServer'
# Note the empty string preceding the class name.
# Example that starts TARGET_CLASS with an argument + a WebServer +
# your own application with its args (i.e., the HSQLDB Servers are
# "embedded" in your application). (Set SERVER_ADDL_CLASSPATH too).:
#INVOC_ADDL_ARGS='-silent false "" org.hsqldb.server.WebServer "" com.acme.Stone --env prod localhost'
#
# but use server.properties for -silent option instead!
# Example to run a non-TLS server in same JVM with a TLS server. In this
# case, TARGET_CLASS is Server which will run both in TLS mode by virtue of
# setting the tls, keyStore, and keyStorePassword settings in
# server*.properties, as described below; plus an "additional" Server with
# overridden 'tls' and 'port' settings:
#INVOC_ADDL_ARGS="'' org.hsqldb.server.Server --port 9002 --tls false"
# This is an important use case. If you run more than one Server instance,
# you can specify different parameters for each here, even though only one
# server.properties file is supported.
# Note that you use nested quotes to group arguments and to specify the
# empty-string delimiter.
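The nested-quoting convention can be checked with plain Bourne shell word-splitting; this sketch only approximates how the init script ultimately hands the words to MainInvoker:

```shell
# Split the example argument string the way the shell does: the '' entry
# survives as a real empty-string argument, which MainInvoker treats as
# the delimiter before the next class name.
set -- '' org.hsqldb.server.Server --port 9002 --tls false
echo "argument count: $#"       # prints: argument count: 6
[ -z "$1" ] && echo "first argument is the empty-string delimiter"
```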
#
# Any JVM args for the invocation of the JDBC client used to verify DB
# instances and to shut them down (SqlToolSprayer).
# Server-side System Properties should normally be set with system.*
# settings in the server/webserver.properties file.
# This example specifies the location of a private trust store for TLS
# encryption.
# For multiple args, put quotes around entire value.
# If you are starting just a TLS_encrypted Listener, you need to uncomment
# this so the init scripts uses TLS to connect.
# If using a private keystore, you also need to set "truststore" settings in
# the sqltool.rc file.
#CLIENT_JVMARGS=-Djavax.net.debug=ssl
# This sample value displays useful debugging information about TLS/SSL.
# Any JVM args for the server.
# For multiple args, put quotes around entire value.
#SERVER_JVMARGS=-Xmx512m
# You can set the "javax.net.debug" property on the server side here, in the
# same exact way as shown for the client side above.
Just run
/path/to/hsqldb
as root to see the arguments you may use. Notice that you can run
/path/to/hsqldb status
at any time to check whether your HSQLDB server is running.
Tell your OS to run the init script upon system startup and shutdown. If you are using a UNIX variant that has
/etc/rc.conf or /etc/rc.conf.local (like BSD variants and Gentoo), you must set "hsqldb_enable"
to "YES" in either of those files. (Just run cd /etc; ls rc.conf rc.conf.local to see if you have
one of these files). For good UNIXes that use System V style init, you must set up hard links or soft links either
manually or with management tools (such as chkconfig or insserv) or GUIs (like run-level editors).
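For System V setups, the start/kill links follow an S##name/K##name naming convention. This sketch builds the layout in a scratch directory so it is safe to run anywhere; the sequence numbers 87 and 13 are just examples, and on a real system the links live under /etc/rc?.d:

```shell
# Build a miniature System V runlevel layout in /tmp (safe to run).
mkdir -p /tmp/svdemo/init.d /tmp/svdemo/rc3.d
touch /tmp/svdemo/init.d/hsqldb
# S links start the service on entering the runlevel; K links stop it.
ln -sf ../init.d/hsqldb /tmp/svdemo/rc3.d/S87hsqldb
ln -sf ../init.d/hsqldb /tmp/svdemo/rc3.d/K13hsqldb
ls /tmp/svdemo/rc3.d
```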
This paragraph is for Mac OS X users only. If you followed the instructions above, your init script should reside at
/Library/StartupItems/hsqldb/hsqldb. Now copy the file StartupParameters.plist from
the directory src/org.hsqldb/sample of your HSQLDB distribution to the same directory as the init
script. As long as these two files reside in /Library/StartupItems/hsqldb, your init script is active
(for portability reasons, it doesn't check for a setting in /etc/hostconfig). You can run it as a Startup Item
by running
SystemStarter {start|stop|restart} Hsqldb
Hsqldb is the service name. See the man page for SystemStarter. To disable the init script, wipe out the /Library/StartupItems/hsqldb directory. Hard to believe, but the Mac people tell me that during system shutdown the Startup Items don't run at all. Therefore, if you don't want your data corrupted, make sure to run "SystemStarter stop Hsqldb" before shutting down your Mac.
Follow the examples in the config file to add additional classes to the server JVM's classpath and to execute additional
classes in your JVM. (See the SERVER_ADDL_CLASSPATH and INVOC_ADDL_ARGS items).
sh -x path/to/hsqldb start
See the man page for sh if you don't know the difference between -v and -x.
If you want troubleshooting help, use the HSQLDB lists/forums. Make sure to include the revision number from your
hsqldb init script (it's towards the top in the line that starts like "# $Id:"), and the output of a run of
sh -x path/to/hsqldb start > /tmp/hstart.log 2>&1
Upgrading
This section is for users who are using our UNIX init script, and who are upgrading their HyperSQL installation.
Most users will not have customized the init script itself, and your customizations will all be encapsulated in the
init script configuration file. These users should just overwrite their init script with a new one from the HyperSQL
installation, and manually merge config file settings. First, just copy the file /sample/hsqldb.init over top
of your init script (wherever it runs from). Then update your old config file according to the instructions in the new
config file template at sample/hsqldb.cfg. You will have to change very few settings. If you are upgrading from
a pre-2.0 installation to a post-2.0 installation, you will need to (1) add the setting URLIDS, as described above and
in the inline comments, and (2) replace variable HSQLDB_JAR_PATH with SQLTOOL_JAR_PATH which (if you
haven't guessed) should be set to the path to your sqltool.jar file.
Users who customized their init script will need to merge their customizations into the new init script.
Deployment Guide
Disk Space
With a file: database, the engine uses the disk for storage of data and any changes. For safety, the engine backs up the
data internally during operation. Spare space, at least equal to the combined size of the .data and .script files, is needed. The .lobs
file is not backed up during operation, as this is not necessary for safety.
A common error made by users in load-test simulations is to use a single client machine to open and close thousands
of connections to a HyperSQL server instance. The connection attempts will fail after a few thousand because of OS
restrictions on opening sockets and the delay that is built into the OS in closing them.
the .properties file of the database. The tests can then modify the database, but these modifications are not persisted
after the tests have completed.
Databases with "files_readonly=true" can be placed within the classpath and in a jar file. In this case, the connection
URL must use the res: protocol, which treats the database as a resource.
(Use ; instead of : to delimit classpath elements on Windows.) The empty string separates your com.your.main.App
invocation from the org.hsqldb.server.Server invocation.
Specify the same in-process JDBC URL to your app and in the server.properties file. You can then connect
to the database from outside using a JDBC URL like jdbc:hsqldb:hsql://hostname, while connecting from
inside the application using something like jdbc:hsqldb:file:<filepath of database> .
This tactic can be used to run off-the-shelf server applications with an embedded HyperSQL Server, without doing
any coding.
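A minimal sketch of the pairing (database name and file path are illustrative):

```
# server.properties read by the embedded org.hsqldb.server.Server:
server.database.0=file:/opt/app/data/mydb
server.dbname.0=mydb

# Your application opens the catalog in-process with:
#   jdbc:hsqldb:file:/opt/app/data/mydb
# External clients reach the same catalog through the Server with:
#   jdbc:hsqldb:hsql://hostname/mydb
```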
MainInvoker can be used to run any number of Java class main method invocations in a single JVM. See the API
spec for MainInvoker for details on its usage.
taken from the JVM memory allocation, therefore there is no need to increase the -Xmx parameter of the JVM. If not
enough memory is available for the specified value, nio is not used.
Server Databases
Running databases in a HyperSQL server is the best overall method of access. As the JVM process is separate from
the application, this method is the most reliable as well as the most accessible method of running databases.
Upgrading Databases
Any database that is not produced with the release version of HyperSQL 2.0 must be upgraded to this version.
Open the database with the jar that created it and perform the SHUTDOWN statement as an SQL statement.
The first step is to guarantee there is no .log file for the database. When upgrading an application that has been deployed
on a large scale, it is sometimes not practical to perform the first step of this procedure (with the old jar). You can
skip the first step, but you may lose part of the database statements that are stored in the .log file. Therefore you
need to test with databases created with your application to make sure typical statements that are logged in the .log
file are compatible with the new version. Examples of known incompatible statements are some DDL statements used
for changing the data type or default values of columns.
A note about SHUTDOWN modes. SHUTDOWN COMPACT is equivalent to SHUTDOWN SCRIPT plus opening
the database and then performing a simple SHUTDOWN.
After upgrading a database, you may want to change some of its settings. For example, the new SET FILES BACKUP
INCREMENT TRUE statement can improve the shutdown and checkpoint times of larger databases.
Once a database is upgraded to 2.0, it can no longer be used with previous versions of HyperSQL.
Procedure 16.2. Upgrade Using the SCRIPT Procedure for Very Old Versions
1.
2. Issue the SCRIPT command, for example SCRIPT 'newversion.script', to create a script file containing a copy of the database.
3.
4. Copy the original *.properties file into newversion.properties in the same directory as newversion.script.
5. Try to open the new database newversion using DatabaseManager of version 1.8.1.
6. If there is any inconsistency in the data, the script line number is reported on the console and the opening process is aborted. Edit and correct any problems in the newversion.script before attempting to open again. Use the guidelines in the next section (Manual Changes to the .script File). Use a programming editor that is capable of handling very large files and does not wrap long lines of text.
BY DEFAULT AS IDENTITY PRIMARY KEY, DAT VARCHAR(20)). This last form is the correct way of
defining both autoincrement and primary key in versions 1.8 and 2.0.
CREATE ALIAS is now obsolete. Use the new function definition syntax. The org.hsqldb.Library class
no longer exists. You should use the SQL form of the old library functions. For example, use LOG(x) rather than
the direct form, "org.hsqldb.Library.log"(x).
The names of some commands for changing database and session properties have changed. See the list of statements
in this chapter.
Computed columns in SELECT statements which did not have an alias: These columns had no ResultMetaData
label in version 1.8, but in version 2.0, the engine generates labels such as C1, C2.
The issue with the JDBC ResultSetMetaData methods getColumnName(int column) and
getColumnLabel(int column) has been clarified by the JDBC 4 specification. getColumnName() returns
the underlying column name, while getColumnLabel() returns any specified or generated alias. HyperSQL
1.8 and 2.0 have a connection property, get_column_name, which defaults to true in version 2.0, but defaulted
to false in some releases of version 1.8.x. You have to explicitly specify this property as false if you want the non-standard behaviour of getColumnName() returning the same value as getColumnLabel().
can use the snapshot jars where you would normally include a dependency to a release jar as a Maven artifact. The
HyperSQL Snapshot repository resides at http://hsqldb.org/repos/
Limitation of Classifiers
Classifiers are incompatible with real repository snapshots. Builders can only publish one jar variant per
product, and at this time our snapshot jars are always built debug-enabled with Java 6.
Where you insert the <repository> element depends on whether you want the definition to be personal, shared, or
project-specific, so see the Maven documentation about that. But you can paste this element verbatim:
If you want to use an ivy.xml file with a Gradle build, you will need to use the Ivyxml Gradle Plugin [https://github.com/unsaved/gradle-ivyxml-plugin]. It just takes a few lines of code in your build.gradle file to
hook in ivyxml. See the Ivyxml documentation [https://github.com/unsaved/gradle-ivyxml-plugin/raw/master/README.txt] to see exactly how.
Range Versioning
Keeping up-to-date with Range Dependencies
I give no example here of specifying a classifier in ivy.xml because I have so far failed to get that to succeed.
Classifiers in ivy.xml are supported if using Gradle, as covered below.
ABS ALL ALLOCATE ALTER AND ANY ARE ARRAY AS ASENSITIVE ASYMMETRIC AT
ATOMIC AUTHORIZATION AVG
BEGIN BETWEEN BIGINT BINARY BLOB BOOLEAN BOTH BY
CALL CALLED CARDINALITY CASCADED CASE CAST CEIL CEILING CHAR CHAR_LENGTH
CHARACTER CHARACTER_LENGTH CHECK CLOB CLOSE COALESCE COLLATE COLLECT
COLUMN COMMIT COMPARABLE CONDITION CONNECT CONSTRAINT CONVERT CORR
CORRESPONDING COUNT COVAR_POP COVAR_SAMP CREATE CROSS CUBE CUME_DIST
CURRENT CURRENT_CATALOG CURRENT_DATE CURRENT_DEFAULT_TRANSFORM_GROUP
CURRENT_PATH CURRENT_ROLE CURRENT_SCHEMA CURRENT_TIME CURRENT_TIMESTAMP
CURRENT_TRANSFORM_GROUP_FOR_TYPE CURRENT_USER CURSOR CYCLE
DATE DAY DEALLOCATE DEC DECIMAL DECLARE DEFAULT DELETE DENSE_RANK
DEREF DESCRIBE DETERMINISTIC DISCONNECT DISTINCT DO DOUBLE DROP DYNAMIC
EACH ELEMENT ELSE ELSEIF END END_EXEC ESCAPE EVERY EXCEPT EXEC EXECUTE
EXISTS EXIT EXP EXTERNAL EXTRACT
FALSE FETCH FILTER FIRST_VALUE FLOAT FLOOR FOR FOREIGN FREE FROM FULL
FUNCTION FUSION
GET GLOBAL GRANT GROUP GROUPING
HANDLER HAVING HOLD HOUR
IDENTITY IN INDICATOR INNER INOUT INSENSITIVE INSERT INT INTEGER INTERSECT
INTERSECTION INTERVAL INTO IS ITERATE
JOIN
LAG
LANGUAGE LARGE LAST_VALUE LATERAL LEAD LEADING LEAVE LEFT LIKE
LIKE_REGEX LN LOCAL LOCALTIME LOCALTIMESTAMP LOOP LOWER
Lists of Keywords
MATCH MAX MAX_CARDINALITY MEMBER MERGE METHOD MIN MINUTE MOD MODIFIES
MODULE MONTH MULTISET
NATIONAL NATURAL NCHAR NCLOB NEW NO NONE NORMALIZE NOT NTH_VALUE
NTILE NULL NULLIF NUMERIC
OCCURRENCES_REGEX OCTET_LENGTH OF OFFSET OLD ON ONLY OPEN OR ORDER
OUT OUTER OVER OVERLAPS OVERLAY
PARAMETER PARTITION PERCENT_RANK PERCENTILE_CONT PERCENTILE_DISC POSITION
POSITION_REGEX POWER PRECISION PREPARE PRIMARY PROCEDURE
RANGE RANK READS REAL RECURSIVE REF REFERENCES REFERENCING REGR_AVGX
REGR_AVGY REGR_COUNT REGR_INTERCEPT REGR_R2 REGR_SLOPE REGR_SXX REGR_SXY
REGR_SYY RELEASE REPEAT RESIGNAL RESULT RETURN RETURNS REVOKE RIGHT
ROLLBACK ROLLUP ROW ROW_NUMBER ROWS
SAVEPOINT SCOPE SCROLL SEARCH SECOND SELECT SENSITIVE SESSION_USER SET
SIGNAL SIMILAR SMALLINT SOME SPECIFIC SPECIFICTYPE SQL SQLEXCEPTION SQLSTATE
SQLWARNING SQRT STACKED START STATIC STDDEV_POP STDDEV_SAMP SUBMULTISET
SUBSTRING SUBSTRING_REGEX SUM SYMMETRIC SYSTEM SYSTEM_USER
TABLE TABLESAMPLE THEN TIME TIMESTAMP TIMEZONE_HOUR TIMEZONE_MINUTE
TO TRAILING TRANSLATE TRANSLATE_REGEX TRANSLATION TREAT TRIGGER TRIM
TRIM_ARRAY TRUE TRUNCATE
UESCAPE UNDO UNION UNIQUE UNKNOWN UNNEST UNTIL UPDATE UPPER USER USING
VALUE VALUES VAR_POP VAR_SAMP VARBINARY VARCHAR VARYING
WHEN WHENEVER WHERE WIDTH_BUCKET WINDOW WITH WITHIN WITHOUT WHILE
YEAR
Purpose
From version 2.0, the supplied hsqldb.jar file is built with Java 1.6. If you want to run with a 1.5 JVM, or if you
want to use an alternative jar (hsqldb-min.jar, etc.), you must build the desired jar with a Java SDK.
The Gradle task / Ant target explainjars reports the versions of Java and Ant actually used.
Rare Gotcha
Depending on your operating system, version, and how you installed your JDK, Gradle may not be
able to find the JDK. Gradle will inform you if this happens. The easiest way to fix this problem is to
set the environment variable JAVA_HOME to the root directory where your Java SDK is installed. (See the
previous note for justification.) So as not to get bogged down in the details here, if you don't know how
to set an environment variable, I ask you to utilize a search engine.
If you do use and enjoy Gradle, then I urge you to make the product better by registering a free account for
yourself at the Gradle Jira site [http://issues.gradle.org/] and vote for critical usability issues like GRADLE-427
[http://issues.gradle.org/browse/GRADLE-427], GRADLE-1855 [http://issues.gradle.org/browse/GRADLE-1855],
GRADLE-1870 [http://issues.gradle.org/browse/GRADLE-1870], and GRADLE-1871
[http://issues.gradle.org/browse/GRADLE-1871], to help to improve the product.
1. Start up Windows Explorer. Depending on your Windows version, it will be in the Start Menu, or in the menu you get when you right-click Start.
2. Navigate Windows Explorer to the build directory within your HyperSQL installation.
3. Find an icon or line (depending on your Windows Explorer view) for the file gradle-gui.cmd. If there is no listing for gradle-gui.cmd, but two listings for gradle-gui, then you want the one signified by text, icon, or mouse-over tooltip, as a batch or CMD file. Double-click this item.
1. From Eclipse, use pulldown menu Run / External Tools / External Tools Configurations....
2. Right-click Program in the left navigator panel and select New. (Depending on the state of your workspace, instead of New in the context-sensitive menu, there may be a New_configuration or similar item nested under Program, in which case you should select that.)
3. To the right, change the value in the Name: field to HSQLDB Gradle (or whatever name you want for this launcher config; this Gradle launcher is only for your HSQLDB project).
4.
5. For the Location: field, use the Browse Workspace... button to navigate to and select the gradle-gui.cmd (Windows) or gradle-gui (other) file in the build directory of your HyperSQL project.
Click the Run button. The Gradle Gui should run. (If you just Apply and Close here instead of Run, the new
Gradle launch item will not be added to the pulldown and toolbar menus.)
After doing the Eclipse setup, you can use pulldown menu Run / External Tools or the equivalent toolbar button
to launch the Gradle Gui.
1. Get a command-line shell. Windows users can use either Start/Run... or Start/Start Search, and enter "cmd". Non-Windows users will know how to get a shell.
2. In the shell, cd to the build directory under the root directory where you extracted or installed HyperSQL to. (Operating system search or find functions can be used if you can't find it quickly by poking around on the command line or with Windows Explorer, etc.)
3. Windows users can ignore this step. UNIX shell users should ensure that the current directory (.) is in their search path, or prefix their gradlew or gradle-gui command in the next step with ./ (e.g., like ./gradlew).
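The reason for the ./ prefix is that the shell only searches the directories listed in PATH, never the current directory by default. A throwaway script (created in /tmp purely for the demonstration) shows the rule:

```shell
# Create a trivial executable script and run it with an explicit path.
cd /tmp
printf '#!/bin/sh\necho ok\n' > demo.sh
chmod +x demo.sh
./demo.sh      # prints: ok
# Running plain "demo.sh" would fail unless "." is in your PATH.
```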
4. In the shell, run either gradle-gui for a graphical build; or gradlew for a text-based build.
The gradle-gui file is our own wrapper script for gradlew --gui. Be aware that both gradle-gui and
gradlew --gui suffer from the limitation that the --gui switch is mutually exclusive with most or all other
arguments (including tasks). I have registered GRADLE bugs 1861 and 1863 about this.
Using Gradle
Using Text-based Gradle
If you ran just gradlew or gradlew.bat, then you will be presented with simple instructions for how to do
everything that you want to do. Basically, you will run the same gradlew or gradlew.bat command repeatedly,
with different switches and arguments.
Note
Gradle's -v switch reports version details more directly than the explainjars task does, from the
operating system version to the Groovy version (the language interpreter used for Gradle instructions).
1. It takes the Gradle Gui a while to start up, because, similar to an IDE, it is generating a list of details about available tasks.
2. In the main window, in the top panel, with the Task Tree tab selected, you have the list of public tasks, sorted alphabetically. The bottom panel displays the output of the last task(s) execution. (After startup it will show the output of the task tasks.)
3. Scroll to the help task and click it once to select it, then click the green Execute toolbar button above. (You could also have double-clicked the item, but you can use the selection procedure to pick multiple tasks with the Control or Shift keys to execute multiple tasks in a single run, and the tasks will execute in the same order that you had selected them.)
4. Scroll through and read the output of the help task in the bottom panel. Where this help screen speaks about verbosity switches, you can accomplish the same thing by using the Setup tab. Whenever Gradle output (in the bottom panel) talks about running gradlew <sometask>..., you can execute the specified task(s) by selecting and executing them like we just did.
Obtaining Ant
Ant is a part of the Jakarta/Apache Project.
Home of the Apache Ant project [http://ant.apache.org]
The Installing Ant [http://ant.apache.org/manual/install.html#installing] page of the Ant Manual
[http://ant.apache.org/manual]. Follow the directions for your platform.
This displays the available ant targets, which you can supply as command line arguments to ant. These include
hsqldb
explainjars
Lists all targets which build jar files, with an explanation of the purposes of the different jars.
clean
clean-all
javadoc
to build javadoc
hsqldbmain
to build a smaller jar for HSQLDB that does not contain utilities
hsqljdbc
to build an extremely small jar containing only the client-side JDBC driver (can connect only to a HyperSQL Server).
hsqldbmin
to build a small jar that supports in-process catalogs, but neither running nor connecting to HyperSQL Servers.
sqltool
...
Many more targets are available. Run ant -p and ant explainjars.
HSQLDB can be built in any combination of two JRE (Java Runtime Environment) versions and many jar file sizes.
A jar built with an older JRE is compatible for use with a newer JRE (you can compile with Java 1.5 and run with
1.6), but the newer JDBC capabilities of the JRE will not be available.
The client jar (hsqljdbc.jar) contains only the HSQLDB JDBC Driver client. The smallest engine jar
(hsqldbmin.jar) contains the engine and the HSQLDB JDBC Driver client. The default size (hsqldb.jar) also
contains server mode support and the utilities. The largest size (hsqldbtest.jar) includes some test classes as
well. Before building the hsqldbtest.jar package, you should download the junit jar from http://www.junit.org
and put it in the /lib directory, alongside servlet.jar, which is included in the .zip package.
If you want your code built for high performance, as opposed to debugging (in the same way that we make our
production distributions), make a file named build.properties in your build directory with the contents
build.debug: false
The resulting Java binaries will be faster and smaller, at the cost of exception stack traces not identifying source code
locations (which can be extremely useful for debugging).
After installing Ant on your system use the following command from the /build directory. Just run ant
explainjars for a concise list of all available jar files.
ant explainjars
The command displays a list of different options for building different sizes of the HSQLDB Jar. The default is built
using:
Example B.1. Building the standard Hsqldb jar file with Ant
ant hsqldb
The Ant method always builds a jar with the JDK that is used by Ant and specified in its JAVA_HOME environment
variable.
you should first run the Gradle task (or Ant target) before compiling and remove from the source directories a few
source files that are specific to Java 6 (these are listed in the build.xml file).
Hsqldb CodeSwitcher
CodeSwitcher is a tool to manage different versions of Java source code. It allows HyperSQL to be compiled for different
JDKs. It is something like a precompiler in C, but it works directly on the source code and does not create intermediate
output or extra files.
CodeSwitcher is used internally in the Ant build. You do not have to invoke it separately to compile HyperSQL.
CodeSwitcher reads the source code of a file, removes comments where appropriate and comments out the blocks
that are not used for a particular version of the file. This operation is done for all files of a defined directory, and all
subdirectories.
The '.' means the program works on the current directory (all subdirectories are processed recursively). -JAVA2 means
the code labelled with JAVA2 must be switched off.
pProperties.save(out,"hsqldb database");
//#endif
...
For detailed information on the command line options run java org.hsqldb.util.CodeSwitcher. Usage
examples can be found in the build.xml file in the /build directory.
Building Documentation
The JavaDoc can be built simply by invoking the javadoc task/target with Gradle or Ant.
The two Guides (the one you are reading now plus the Utilities user guide) are in DocBook XML source format. To
rebuild to PDF or one of the HTML output formats from the XML source, run the Gradle target gen-docs (or the
Ant target gen-docs). Instructions will be displayed. In particular:
Obtain the HyperSQL documentation source. We no longer include our Guide source files in our main distribution
zip file, in order to keep it small. You may want to build from the trunk branch or the latest release tag. You can
download a static snapshot tarball from http://hsqldb.svn.sourceforge.net/viewvc/hsqldb/base/trunk/ or under
http://hsqldb.svn.sourceforge.net/viewvc/hsqldb/base/tags/ , or you can export a snapshot or check out a work area using
a Subversion client.
You must locally install the DocBook set of image files, which are available for download from Sourceforge. The
gen-docs task/target will tell you of a Gradle task that you can use to download and install them automatically.
This Gradle task, installDbImages, will tell you how to edit a properties text file to tell it what directory to
install the files into. (Command-line, as opposed to GUI, builders, can use the Gradle -P switch to set the property,
instead of editing, if they prefer).
You can optionally install the entire DocBook style sheets (instead of just the DocBook images within it), character
entity definitions, and RNG schema file, to speed up doc build times and minimize dependency of future builds
upon network or Internet. An intermediate approach would be to install these resources onto an HTTP server or
shared network drive of your own. See the comments at the top of the file build.xml in the HyperSQL build
directory about where to obtain these things and how to hook them in. The same Gradle task installDbImages
explained above can download and install the entire stylesheet bundle (this option is offered the first time that you
run the installDbImages task).
Tip
If running Gradle, you probably want to turn logging up to level info for generation and validation tasks,
because the default warn/lifecycle level doesn't give much feedback.
The task/target validate-docs is also very useful to DocBook builders.
The documentation license does not allow you to post modifications to our guides, but you can modify them for internal
use by your organization, and you can use our DocBook system to write new DocBook documents related or unrelated
to HyperSQL. To create new DocBook documents, create a subdirectory off of doc-src for each new document,
with the main DocBook source file within having the same name as the directory plus .xml. See the peer directory util-guide or guide as an example. If you use the high-level tasks/targets gen-docs or validate-docs, then copy
and paste to add new stanzas to these targets in file build.xml.
Editors of DocBook documents (see previous paragraph for motive) may find it useful to have a standalone XML
validator so you can do your primary editing without involvement of the build system. Use the Gradle target
standaloneValidation for this. It will tell you how to set a build property to tell it where to install the validator,
and will give instructions on how to use it.
There are several properties that can be used to dramatically decrease run times for partial doc builds. Read about these
properties in the comments at the top of the file build-docbook.xml in the build directory.
validation.skip
html.skip
chunk.skip
fo.skip
pdf.skip
doc.name
doc.target
See the file doc-src/readme-docauthors.txt for details about our DocBook build system (though as I write
this it is somewhat out of date).
The only current limitation is that OpenOffice only works with the PUBLIC schema. This limitation will hopefully
disappear in future versions of OOo.
There will hopefully be an HSQLDB 2.x jar in future versions of OpenOffice.
Note
If you are reading this document with a standalone PDF reader, only the http://hsqldb.org/doc/2.0/... links
will function.
Local: ../verbatim/src/org/hsqldb/test/TestBase.java
http://hsqldb.org/doc/2.0/verbatim/src/org/hsqldb/test/TestBase.java
Local: ../verbatim/src/org/hsqldb/Trigger.java
http://hsqldb.org/doc/2.0/verbatim/src/org/hsqldb/Trigger.java
Local: ../verbatim/src/org/hsqldb/sample/TriggerSample.java
http://hsqldb.org/doc/2.0/verbatim/src/org/hsqldb/sample/TriggerSample.java
Local: ../verbatim/src/org/hsqldb/util/MainInvoker.java
http://hsqldb.org/doc/2.0/verbatim/src/org/hsqldb/util/MainInvoker.java
Local: ../verbatim/sample/hsqldb.cfg
http://hsqldb.org/doc/2.0/verbatim/sample/hsqldb.cfg
Local: ../verbatim/sample/acl.txt
http://hsqldb.org/doc/2.0/verbatim/sample/acl.txt
Local: ../verbatim/sample/server.properties
http://hsqldb.org/doc/2.0/verbatim/sample/server.properties
Local: ../verbatim/sample/sqltool.rc
http://hsqldb.org/doc/2.0/verbatim/sample/sqltool.rc
Local: ../verbatim/sample/hsqldb.init
http://hsqldb.org/doc/2.0/verbatim/sample/hsqldb.init
SQL Index
Symbols
_SYSTEM ROLE, 95
A
ABS function, 189
ACOS function, 189
ACTION_ID function, 209
ADD_MONTHS function, 197
ADD COLUMN, 66
add column identity generator, 67
ADD CONSTRAINT, 66
ADD DOMAIN CONSTRAINT, 70
ADMINISTRABLE_ROLE_AUTHORIZATIONS, 81
aggregate function, 124
ALL and ANY predicates, 120
ALTER COLUMN, 67
alter column identity generator, 68
alter column nullability, 68
ALTER DOMAIN, 69
ALTER INDEX, 77
ALTER routine, 73
ALTER SEQUENCE, 75
ALTER SESSION, 39
ALTER TABLE, 66
ALTER USER ... SET INITIAL SCHEMA, 98
ALTER USER ... SET LOCAL, 98
ALTER USER ... SET PASSWORD, 97
ALTER view, 69
APPLICABLE_ROLES, 81
ASCII function, 183
ASIN function, 189
ASSERTIONS, 81
ATAN2 function, 189
ATAN function, 189
AUTHORIZATION IDENTIFIER, 95
AUTHORIZATIONS, 82
B
BACKUP DATABASE, 219
BETWEEN predicate, 119
binary literal, 108
BINARY types, 15
BIT_LENGTH function, 183
BITAND function, 189
BITANDNOT function, 190
bit literal, 108
BITNOT function, 190
BITOR function, 190
BIT types, 16
BITXOR function, 190
C
CARDINALITY function, 203
CASCADE or RESTRICT, 55
case expression, 113
CASEWHEN function, 204
CASE WHEN in routines, 160
CAST, 114
CEIL function, 189
CHANGE_AUTHORIZATION, 95
CHAR_LENGTH, 183
CHARACTER_LENGTH, 183
CHARACTER_SETS, 82
character literal, 107
CHARACTER types, 14
character value function, 116
CHAR function, 183
CHECK_CONSTRAINT_ROUTINE_USAGE, 82
CHECK_CONSTRAINTS, 82
CHECK constraint, 63
CHECKPOINT, 220
COALESCE expression, 113
COALESCE function, 204
COLLATE, 126
COLLATIONS, 82
COLUMN_COLUMN_USAGE, 82
COLUMN_DOMAIN_USAGE, 82
COLUMN_PRIVILEGES, 82
COLUMN_UDT_USAGE, 82
column definition, 59
column name list, 132
column reference, 111
COLUMNS, 82
COMMENT, 56
COMMIT, 41
comparison predicate, 118
CONCAT_WS function, 184
CONCAT function, 183
CONSTRAINT, 126
CONSTRAINT_COLUMN_USAGE, 82
CONSTRAINT_TABLE_USAGE, 82
CONSTRAINT (table constraint), 61
CONSTRAINT name and characteristics, 61
CONTAINS predicate, 122
contextually typed value specification, 111
CONVERT function, 204
COS function, 190
COT function, 190
CREATE_SCHEMA ROLE, 95
CREATE AGGREGATE FUNCTION, 171
CREATE ASSERTION, 79
CREATE CAST, 77
CREATE CHARACTER SET, 78
CREATE COLLATION, 78
CREATE DOMAIN, 69
CREATE FUNCTION, 148
CREATE INDEX, 77
CREATE PROCEDURE, 148
CREATE ROLE, 98
CREATE SCHEMA, 57
CREATE SEQUENCE, 74
CREATE SYNONYM, 76
CREATE TABLE, 58
CREATE TRANSLATION, 79
CREATE TRIGGER, 70, 179
CREATE TYPE, 77
CREATE USER, 97
CREATE VIEW, 68
CROSS JOIN, 133
CRYPT_KEY function, 206
CURDATE function, 194
CURRENT_CATALOG function, 207
CURRENT_DATE function, 194
CURRENT_ROLE function, 207
CURRENT_SCHEMA function, 207
CURRENT_TIME function, 194
CURRENT_TIMESTAMP function, 194
CURRENT_USER function, 207
CURRENT VALUE FOR, 115
CURTIME function, 195
D
DATA_TYPE_PRIVILEGES, 82
DATABASE_ISOLATION_LEVEL function, 208
DATABASE_NAME function, 207
DATABASE_TIMEZONE function, 194
DATABASE_VERSION function, 207
DATABASE function, 207
DATE_ADD function, 199
DATE_SUB function, 199
DATEADD function, 199
DATEDIFF function, 199
datetime and interval literal, 109
Datetime Operations, 20
DATETIME types, 19
datetime value expression, 116
datetime value function, 116
DAYNAME function, 195
DAYOFMONTH function, 195
DAYOFWEEK function, 195
DAYOFYEAR function, 195
DAYS function datetime, 195
DBA ROLE, 95
E
ELEMENT_TYPES, 83
ENABLED_ROLES, 83
EQUALS predicate, 122
EXISTS predicate, 121
EXP function, 190
expression, 113
external authentication, 218
EXTERNAL NAME, 150
EXTRACT function, 197
F
FLOOR function, 190
FOREIGN KEY constraint, 62
G
generated column specification, 59
GET DIAGNOSTICS, 146
GRANTED BY, 99
GRANT privilege, 99
GRANT role, 100
GREATEST function, 205
GROUP BY, 136
GROUPING OPERATIONS, 136
H
HEXTORAW function, 184
HOUR function, 196
I
identifier chain, 110
identifier definition, 54
IDENTITY function, 206
IF EXISTS, 55
IF NOT EXISTS, 56
IFNULL function, 205
IF STATEMENT, 161
INFORMATION_SCHEMA_CATALOG_NAME, 83
IN predicate, 119
INSERT function, 184
INSERT INTO, 142
INSTR function, 184
interval absolute value function, 117
interval term, 116
INTERVAL types, 22
IS_AUTOCOMMIT function, 208
IS_READONLY_DATABASE_FILES function, 208
IS_READONLY_DATABASE function, 208
IS_READONLY_SESSION function, 208
IS DISTINCT predicate, 123
ISNULL function, 205
IS NULL predicate, 120
ISOLATION_LEVEL function, 208
J
JOIN USING, 133
JOIN with condition, 133
K
KEY_COLUMN_USAGE, 83
L
LANGUAGE, 150
LAST_DAY function, 198
LATERAL, 131
M
MATCH predicate, 121
MAX_CARDINALITY function, 203
MERGE INTO, 144
MINUTE function, 196
MOD function, 191
MONTH function, 196
MONTHNAME function, 196
MONTHS_BETWEEN function, 198
N
name resolution, 135
naming in joined table, 135
naming in select list, 135
NATURAL JOIN, 133
NEXT VALUE FOR, 114
NOW function, 194
NULLIF function, 206
NULL INPUT, 151
numeric literal, 108
NUMERIC types, 12
numeric value expression, 115
numeric value function, 115
NVL2 function, 206
NVL function, 206
O
OCTET_LENGTH function, 185
ON UPDATE clause, 61
OTHER type, 17
OUTER JOIN, 134
OVERLAPS predicate, 122
OVERLAY function, 186
P
PARAMETERS, 83
password complexity, 218
PATH, 126
PI function, 191
POSITION_ARRAY function, 203
POSITION function, 186
POWER function, 191
PRECEDES predicate, 123
PRIMARY KEY constraint, 61
PUBLIC ROLE, 95
Q
QUARTER function, 196
R
RADIANS function, 191
RAND function, 191
RAWTOHEX function, 186
REFERENTIAL_CONSTRAINTS, 83
REGEXP_MATCHES function, 186
REGEXP_REPLACE function, 186
REGEXP_SUBSTRING_ARRAY function, 187
REGEXP_SUBSTRING function, 186
RELEASE SAVEPOINT, 41
RENAME, 56
REPEAT ... UNTIL loop in routines, 159
REPEAT function, 187
REPLACE function, 187
RESIGNAL STATEMENT, 163
RETURN, 162
RETURNS, 148
REVERSE function, 187
REVOKE, 100
REVOKE ROLE, 100
RIGHT function, 187
ROLE_AUTHORIZATION_DESCRIPTORS, 83
ROLE_COLUMN_GRANTS, 83
ROLE_ROUTINE_GRANTS, 83
ROLE_TABLE_GRANTS, 83
ROLE_UDT_GRANTS, 83
ROLE_USAGE_GRANTS, 83
ROLLBACK, 41
ROLLBACK TO SAVEPOINT, 41
ROUND function datetime, 200
ROUND number function, 192
ROUTINE_COLUMN_USAGE, 83
ROUTINE_JAR_USAGE, 83
ROUTINE_PRIVILEGES, 84
ROUTINE_ROUTINE_USAGE, 84
ROUTINE_SEQUENCE_USAGE, 84
ROUTINE_TABLE_USAGE, 84
routine body, 149
S
SAVEPOINT, 41
SAVEPOINT LEVEL, 151
schema routine, 72
SCHEMATA, 84
SCRIPT, 220
search condition, 126
SECOND function, 196
SECONDS_SINCE_MIDNIGHT function, 196
SELECT, 128
SELECT : SINGLE ROW, 158
SEQUENCE_ARRAY function, 203
SEQUENCES, 84
SESSION_ID function, 208
SESSION_ISOLATION_LEVEL function, 208
SESSION_TIMEZONE function, 193
SESSION_USER function, 207
SESSIONTIMEZONE function, 193
SET AUTOCOMMIT, 39
SET CATALOG, 43
set clause in UPDATE and MERGE statements, 144
SET CONSTRAINTS, 40
SET DATABASE AUTHENTICATION FUNCTION, 237
SET DATABASE COLLATION, 221
SET DATABASE DEFAULT INITIAL SCHEMA, 98
SET DATABASE DEFAULT ISOLATION LEVEL, 224
SET DATABASE DEFAULT RESULT MEMORY ROWS, 221
SET DATABASE DEFAULT TABLE TYPE, 222
SET DATABASE EVENT LOG LEVEL, 222
SET DATABASE GC, 223
SET DATABASE LIVE OBJECT, 230
SET DATABASE PASSWORD CHECK FUNCTION, 236
SET DATABASE SQL AVG SCALE, 228
SET DATABASE SQL CHARACTER LITERAL, 227
SET DATABASE SQL CONCAT NULLS, 227
SET DATABASE SQL CONVERT TRUNCATE, 228
SET DATABASE SQL DOUBLE NAN, 229
SET DATABASE SQL IGNORECASE, 229
SET DATABASE SQL NAMES, 224
SET DATABASE SQL NULLS FIRST, 229
SET DATABASE SQL NULLS ORDER, 229
SHUTDOWN, 219
SIGNAL STATEMENT, 162
SIGN function, 192
SIN function, 192
SORT_ARRAY function, 203
sort specification list, 140
SOUNDEX function, 187
SPACE function, 188
SPECIFIC, 56
SPECIFIC NAME, 150
SQL_FEATURES, 84
SQL_IMPLEMENTATION_INFO, 84
SQL_PACKAGES, 84
SQL_PARTS, 84
SQL_SIZING, 84
SQL_SIZING_PROFILES, 84
SQL DATA access characteristic, 151
SQL parameter reference, 111
SQL procedure statement, 76
SQL routine body, 149
SQRT function, 192
START TRANSACTION, 39
string value expression, 116
SUBSTR function, 188
SUBSTRING function, 188
SUCCEEDS predicate, 123
SYSDATE function, 195
SYSTEM_BESTROWIDENTIFIER, 86
SYSTEM_CACHEINFO, 86
SYSTEM_COLUMN_SEQUENCE_USAGE, 86
SYSTEM_COLUMNS, 86
SYSTEM_COMMENTS, 86
SYSTEM_CONNECTION_PROPERTIES, 86
SYSTEM_CROSSREFERENCE, 86
SYSTEM_INDEXINFO, 86
SYSTEM_PRIMARYKEYS, 86
SYSTEM_PROCEDURECOLUMNS, 86
SYSTEM_PROCEDURES, 86
SYSTEM_PROPERTIES, 86
SYSTEM_SCHEMAS, 87
SYSTEM_SEQUENCES, 87
SYSTEM_SESSIONINFO, 87
SYSTEM_SESSIONS, 87
SYSTEM_TABLES, 87
SYSTEM_TABLESTATS, 87
SYSTEM_TABLETYPES, 87
SYSTEM_TEXTTABLES, 87
SYSTEM_TYPEINFO, 87
SYSTEM_UDTS, 87
SYSTEM_USER function, 207
SYSTEM_USERS, 87
SYSTEM_VERSIONCOLUMNS, 87
SYSTIMESTAMP function, 195
SYS USER, 95
T
TABLE_CONSTRAINTS, 84
TABLE_PRIVILEGES, 85
Table Function Derived Table, 132
TABLES, 84
TAN function, 192
TIMESTAMP_WITH_ZONE function, 201
TIMESTAMPADD function, 198
TIMESTAMPDIFF function, 198
TIMESTAMP function, 200
Time Zone, 19
TIMEZONE function, 193
TO_CHAR function, 201
TO_DATE function, 201
TO_NUMBER function, 192
TO_TIMESTAMP function, 201
TODAY function, 195
TRANSACTION_CONTROL function, 209
TRANSACTION_ID function, 209
TRANSACTION_SIZE function, 209
transaction characteristics, 40
TRANSLATE function, 188
TRANSLATIONS, 85
TRIGGER_COLUMN_USAGE, 85
TRIGGER_ROUTINE_USAGE, 85
TRIGGER_SEQUENCE_USAGE, 85
TRIGGER_TABLE_USAGE, 85
TRIGGERED_UPDATE_COLUMNS, 85
TRIGGERED SQL STATEMENT, 180
TRIGGER EXECUTION ORDER, 180
TRIGGERS, 85
TRIM_ARRAY function, 203
TRIM function, 188
TRUNCATE function, 192
TRUNCATE SCHEMA, 142
TRUNCATE TABLE, 141
TRUNC function datetime, 200
TRUNC function numeric, 192
U
UCASE function, 188
unicode escape elements, 106
UNION JOIN, 133
UNIQUE constraint, 61
UNIQUE predicate, 121
UNIX_MILLIS function, 196
UNIX_TIMESTAMP function, 197
UNNEST, 131
UPDATE, 143
UPPER function, 189
USAGE_PRIVILEGES, 85
USER_DEFINED_TYPES, 85
USER function, 207
V
value expression, 115
value expression primary, 111
value specification, 112
VIEW_COLUMN_USAGE, 85
VIEW_ROUTINE_USAGE, 85
VIEW_TABLE_USAGE, 85
VIEWS, 85
W
WEEK function, 197
WHILE loop in routines, 159
WIDTH_BUCKET function, 193
Y
YEAR function, 197
General Index
Symbols
_SYSTEM ROLE, 95
A
ABS function, 189
ACL, 268
ACOS function, 189
ACTION_ID function, 209
ADD_MONTHS function, 197
ADD COLUMN, 66
add column identity generator, 67
ADD CONSTRAINT, 66
ADD DOMAIN CONSTRAINT, 70
ADMINISTRABLE_ROLE_AUTHORIZATIONS, 81
aggregate function, 124
ALL and ANY predicates, 120
ALTER COLUMN, 67
alter column identity generator, 68
alter column nullability, 68
ALTER DOMAIN, 69
ALTER INDEX, 77
ALTER routine, 73
ALTER SEQUENCE, 75
ALTER SESSION, 39
ALTER TABLE, 66
ALTER USER ... SET INITIAL SCHEMA, 98
ALTER USER ... SET LOCAL, 98
ALTER USER ... SET PASSWORD, 97
ALTER view, 69
Ant, 306
APPLICABLE_ROLES, 81
ASCII function, 183
ASIN function, 189
ASSERTIONS, 81
ATAN2 function, 189
ATAN function, 189
AUTHORIZATION IDENTIFIER, 95
AUTHORIZATIONS, 82
B
backup, 214
BACKUP DATABASE, 219
BETWEEN predicate, 119
binary literal, 108
BINARY types, 15
BIT_LENGTH function, 183
BITAND function, 189
BITANDNOT function, 190
bit literal, 108
BITNOT function, 190
C
CARDINALITY function, 203
CASCADE or RESTRICT, 55
case expression, 113
CASEWHEN function, 204
CASE WHEN in routines, 160
CAST, 114
CEIL function, 189
CHANGE_AUTHORIZATION, 95
CHAR_LENGTH, 183
CHARACTER_LENGTH, 183
CHARACTER_SETS, 82
character literal, 107
CHARACTER types, 14
character value function, 116
CHAR function, 183
CHECK_CONSTRAINT_ROUTINE_USAGE, 82
CHECK_CONSTRAINTS, 82
CHECK constraint, 63
CHECKPOINT, 220
COALESCE expression, 113
COALESCE function, 204
COLLATE, 126
COLLATIONS, 82
COLUMN_COLUMN_USAGE, 82
COLUMN_DOMAIN_USAGE, 82
COLUMN_PRIVILEGES, 82
COLUMN_UDT_USAGE, 82
column definition, 59
column name list, 132
column reference, 111
COLUMNS, 82
COMMENT, 56
COMMIT, 41
comparison predicate, 118
CONCAT_WS function, 184
CONCAT function, 183
CONSTRAINT, 126
CONSTRAINT_COLUMN_USAGE, 82
CONSTRAINT_TABLE_USAGE, 82
CONSTRAINT (table constraint), 61
CONSTRAINT name and characteristics, 61
CONTAINS predicate, 122
contextually typed value specification, 111
CONVERT function, 204
COS function, 190
D
DATA_TYPE_PRIVILEGES, 82
DATABASE_ISOLATION_LEVEL function, 208
DATABASE_NAME function, 207
DATABASE_TIMEZONE function, 194
DATABASE_VERSION function, 207
DATABASE function, 207
DATE_ADD function, 199
DATE_SUB function, 199
DATEADD function, 199
DATEDIFF function, 199
datetime and interval literal, 109
Datetime Operations, 20
DATETIME types, 19
datetime value expression, 116
datetime value function, 116
DAYNAME function, 195
DAYOFMONTH function, 195
DAYOFWEEK function, 195
E
ELEMENT_TYPES, 83
ENABLED_ROLES, 83
EQUALS predicate, 122
EXISTS predicate, 121
EXP function, 190
expression, 113
external authentication, 218
EXTERNAL NAME, 150
EXTRACT function, 197
G
generated column specification, 59
GET DIAGNOSTICS, 146
Gradle, 300
GRANTED BY, 99
GRANT privilege, 99
GRANT role, 100
GREATEST function, 205
GROUP BY, 136
GROUPING OPERATIONS, 136
H
HEXTORAW function, 184
HOUR function, 196
I
identifier chain, 110
identifier definition, 54
IDENTITY function, 206
IF EXISTS, 55
IF NOT EXISTS, 56
IFNULL function, 205
IF STATEMENT, 161
INFORMATION_SCHEMA_CATALOG_NAME, 83
init script, 277
IN predicate, 119
INSERT function, 184
INSERT INTO, 142
INSTR function, 184
interval absolute value function, 117
interval term, 116
INTERVAL types, 22
IS_AUTOCOMMIT function, 208
IS_READONLY_DATABASE_FILES function, 208
IS_READONLY_DATABASE function, 208
IS_READONLY_SESSION function, 208
IS DISTINCT predicate, 123
ISNULL function, 205
IS NULL predicate, 120
ISOLATION_LEVEL function, 208
J
JOIN USING, 133
JOIN with condition, 133
K
KEY_COLUMN_USAGE, 83
L
LANGUAGE, 150
LAST_DAY function, 198
LATERAL, 131
LCASE function, 185
LEAST function, 205
LEFT function, 185
LENGTH function, 185
LIKE predicate, 120
LN function, 191
LOAD_FILE function, 205
LOB_ID function, 209
LOCALTIME function, 194
LOCALTIMESTAMP function, 194
LOCATE function, 185
LOCK TABLE, 40
LOG10 function, 191
LOG function, 191
LOOP in routines, 159
LOWER function, 185
LPAD function, 185
LTRIM function, 185
M
MATCH predicate, 121
MAX_CARDINALITY function, 203
memory use, 284
MERGE INTO, 144
MINUTE function, 196
MOD function, 191
MONTH function, 196
MONTHNAME function, 196
MONTHS_BETWEEN function, 198
N
name resolution, 135
naming in joined table, 135
naming in select list, 135
NATURAL JOIN, 133
NEXT VALUE FOR, 114
NOW function, 194
NULLIF function, 206
NULL INPUT, 151
numeric literal, 108
NUMERIC types, 12
numeric value expression, 115
numeric value function, 115
NVL2 function, 206
NVL function, 206
O
OCTET_LENGTH function, 185
ON UPDATE clause, 61
OTHER type, 17
OUTER JOIN, 134
OVERLAPS predicate, 122
OVERLAY function, 186
P
PARAMETERS, 83
password complexity, 218
PATH, 126
PI function, 191
POSITION_ARRAY function, 203
POSITION function, 186
POWER function, 191
PRECEDES predicate, 123
PRIMARY KEY constraint, 61
PUBLIC ROLE, 95
Q
QUARTER function, 196
R
RADIANS function, 191
RAND function, 191
RAWTOHEX function, 186
REFERENTIAL_CONSTRAINTS, 83
REGEXP_MATCHES function, 186
REGEXP_REPLACE function, 186
REGEXP_SUBSTRING_ARRAY function, 187
REGEXP_SUBSTRING function, 186
RELEASE SAVEPOINT, 41
RENAME, 56
REPEAT ... UNTIL loop in routines, 159
REPEAT function, 187
REPLACE function, 187
RESIGNAL STATEMENT, 163
RETURN, 162
RETURNS, 148
REVERSE function, 187
REVOKE, 100
REVOKE ROLE, 100
RIGHT function, 187
ROLE_AUTHORIZATION_DESCRIPTORS, 83
ROLE_COLUMN_GRANTS, 83
ROLE_ROUTINE_GRANTS, 83
ROLE_TABLE_GRANTS, 83
ROLE_UDT_GRANTS, 83
ROLE_USAGE_GRANTS, 83
ROLLBACK, 41
ROLLBACK TO SAVEPOINT, 41
ROUND function datetime, 200
ROUND number function, 192
ROUTINE_COLUMN_USAGE, 83
ROUTINE_JAR_USAGE, 83
ROUTINE_PRIVILEGES, 84
ROUTINE_ROUTINE_USAGE, 84
ROUTINE_SEQUENCE_USAGE, 84
ROUTINE_TABLE_USAGE, 84
routine body, 149
routine invocation, 126
ROUTINES, 84
ROW_NUMBER function, 209
ROWNUM function, 209
row value expression, 112
RPAD function, 187
RTRIM function, 187
S
SAVEPOINT, 41
SAVEPOINT LEVEL, 151
schema routine, 72
SCHEMATA, 84
SCRIPT, 220
search condition, 126
SECOND function, 196
SECONDS_SINCE_MIDNIGHT function, 196
security, 6, 265, 268
SELECT, 128
SELECT : SINGLE ROW, 158
SEQUENCE_ARRAY function, 203
SEQUENCES, 84
SESSION_ID function, 208
SESSION_ISOLATION_LEVEL function, 208
SESSION_TIMEZONE function, 193
SESSION_USER function, 207
SESSIONTIMEZONE function, 193
SET AUTOCOMMIT, 39
SET CATALOG, 43
set clause in UPDATE and MERGE statements, 144
SET CONSTRAINTS, 40
SET DATABASE AUTHENTICATION FUNCTION, 237
SET DATABASE COLLATION, 221
SET DATABASE DEFAULT INITIAL SCHEMA, 98
SET DATABASE DEFAULT ISOLATION LEVEL, 224
SET DATABASE DEFAULT RESULT MEMORY ROWS, 221
SET DATABASE DEFAULT TABLE TYPE, 222
SET DATABASE EVENT LOG LEVEL, 222
SET DATABASE GC, 223
SET DATABASE LIVE OBJECT, 230
SET DATABASE PASSWORD CHECK FUNCTION, 236
SET DATABASE SQL AVG SCALE, 228
SET DATABASE SQL CHARACTER LITERAL, 227
SET DATABASE SQL CONCAT NULLS, 227
SYSTEM_UDTS, 87
SYSTEM_USER function, 207
SYSTEM_USERS, 87
SYSTEM_VERSIONCOLUMNS, 87
SYSTIMESTAMP function, 195
SYS USER, 95
T
TABLE_CONSTRAINTS, 84
TABLE_PRIVILEGES, 85
Table Function Derived Table, 132
TABLES, 84
TAN function, 192
TIMESTAMP_WITH_ZONE function, 201
TIMESTAMPADD function, 198
TIMESTAMPDIFF function, 198
TIMESTAMP function, 200
Time Zone, 19
TIMEZONE function, 193
TO_CHAR function, 201
TO_DATE function, 201
TO_NUMBER function, 192
TO_TIMESTAMP function, 201
TODAY function, 195
TRANSACTION_CONTROL function, 209
TRANSACTION_ID function, 209
TRANSACTION_SIZE function, 209
transaction characteristics, 40
TRANSLATE function, 188
TRANSLATIONS, 85
TRIGGER_COLUMN_USAGE, 85
TRIGGER_ROUTINE_USAGE, 85
TRIGGER_SEQUENCE_USAGE, 85
TRIGGER_TABLE_USAGE, 85
TRIGGERED_UPDATE_COLUMNS, 85
TRIGGERED SQL STATEMENT, 180
TRIGGER EXECUTION ORDER, 180
TRIGGERS, 85
TRIM_ARRAY function, 203
TRIM function, 188
TRUNCATE function, 192
TRUNCATE SCHEMA, 142
TRUNCATE TABLE, 141
TRUNC function datetime, 200
TRUNC function numeric, 192
V
value expression, 115
value expression primary, 111
value specification, 112
VIEW_COLUMN_USAGE, 85
VIEW_ROUTINE_USAGE, 85
VIEW_TABLE_USAGE, 85
VIEWS, 85
W
WEEK function, 197
WHILE loop in routines, 159
WIDTH_BUCKET function, 193
Y
YEAR function, 197
U
UCASE function, 188
unicode escape elements, 106
UNION JOIN, 133
UNIQUE constraint, 61
UNIQUE predicate, 121
UNIX_MILLIS function, 196