SQLWorkbench Manual
Table of Contents
1. General Information
   1.1. Program version
   1.2. Feedback and support
   1.3. Credits and thanks
   1.4. Third party components
2. Software license
   2.1. Definitions
   2.2. Grant of Copyright License
   2.3. Restrictions (deviation of the Apache License)
   2.4. Grant of Patent License
   2.5. Redistribution
   2.6. Submission of Contributions
   2.7. Trademarks
   2.8. Disclaimer of Warranty
   2.9. Limitation of Liability
   2.10. Accepting Warranty or Additional Liability
3. Change log
4. Installing and starting SQL Workbench/J
   4.1. Pre-requisites
   4.2. First time installation
   4.3. Upgrade installation
   4.4. Starting the program from the commandline
   4.5. Starting the program using the shell script
   4.6. Starting the program using the Windows launcher
   4.7. Configuration directory
   4.8. Copying an installation
   4.9. Increasing the memory available to the application
5. Command line parameters
   5.1. Specify the directory for configuration settings
   5.2. Specify a base directory for JDBC driver libraries
   5.3. Specify the file containing connection profiles
   5.4. Defining variables
   5.5. Prevent updating the .settings file
   5.6. Connect using a pre-defined connection profile
   5.7. Connect without a profile
6. JDBC Drivers
   6.1. Configuring JDBC drivers
   6.2. Specifying a library directory
   6.3. Popular JDBC drivers
7. Connecting to the database
   7.1. Connection profiles
   7.2. Managing profile groups
   7.3. JDBC related profile settings
   7.4. PostgreSQL connections
   7.5. Extended properties for the JDBC driver
   7.6. SQL Workbench/J specific settings
   7.7. Connect to Oracle with SYSDBA privilege
   7.8. Using the quick filter
8. Using workspaces
1. General Information
1.1. Program version
This document describes build 119 of SQL Workbench/J.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
1.4.3. Editor
The editor is based on the JEdit Syntax package: http://sourceforge.net/projects/jedit-syntax/
The jEdit 2.2.1 syntax highlighting package contains code that is Copyright 1998-1999 Slava Pestov, Artur
Biesiadowski, Clancy Malcolm, Jonathan Revusky, Juha Lindfors and Mike Dillon.
1.4.6. Icons
Some icons are taken from the Tango project: http://tango.freedesktop.org/Tango_Icon_Library
Some icons are taken from the KDE Crystal project: http://www.everaldo.com/crystal/
Some icons are taken from Yusuke Kamiyamane's Fugue Icons: http://p.yusukekamiyamane.com/
Some icons are taken from the glyFX Image Library: http://www.glyfx.com
Some icons are taken from FatCow: http://www.fatcow.com/free-icons
The DbExplorer icon is from the icon set "Mantra" by Umar Irshad: http://umar123.deviantart.com/
2. Software license
Copyright 2002-2016, Thomas Kellerer
This software is licensed under a modified version of the Apache License, Version 2.0 (http://sql-workbench.net/manual/license.html) that restricts the use of the software for certain organizations.
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
2.1. Definitions
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9
of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under
common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to
cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent
(50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source
code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including
but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as
indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the
Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications
or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the
Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner.
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source
code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose
of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been
received by Licensor and subsequently incorporated within the Work.
2.5. Redistribution
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You meet the following conditions:
1. You are not subject to the restrictions
2. You must give any other recipients of the Work or Derivative Works a copy of this License; and
3. You must cause any modified files to carry prominent notices stating that You changed the files; and
4. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark,
and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of
the Derivative Works; and
5. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute
must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE
text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party
notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an
addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed
as modifying the License. You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise
complies with the conditions stated in this License.
2.7. Trademarks
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the
Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the
content of the NOTICE file.
3. Change log
Changes from build 118 to build 119
Enhancements
It's now possible to display the column's data type in the header of the result table
When importing all files from a directory, WbImport now provides pre-defined variables with the filename
It's now possible to configure auto-saving of external files independently from auto-saving the workspace.
A new command WbMessage to display a simple message is available.
It's now possible to search text in all open editors (Tools -> Search all tabs)
For some DBMS, the DbTree and DbExplorer now show the dependencies between objects
For DB2/iSeries table and column comments can now be retrieved from the system catalogs instead of using the
JDBC driver.
For Postgres it is now possible to manually control transactions using BEGIN .. COMMIT when the connection is set
to autocommit
The list of tables in the DbTree is now sorted to work around bugs in JDBC drivers that do not properly sort the list of tables
For DBMS that support it, two new modes have been added to WbImport: -upsert and -insertIgnore using native
"UPSERT" functionality if available
For Firebird the SQL source of external tables is now generated correctly.
A new command WbGenerateImpTable is available to infer the structure of a table from an import file
The command line parameter -vardef has been deprecated and replaced with -variable and -varFile
It's now possible to provide tags for each connection profile. The quickfilter will then use the defined tags for
filtering the displayed profiles.
Connection parameters specified on the command line now have precedence over the properties defined through a
Liquibase defaults file (specified through -lbDefaults)
It's now possible to enable the use of Oracle's DBMS_METADATA for source code retrieval for different types of
objects
The tooltip shown for result tabs can now be configured (Options -> Data display)
For the internal SQL formatter, it's now possible to configure the case in which data type names are written
A new action to run all SQL statements up to the cursor position is available
The error dialog that is displayed when running a script can now be configured to also include the error message or
the statement that failed
Improved display of packages in the DbTree for Oracle and Firebird
Bug fixes
Showing rowcounts in the DbTree did not work for DB2
"Generate Delete Script" for a selection of rows in the result did not display the generated script.
When reloading the whole DbTree while a node was selected, elements (e.g. tables) would be shown twice
CREATE TABLE statements were not formatted correctly if the name consisted of quoted and unquoted parts (e.g.
unquoted schema and quoted table name)
The error dialog when running multiple statements was not displayed on Linux if the option "Include error message" was selected in the "SQL Execution" options
Improved the performance when retrieving table definitions and table source for Oracle
For Postgres, rules defined on a table were shown twice in the generated DDL script
Retrieving additional column information in the DbExplorer failed on SQL Server if a non-standard database
collation was used
The DDL for constraints or comments where identifiers required quoting was not correct
The formatter would not process statements correctly where a sub select using function calls in the WHERE clause
was used in a JOIN condition
When using "Remove Comments" for a connection profile, the error position inside a statement was not shown
correctly for some DBMS
For Oracle, when using "Trim CHAR data" and editing tables where the primary key column was defined as CHAR,
updating the result did not work.
Toggle comment did not toggle correctly when some lines were already commented and some not
The messages shown when using conditional execution with WbInclude did not properly include the variable name
or value
For Oracle the tablespace of materialized views was not shown in the generated SQL (Fix contributed by Franz
Mayer)
It was not possible to work with SAVEPOINTs correctly
Table definitions for tables with VARCHAR columns were not displayed for Oracle
Disabling the check for read-only columns did not work for all JDBC drivers
WbCopy now stops with an error if -targetTable is specified and -sourceTable is used to specify multiple tables
For DB2 the names of PK constraints were not properly qualified with a schema if needed
Sometimes using "Execute current" would not correctly identify the current statement and run the first statement
from the editor
WbImport using -insert,update did not work for multi-column primary keys when not all PK columns were part of the input file
When a variable value contained the prefix and the suffix of the variable pattern, using such a variable would result in
SQL Workbench/J locking up
Reloading a trigger source in the DbExplorer's trigger panel did not work
For Oracle the source of a trigger that had a trailing space in the name was not retrieved
For Oracle the position of errors in regular (non-PL/SQL) DDL statements was not shown
Starting SQL Workbench/J on a headless system using "java -jar" with the -script parameter did not work any longer
For SQL Server 2000, retrieving the source of a view did not work
For SQL Server, generating "dummy DML" for tables with "bit" columns did not work
For MySQL the option "on update" for a default value was not shown in the generated SQL source for a table
The full release history is available at the SQL Workbench/J homepage
Native executables for Windows and Mac OSX are supplied that start SQL Workbench/J by using the default Java
runtime installed on your system. Details on using the Windows launcher can be found here.
When specifying a properties file with -profileStorage the file extension must be .properties
You can specify the name of an already created connection profile on the command line with the -profile=<profile name> parameter. The name has to be passed exactly as it appears in the profile dialog (case sensitive!). If the name contains spaces or dashes, it has to be enclosed in quotation marks. If you have more than one profile with the same name but in different profile groups, you have to specify the desired profile group using the -profilegroup parameter, otherwise the first profile matching the passed name will be selected.
Example (on one line):
java -jar sqlworkbench.jar
-profile='PostgreSQL - Test'
-script='test.sql'
In this case the file WbProfiles.xml must be in the current (working) directory of the application. If this is not the
case, please specify the location of the profile using either the -profileStorage or -configDir parameter.
If you have two profiles with the name "PostgreSQL - Test" you will need to specify the profile group as well (in one line):
java -jar sqlworkbench.jar
-profile='PostgreSQL - Test'
-profilegroup='Local'
-script='test.sql'
You can also store the connection profiles in a properties file and specify this file using the -profileStorage
parameter.
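If the profiles are stored in a properties file, a sketch of combining these parameters could look like this (the file name is illustrative; remember that the extension must be .properties):
java -jar sqlworkbench.jar
  -profileStorage=/opt/etc/profiles.properties
  -profile='PostgreSQL - Test'
  -script='test.sql'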
-url
The JDBC URL for the connection.
-username
The username for the connection.
-password
The password for the user.
-driver
The class name of the JDBC driver.
-driverJar
Specify the full pathname to the .jar file containing the JDBC driver.
-autocommit
Set the autocommit property for this connection. You can also control the autocommit mode
from within your script by using the SET AUTOCOMMIT command.
-rollbackOnDisconnect
If this parameter is set to true, a ROLLBACK will be sent to the DBMS before the connection is closed. This setting is also available in the connection profile.
-checkUncommitted
If this parameter is set to true, SQL Workbench/J will try to detect uncommitted changes in
the current transaction when the main window (or an editor panel) is closed. If the DBMS
does not support this, this argument is ignored. It also has no effect when running in batch or
console mode.
-trimCharData
Turns on right-trimming of values retrieved from CHAR columns. See the description of the
profile properties for details.
-removeComments
This parameter corresponds to the Remove comments setting of the connection profile.
-fetchSize
This parameter corresponds to the Fetch size setting of the connection profile.
-ignoreDropError
This parameter corresponds to the Ignore DROP errors setting of the connection profile.
-altDelimiter
This parameter corresponds to the Alternate delimiter setting of the connection profile.
-emptyStringIsNull
This parameter corresponds to the Empty String is NULL setting of the connection profile.
This will only be needed when editing a result set in GUI mode.
-connectionProperties
This parameter can be used to pass extended connection properties if the driver does not support them e.g. in the JDBC URL. The values are passed as key=value pairs, e.g. -connectionProperties=someProp=42
If either a comma or an equal sign occurs in a parameter's value, it must be quoted. This means, when passing multiple properties, the whole expression needs to be quoted: -connectionProperties='someProp=42,otherProp=24'.
As an alternative, a colon can be used instead of the equals sign, e.g. -connectionProperties=someProp:42,otherProp:24. In this case no quoting is needed (because no delimiter is part of the parameter's value).
If any of the property values contain a comma or an equal sign, then the whole parameter value needs to be quoted again, even when using a colon: -connectionProperties='someProp:"answer=42",otherProp:"2,4"' will define the value answer=42 for the property someProp and the value 2,4 for the property otherProp.
-altDelimiter
The alternate delimiter to be used for this connection, e.g. -altDelimiter=GO to define a SQL Server like GO as the alternate delimiter. Note that when running in batch mode you can also override the default delimiter by specifying the -delimiter parameter.
-separateConnection
If this parameter is set to true, and SQL Workbench/J is run in GUI mode, each SQL tab will use its own connection to the database server. This setting is also available in the connection profile. The default is true.
-connectionName
When specifying a connection without a profile (only using -username, -password and
so on) then the name of the connection can be defined using this parameter. The connection
name will be shown in the title of the main window if SQL Workbench/J is started in GUI
mode. The parameter does not have any visible effect when running in batch or console
mode.
-workspace
The workspace file to be loaded. If the file specification does not include a directory, the
workspace will be loaded from the configuration directory. If this parameter is not specified,
the default workspace (Default.wksp) will be loaded.
-readOnly
Puts the connection into read-only mode.
-connection
Allows specifying a full connection definition as a single parameter (and thus does not require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J
will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This not required if a driver for the
specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://localhost/mydb"
If an appropriate driver is already configured, the driver's classname or the JAR file don't have to be specified.
If an appropriate driver is not configured, the driver's jar file must be specified:
"username=foo,password=bar,url=jdbc:postgresql://localhost/
mydb,driverjar=/etc/drivers/postgresql.jar"
SQL Workbench/J will try to detect the driver's classname automatically (based on the JDBC
URL).
If this parameter is specified, -profile is ignored.
The individual parameters controlling the connection behaviour can be used together with -connection, e.g. -autocommit or -fetchSize.
In addition to -connection the following parameters are also supported to specify connections for WbCopy, WbDataDiff or WbSchemaDiff:
-sourceConnection
-targetConnection
-referenceConnection
If a value for one of the parameters contains a dash or a space, you will need to quote the parameter value.
A disadvantage of this method is that the password is displayed in plain text on the command line. If this is used in a
batch file, the password will be stored in plain text in the batch file. If you don't want to expose the password, you can
use a connection profile and enable password encryption for connection profiles.
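As a complete example (the URL, credentials and file names are illustrative), connecting without a profile and running a script could look like this (entered as one line):
java -jar sqlworkbench.jar
  -connection='username=foo,password=bar,url=jdbc:postgresql://localhost/mydb'
  -autocommit=false
  -script='test.sql'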
6. JDBC Drivers
6.1. Configuring JDBC drivers
Before you can connect to a DBMS you have to configure the JDBC driver to be used. The driver configuration is available in the connection dialog or through File -> Manage Drivers.
The JDBC driver is a file with the extension .jar (some drivers need more than one file). See the end of this section
for a list of download locations. Once you have downloaded the driver you can store the driver's .jar file anywhere you
like.
To register a driver with SQL Workbench/J you need to specify the driver's class name and the location of the driver's jar file(s).
6.3. Popular JDBC drivers

DBMS: PostgreSQL
  Driver class: org.postgresql.Driver
  Library name: postgresql-9.4-1203.jdbc4.jar (exact name depends on the PostgreSQL version)
  http://jdbc.postgresql.org

DBMS: Firebird SQL
  Driver class: org.firebirdsql.jdbc.FBDriver
  Library name: firebirdsql-full.jar
  http://www.firebirdsql.org/

DBMS: Oracle
  Driver class: oracle.jdbc.OracleDriver
  Library name: ojdbc7.jar
  http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html

DBMS: H2 Database Engine
  Driver class: org.h2.Driver
  Library name: h2.jar
  http://www.h2database.com

DBMS: HSQLDB
  Driver class: org.hsqldb.jdbcDriver
  Library name: hsqldb.jar
  http://hsqldb.sourceforge.net

DBMS: IBM DB2
  Driver class: com.ibm.db2.jcc.DB2Driver
  Library name: db2jcc4.jar
  http://www-01.ibm.com/software/data/db2/linux-unix-windows/download.html

DBMS: IBM DB2 for iSeries
  Driver class: com.ibm.as400.access.AS400JDBCDriver
  Library name: jt400.jar
  http://jt400.sourceforge.net/

DBMS: Apache Derby
  Driver class: org.apache.derby.jdbc.EmbeddedDriver
  Library name: derby.jar
  http://db.apache.org/derby/

DBMS: Teradata
  Driver class: com.teradata.jdbc.TeraDriver
  Library name: terajdbc4.jar
  http://www.teradata.com/DownloadCenter/Forum158-1.aspx

DBMS: Sybase (jConnect)
  Library name: jconnect.jar
  http://www.sybase.com/products/allproductsa-z/softwaredeveloperkit/jconnect

DBMS: SQL Server / Sybase (jTDS)
  Library name: jtds.jar
  http://jtds.sourceforge.net

DBMS: MySQL
  Driver class: com.mysql.jdbc.Driver
  http://www.mysql.com/downloads/connector/j/
Profile". The new profile will be created in the currently active group. The other properties will be empty. To create
a copy of the currently selected profile click on the Copy Profile button (
). The copy will be created in the
current group. If you want to place the copy into a different group, you can either choose to Copy & Paste a copy of the
profile into that group, or move the copied profile, once it is created.
To delete an existing profile, select the profile in the list and click on the Delete Profile button.
You can move profiles from one group to another by right-clicking on the profile, then choosing Cut. Then right-click on the target group and select Paste from the popup menu. If you want to put the profile into a new group that is not yet created, you can choose Paste to new folder. You will be prompted to enter the new group name.
If you choose Copy instead of Cut, a copy of the selected profile will be pasted into the target group. This is similar to
copying the currently selected profile.
To rename a group, select the node in the tree, then press the F2 key. You can now edit the group name.
To delete a group, simply remove all profiles from that group. The group will then automatically be removed.
7.3.2. URL
The connection URL for your DBMS. This value is DBMS specific. The pre-configured drivers from SQL Workbench/
J contain a sample URL. If the sample URL (which gets filled into the text field when you select a driver class) contains
words in brackets, then these words (including the brackets) are placeholders for the actual values. You have to replace
them (including the brackets) with the appropriate values for your DBMS connection.
7.3.3. Username
This is the name of the DBMS user account.
You can use placeholders in the username property that get replaced with operating system environment variables or
Java properties. E.g. ${user.name} will be replaced with the current operating system user - this works on any
operating system as the variable is supplied by the Java runtime. ${USERNAME} would be replaced with the current
username on Windows. You can combine this with fixed text, e.g. DEV_${user.name} or TEST_${user.name}.
7.3.4. Password
This is the password for your DBMS user account. You can choose not to store the password in the connection profile.
7.3.5. Autocommit
This check box enables/disables the "auto commit" property for the connection. If autocommit is enabled, then
each SQL statement is automatically committed on the DBMS. If this is disabled, any DML statement (UPDATE,
INSERT, DELETE, ...) has to be committed in order to make the change permanent. Some DBMS require a
commit for DDL statements (CREATE TABLE, ...) as well. Please refer to the documentation of your DBMS.
7.3.7. Timeout
This property defines a timeout in seconds that is applied when establishing the connection to the database server. If no
connection is possible in that time, the attempt will be aborted. If this is empty, the default timeout defined by the JDBC
driver is used.
7.4. PostgreSQL connections

Username
If no username is specified in the connection profile, SQL Workbench/J will first check the environment variable
PGUSER, if that is not defined, the current operating system user will be used.
Password
If no password is specified and the saving of the password is disabled, SQL Workbench/J will first check the
environment variable PGPASSWORD. If that is not defined, SQL Workbench/J will look for a Postgres password file. If
that exists and the host, database, port and user are matched in the password file, the stored password will be used.
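For reference, a Postgres password file (.pgpass on Linux, pgpass.conf on Windows) contains one line per connection in the format host:port:database:user:password; the values shown here are only an example:
localhost:5432:mydb:foo:secret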
7.5. Extended properties for the JDBC driver

Some drivers require those properties to be so called "System properties" (see the manual of your driver for details). If
this is the case for your driver, check the option Copy to system properties before connecting.
Note that for some DBMS (e.g. MS SQL Server) server messages (PRINT 'Hello, world') are also returned as a
warning by the driver. If you disable this property, those messages will also not be displayed.
If you hide warnings when connected to a PostgreSQL server, you will also not see messages that are returned e.g. by
the VACUUM command.
7.6.18. Workspace
For each connection profile, a workspace file can (and should) be assigned. When you create a new connection, you can either leave this field empty or supply a name for a new workspace file.
If the workspace file that you specify does not exist, you will be prompted if you want to create a new file, load a different workspace or ignore the missing file. If you choose to ignore, the association with the workspace file will be cleared and the default workspace will be loaded.
If you choose to leave the workspace file empty, or ignore the missing file, you can later save your workspace to a new
file. When you do that, you will be prompted if you want to assign the new workspace to the current connection profile.
To save your current workspace, choose Workspace -> Save Workspace as to create a new workspace file.
If you specify a filename that does not contain a directory or is a relative filename, it is assumed the workspace is stored
in the configuration directory.
As the workspace stores several settings that are related to the connection (e.g. the selected schema in the
DbExplorer) it is recommended to create one workspace for each connection profile.
8. Using workspaces
8.1. Overview
A workspace is a collection of editor tabs that group scripts or statements together. A workspace stores the name of each
editor tab, the cursor position for each editor, the selection and the statement history.
Each connection profile is assigned a workspace. If no workspace is explicitly chosen for a connection profile, a
workspace with the name Default is used. If not specified otherwise, workspaces are stored in the configuration
directory.
A workspace file has the extension .wksp and is a regular ZIP archive that can be opened with any ZIP tool. It
contains one text file for each editor in the workspace and some property files that store additional settings like the
divider location, the Max. Rows value or the selected catalog and schema of the DbExplorer.
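Because the workspace is a plain ZIP archive, its contents can be listed with any standard tool, e.g.:
unzip -l Default.wksp
The listing shows one text file for each editor tab plus the property files mentioned above.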
It is recommended to use a different workspace for each connection profile.
Workspaces can be used to reduce the number of editor tabs being used. You can create different workspaces for
different topics you work on. One workspace that contains queries to monitor a database. One workspace that contains
everything related to a specific feature you are working on. One workspace to initialize a new environment and so on.
For example, the selected lines
42
43
44
45
will be converted to:
(42, 43, 44, 45)
These two functions will only be available when text is selected which spans more than one line.
This feature requires that the getParameterCount() and getParameterType() methods of the
ParameterMetaData class are implemented by the JDBC driver and return the correct information
about the available parameters.
The following drivers have been found to support (at least partially) this feature:
PostgreSQL, driver version 8.1-build 405
H2 Database Engine, Version 1.0.73
Apache Derby, Version 10.2
Firebird SQL, Jaybird 2.0 driver
HSQLDB, version 1.8.0
Drivers known to not support this feature:
Oracle 11g driver (ojdbc6.jar, ojdbc7.jar)
Microsoft SQL Server 2000/2005 driver (sqljdbc4.jar)
The keyword is not case sensitive, @wbtag will work just as well as @WBTAG or @WbTag. A multiline comment can be used as well as a single line comment.
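A sketch of both comment styles (the bookmark names are only examples):
-- @WbTag cleanup script
/* @WbTag load test data */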
The annotations for naming a result can additionally be included in the bookmark list. This is enabled in the options
panel for the editor.
The names of procedures and functions can also be used as bookmarks if enabled in the bookmark options.
If the option Only for current tab is enabled, then the dialog will open showing only bookmarks for the current tab.
11.1. PostgreSQL
The body of a function in Postgres is a character literal. Because a delimiter inside a character literal does not define the
end of the statement, no special treatment is needed for Postgres.
The following script will therefore work when connected to an Oracle database:
drop table sometable cascade constraints;
create table sometable
(
col1 integer not null
);
create or replace function proc_sample return integer
is
l_result integer;
begin
select max(col1) into l_result from sometable;
return l_result;
end;
/
The solution is to terminate every statement with the alternate delimiter:
create table orders
(
  order_id    integer not null,
  customer_id integer not null,
  product_id  integer not null,
  pieces      integer not null,
  order_date  date not null
)
/
Statement history
When executing a statement the contents of the editor is put into an internal buffer together with the information about
the text selection and the cursor position. Even when you select a part of the current text and execute that statement, the
whole text is stored in the history buffer together with the selection information. When you select and execute different
parts of the text and then move through the history you will see the selection change for each history entry.
The previous statement can be recalled by pressing Alt-Left or choosing SQL -> Previous Statement from the menu. Once the previous statement(s) have been recalled, the next statement can be shown using Alt-Right or choosing SQL -> Next Statement from the menu. This is similar to browsing through the history of a web browser.
You can clear the statement history for the current tab by selecting SQL -> Clear history.
When you clear the content of the editor (e.g. by selecting the whole text and then pressing the Del key)
this will not clear the statement history. When you load the associated workspace the next time, the editor
will automatically display the last statement from the history. You need to manually clear the statement
history, if you want an empty editor the next time you load the workspace.
When you run a SQL statement, the current results will be cleared and replaced by the new results. You can turn this
off by selecting SQL -> Settings -> Append new results. Every result that is retrieved while this option is turned on will be added to the set of result tabs, until you de-select this option. This can also be toggled using the corresponding button on the toolbar. Additional result tabs can be closed using Data -> Close result. You can configure the default behavior for new editor tabs in the options dialog.
You can also run stored procedures that return result sets. These results will be displayed in the same way. For DBMS's
that support multiple result sets from a single stored procedure (e.g. Microsoft SQL Server), one result tab will be
displayed for each result returned.
When displaying the BLOB content as a text, you can edit the text. When saving the data, the entered text will be
converted to raw data using the selected encoding.
The window will also let you open the contents of the BLOB data with a predefined external tool. The tools that are
defined in the options dialog can be selected from a drop down. To open the BLOB content with one of the tools,
select the tool from the drop down list, then click on the button Open with next to the external tools drop down. SQL
Workbench/J will then retrieve the BLOB data from the server, store it in a temporary file on your hard disk, and run
the selected application, passing the temporary file as a parameter.
From within this information dialog, you can also upload a file to be stored in that BLOB column. The file contents will
not be sent to the database server until you actually save the changes to your result set (this is the same for all changes
you make directly in the result set, for details please refer to Editing the data)
When using the upload function in the BLOB info dialog, SQL Workbench/J will use the file content for
any subsequent display of the binary data or the size information in the information dialog. You will need to re-retrieve the data in order to use the blob data from the server.
The workspace file itself is a normal ZIP file, which contains one file with the statement history for each tab. The
individual files can be extracted from the workspace using your favorite UNZIP tool.
12.10.2. Oracle
For Oracle the DBMS_OUTPUT package is supported. Support for this package can be turned on with the
ENABLEOUT command. If this support is not turned on, the messages will not be displayed. This is the same as using
the SET SERVEROUTPUT ON command in SQL*Plus.
If you want to turn on support for DBMS_OUTPUT automatically when connecting to an Oracle database, you can put
the set serveroutput on command into the pre-connect script.
Any message "printed" with DBMS_OUTPUT.put_line() will be displayed in the message part after the SQL
command has finished. Please refer to the Oracle documentation if you want to learn more about the DBMS_OUTPUT
package.
dbms_output.put_line('The answer is 42');
Once the command has finished, the following will be displayed in the Messages tab.
The answer is 42
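Putting this together, a minimal sketch (assuming DBMS_OUTPUT support has been enabled and / is configured as the alternate delimiter):
ENABLEOUT;

begin
  dbms_output.put_line('The answer is 42');
end;
/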
If the update is successful (no database errors) a COMMIT will automatically be sent to the database (if autocommit is
turned off).
If your SELECT was based on more than one table, you will be prompted to specify which table should be updated.
It cannot be detected reliably which column belongs to which of the tables from the select statement. When updating a result from multiple tables, only columns from the chosen update table should be changed, otherwise incorrect SQL statements will be generated.
If no primary (or unique) key can be found for the update table, you will be prompted to select the columns that should
be used to uniquely identify a row in the update table.
If an error is reported during the update, a ROLLBACK will automatically be sent to the database. The COMMIT or
ROLLBACK will only be sent if autocommit is turned off.
Columns containing BLOB data will be displayed with a ... button. By clicking on that button, you can view the blob
data, save it to a file or upload the content of a file to the DBMS. Please refer to BLOB support for details.
When editing, SQL Workbench/J will highlight columns that are defined as NOT NULL in the database. You can turn
this feature off, or change the color that is used in the options dialog.
When editing date, timestamp or time fields, the format specified in the options dialog is used for parsing
the entered value and converting that into the internal representation of a date. The value entered must
match the format defined there.
If you want to input the current date and time you can use now, today, sysdate, current_timestamp or current_date instead. This will then use the current date & time and convert it to the appropriate data type for that column, e.g. now will be converted to the current time for a time column, the current date for a date column and the current date/time for a timestamp column. These keywords also work when importing text files using WbImport or importing a text file into the result set. The exact keywords that are recognized can be configured in the settings file.
If the option Empty String is NULL is disabled for the current connection profile, you can still set a column's value to
null when editing it. To do this, double click the current value, so that you can edit it. In the context menu (right mouse
button) the option "Set to NULL" is available. This will clear the value and set it to NULL. You can assign a shortcut to
this action, but the shortcut will only be active when editing a value inside a column.
If you want to sort by more than one column, hold down the Ctrl key while clicking on the (second) header. The initial
sort order is ascending for that additional column. To switch the sort order hold down the Ctrl key and click on the
column header again. The sort order for all "secondary" sort columns will be indicated with a slightly smaller triangle
than the one for the primary sort column.
To define a different secondary sort column, you first have to remove the current secondary column. This can be done
by holding down the Shift key and clicking on the secondary column again. Note that the data will not be resorted.
Once you have removed the secondary column, you can define a different secondary sort column.
By default SQL Workbench/J will use "ASCII" sorting which is case-sensitive and will not sort special characters
according to your language. You can change the locale that is used for sorting data in the options dialog under the
category "Data Display". Sorting using a locale is a bit slower than "ASCII" sorting.
Using the Alt key you can select individual columns of one or more rows. Together with the Ctrl key you can select
e.g. the first, third and fourth column. You can also select e.g. the second column of the first, second and fifth row.
Whether the quick filter is available depends on the selected rows and columns. It will be enabled when:
You have selected one or more columns in a single row
You have selected one column in multiple rows
If only a single row is selected, the quick filter will use the values of the selected columns combined with AND to define
the filter (e.g. username = 'Bob' AND job = 'Clerk'). Which columns are used depends on the way you select the row
and columns. If the whole row in the result is selected, the quick filter will use the value of the focused column (the one
with the yellow rectangle), otherwise the individually selected columns will be used.
If you select a single column in multiple rows, this will create a filter for that column, but the values will be combined with OR (e.g. name = 'Dent' OR name = 'Prefect'). The quick filter will not be available if you select more than one column in multiple rows.
Once you have applied a quick filter, you can use the regular filter definition dialog to check the definition of the filter
or to further modify it.
In order to write the proprietary Microsoft Excel format, additional libraries are needed. Please refer to Exporting Excel
files for details.
To save the data from the current result set into an external file, choose Data -> Save Data as. You will be prompted for the filename. On the right side of the file dialog you will have the possibility to define the type of the export. The export
parameters on the right side of the dialog are split into two parts. The upper part defines parameters that are available
for all export types. These are the encoding for the file, the format for date and date/time data and the columns that
should be exported.
All format specific options that are available in the lower part are also available when using the WbExport command.
For a detailed discussion of the individual options please refer to that section.
The options SQL UPDATE and SQL DELETE/INSERT are only available when the current result has a single table
that can be updated, and the primary key columns for that table could be retrieved. If the current result does not have
key columns defined, you can select the key columns that should be used when creating the file. If the current result is
retrieved from multiple tables, you have to supply a table name to be used for the SQL statements.
Please keep in mind that exporting the data from the result set requires you to load everything into memory. If you need
to export data sets which are too big to fit into memory, you should use the WbExport command to either create SQL
scripts or to save the data as text or XML files that can be imported into the database using the WbImport command.
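As a sketch of the WbExport alternative (file name and options here are illustrative; see the WbExport section for the full list of parameters):
WbExport -file=/tmp/person.txt -type=text -delimiter='\t' -header=true;
SELECT * FROM person;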
You can also use SQL -> Export query result to export the result of the currently selected SQL statement.
You can also use WbExport together with the -stylesheet parameter and the supplied stylesheets wbexport2dbunit.xslt and wbexport2dbunitflat.xslt to generate DbUnit XML files from data already present in the database (in that case no DbUnit libraries are needed).
As with the Save Data as command, the options SQL UPDATE and SQL DELETE/INSERT are only available
when the current result set is updateable. If no key columns could be retrieved for the current result, you can manually
define the key columns to be used, using Data -> Define key columns.
If you do not want to copy all columns to the clipboard, hold down the Ctrl key while selecting one
of the menu items related to the clipboard. A dialog will then let you select the columns that you want to
copy.
Alternatively you can hold down the Alt key while selecting rows/columns in the result set. This will allow you to
select only the columns and rows that you want to copy. If you then use one of the formats available in the Copy
selected submenu, only the selected cells will be copied. If you choose to copy the data as UPDATE or DELETE/
INSERT statements, the generated SQL statements will not be correct if you did not select the primary key of the
underlying update table.
Option        Description

Header        If this option is checked, the first line of the import file will be ignored.
Delimiter     The delimiter used to separate column values. Enter \t for the tab character.
Date Format   The format for date values in the file.
Decimal char  The character that is used to indicate the decimals in numeric values (typically a dot or a comma).
Quote char    The character used to quote values with special characters. Make sure that each opening quote is followed by a closing quote in your text file.
You can also import text and XML files using the WbImport command. Using the WbImport command is the recommended way to import data, as it is much more flexible and - more importantly - it does not read the data into memory.
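A sketch of such an import (file and table names are illustrative):
WbImport -file=/tmp/person.txt -type=text -table=person -delimiter='\t' -header=true;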
If no column name matches (i.e. no header row is present) but the number of columns (identified by the number of
tab characters in the first row) is identical to the number of columns in the current result.
If SQL Workbench/J cannot identify the format of the clipboard a dialog will be opened where you can specify the
format of the clipboard contents. This is mainly necessary if the delimiter is not the tab character. You can manually
open that dialog, by holding down the Ctrl key when clicking on the menu item.
select *
from address
where person_id = $[id];
and the following SQL query:
The context menu of the result will then contain a new submenu: Macros -> Person Address. The variables $[id], $[firstname] and $[lastname] will contain the values of the currently selected row when the macro is executed.
It is also possible to re-map the column names to different variable names.
SELECT *
FROM person;
In addition to a row number, the special values end or last (without a #) are also recognized. When they are
supplied, the result is automatically scrolled to the last row. This is useful when displaying the contents of log tables.
-- @WbScrollTo end
SELECT *
FROM activity_log;
Note that the macro name needs to be unique to be used as a "SQL Statement". If you have two different macros in two
different macro groups with the same name, it is undefined (i.e. "random") which of them will be executed.
To view the complete list of macros select Macros -> Manage Macros... After selecting a macro, it can be executed by clicking on the Run button. If you check the option "Replace current SQL", then the text in the editor will be replaced with the text from the macro when you click on the run button.
In console mode you can use the command WbListMacros to show the complete list of macros (of course this can also be used in GUI mode as well).
Macros will not be evaluated when running in batch mode.
Apart from the SQL Workbench/J script variables for SQL Statements, additional "parameters" can be used inside a
macro definition. These parameters will be replaced before replacing the script variables.
Parameter
Description
${selection}$
This parameter will be replaced with the currently selected text. The selected text will not
be altered in any way.
${selected_statement}$
This behaves similar to ${selection}$ except that any trailing semicolon will be
removed from the selection. Thus the macro definition can always contain the semicolon
(e.g. when the macro actually defines a script with multiple statements) but when selecting
the text, you do not need to worry whether a semicolon is selected or not (and would
potentially break the script).
${current_statement}$
This key will be replaced with the current statement (without the trailing delimiter). The
current statement is defined by the cursor location and is the statement that would be
executed when using SQL -> Execute current.
${text}$
This key will be replaced with the complete text from the editor (regardless of any
selection).
The SQL statement that is eventually executed will be logged into the message panel when invoking the macro from the
menu. Macros that use the above parameters cannot correctly be executed by entering the macro alias in the SQL editor
(and then executing the "statement").
The parameter keywords are case sensitive, i.e. the text ${SELECTION}$ will not be replaced!
This feature can be used to create SQL scripts that only work with an additional statement, e.g. for Oracle you could define a macro to run an explain plan for the current statement:
explain plan
for
${current_statement}$
;
-- @wbResult Execution plan
select plan_table_output
from table(dbms_xplan.display(format => 'ALL'));
When you run this macro, it will run an EXPLAIN PLAN for the statement in which the cursor is currently located,
and will immediately display the results for the explain. Note that the ${current_statement}$ keyword is
terminated with a semicolon, as the replacement for ${current_statement}$ will never add the semicolon. If
you use ${selection}$ instead, you have to pay attention to not select the semicolon in the editor before running
this macro.
For PostgreSQL you can define a similar macro that will automatically run the EXPLAIN command for a statement:
explain (analyze true, verbose true, buffers true) ${current_statement}$;
Another usage of the parameter replacement could be a SQL Statement that retrieves the rowcount that would be
returned by the current statement:
SELECT count(*) FROM
(
${current_statement}$
)
Parameter     Description

${c}          This parameter marks the location of the cursor after the macro is expanded.
${s}          This parameter also marks the position of the cursor after expansion. Additionally the word on the right hand side of the parameter will automatically be selected.
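For example, a macro defined with the following text (the query is illustrative) places the cursor between FROM and the semicolon when it is expanded in the editor:
SELECT * FROM ${c};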
By default the dialog will not load more than 150 rows from that table. The number of retrieved rows can be configured
through the "Max. Rows" input field.
There are two ways to find the desired target row which can be selected using the radio buttons above the input field.
Applying a filter
This mode is intended for small lookup tables. All rows are loaded into memory and the rows are filtered
immediately when typing. The typed value is searched in all columns of the result set. Clicking on the reload button
will always re-retrieve all rows.
Retrieving data
This mode is intended for large tables where not all rows can be loaded into memory. After entering a search term
and hitting the ENTER key (or clicking on the reload button), a SQL statement to retrieve the rows containing the
search statement will be executed. The returned rows are then displayed.
Once you have selected the desired row, clicking the OK button will put the value(s) of the corresponding primary key
column(s) into the currently edited row.
When invoking code-completion inside a DML (UPDATE, DELETE, INSERT, SELECT) statement, an additional
entry (Select FK value) is available in the popup if the cursor is located inside the value assignment or
condition, e.g. in the following example:
update film_category
set category_id = |
where film_id = 42;
(the | denoting the location of the cursor).
When that menu item is selected, the statement is analyzed and if the column of the current expression is a foreign key
to a different table, the lookup dialog will appear and will let you select the appropriate PK value from the referenced
table.
Foreign key lookup for DML statements is currently only supported for single-column primary keys.
You can also generate a script to delete the selected and all dependent rows through Data Generate delete script. This
will not remove any rows from the current result set, but instead create and display a script that you can run at a later
time.
If you want to generate a SQL script to delete all dependent rows, you can also use the SQL Workbench/J command
WbGenerateDelete.
If you want to use the modes update/insert or insert/update for WbImport, you should also add the
property:
workbench.db.postgresql.import.usesavepoint=true
to enable the usage of savepoints during imports. This setting also affects the WbCopy command.
This is not necessary when using the mode upsert or insertIgnore with Postgres 9.5.
You can also use the parameter -useSavepoint for the WbImport and WbCopy commands to control the use of
savepoints for each import.
Using savepoints can slow down the import substantially.
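A minimal sketch of such an import (the table and file names are examples):

WbImport -file=person.txt
         -type=text
         -table=person
         -mode=update,insert
         -useSavepoint=true;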
By default a regular user does not have the SELECT privilege on V$TRANSACTION; please grant the
privilege before enabling this feature.
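A sketch of the required grant, executed as a privileged user (the user name is an example; note that the grant has to be made on the underlying view V_$TRANSACTION, as V$TRANSACTION is only a public synonym for it):

GRANT SELECT ON sys.v_$transaction TO workbench_user;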
The checking for un-committed changes can be controlled through the connection profile.
Examples
Show statistics without retrieving the actual data:
set autotrace traceonly statistics
Retrieve the data and show statistics:
set autotrace on statistics
Display the statistics and the execution plan but do not retrieve the data:
set autotrace traceonly explain statistics
Display the statistics and the actual execution plan but do not retrieve the data:
set autotrace traceonly realplan statistics
The SHOW command supports the following options: ERRORS, PARAMETERS, SGA, SGAINFO, RECYCLEBIN, USER, AUTOCOMMIT, LOGSOURCE, EDITION, CON_ID, PDBS.
When executing the statement, SQL Workbench/J only retrieves the first row of the result set. Subsequent rows are
ignored. If the select returns more columns than variable names, the additional values are ignored. If more variables are
listed than columns are present in the result set, the additional variables will be undefined.
WbVarDef person_id=42;
WbVarDef -variable=my_select -contentFile=select.txt;
$[my_select];
After running the above script, the variable my_select will have the value of SELECT * FROM person WHERE
id = 42. When "running" $[my_select], the row with id=42 will be retrieved.
If you are using values in your regular statements that actually need the prefix ($[) or suffix (]) characters, please make
sure that you have no variables defined. Otherwise you will get unpredictable results. If you want to use variables but
need to use the default prefix for marking variables in your statements, you can configure a different prefix and suffix
for flagging variables. To change the prefix e.g. to %# and the suffix (i.e. the end of the variable name) to #, add the
following lines to your workbench.settings file:
workbench.sql.parameter.prefix=%#
workbench.sql.parameter.suffix=#
You may leave the suffix empty, but the prefix definition may not be empty.
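With the settings shown above, a statement would then reference variables using the new markers, e.g. (a sketch, assuming a variable person_id has been defined):

SELECT * FROM person WHERE id = %#person_id#;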
WbVarDef zzz='';
WbVarDef vvv='';
WbVarDef aaa='';
select *
from foobar
where col1 = $[?aaa]
and col2 = $[?vvv]
and col3 > $[?zzz]
The dialog to enter the variables will show them in the order zzz, vvv, aaa.
When running SQL Workbench/J in batch mode, with no workbench.settings file, you can set any property by
passing the property as a system property when starting the JVM. To change the loglevel to DEBUG you need to pass
-Dworkbench.log.level=DEBUG when starting the application:
java -Dworkbench.log.level=DEBUG -jar sqlworkbench.jar
18.16. Examples
For readability the examples in this section are displayed on several lines. If you enter them manually
on the command line you will need to put everything in one line, or use the escape character for your
operating system to extend a single command over more than one input line.
Connect to the database without specifying a connection profile:
java -jar sqlworkbench.jar -url=jdbc:postgresql://dbserver/mydb
-driver=org.postgresql.Driver
-username=zaphod
-password=vogsphere
-driverjar=C:/Programme/pgsql/pg73jdbc3.jar
-script='test-script.sql'
This will start SQL Workbench/J, connect to the database server as specified in the connection parameters and execute
the script test-script.sql. As the script's filename contains a dash, it has to be quoted. This is also necessary
when the filename contains spaces.
Executing several scripts with a cleanup and failure script:
java -jar sqlworkbench.jar
-script='c:/scripts/script-1.sql','c:/scripts/script-2.sql',c:/scripts/script-3.sql
-profile=PostgreSQL
-abortOnError=false
-cleanupSuccess=commit.sql
-cleanupError=rollback.sql
Note that you need to quote each file individually (where it's needed) and not the value for the -script parameter.
Run a SQL command in batch mode without using a script file
The following example exports the table "person" without using the -script parameter:
java -jar sqlworkbench.jar
-profile='TestData'
-command='WbExport -file=person.txt -type=text -sourceTable=person'
The following example shows how to run two different SQL statements without using the -script parameter:
java -jar sqlworkbench.jar
-profile='TestData'
-command='delete from person; commit;'
comment   :
---- [Row 4] -------------------------------
id        : 3
firstname : Tricia
lastname  : McMillian
comment   :

(4 Rows)
SQL>
To switch back to the "tabular" display, use: WbDisplay tab.
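A minimal sketch of switching to the single-record display for one query and back again (table and column names are examples):

WbDisplay record;
SELECT id, firstname, lastname, comment FROM person;
WbDisplay tab;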
If no parameter switch is given, everything after the keyword WbStoreProfile will be assumed to be the profile's
name. By default the password is not saved.
Alternatively the command supports the parameters -name and -savePassword. If you want to store the password in
the profile, the version using parameters must be used:
SQL> WbStoreProfile -name="{MyGroup}/DevelopmentServer" -savePassword=true
Profile '{MyGroup}/DevelopmentServer' added
SQL>
If the current connection references a JDBC driver that is not already defined, a new entry for the driver definitions is
created referencing the library that was passed on the commandline.
All profiles are automatically saved after executing WbStoreProfile.
\q
\s
\i
\d
\l
\dn
\dt
\df
\sf
\g
\!
Even though those commands look like the psql commands, they don't work exactly like them. Most importantly
they don't accept the parameters that psql supports. Parameters need to be passed as if the regular SQL Workbench/J
command had been used.
The WbExport command exports either the result of the next SQL Statement (which has to produce a result set) or the
content of the table(s) specified with the -sourceTable parameter. The data is directly written to the output file and
not loaded into memory. The export file(s) can be compressed ("zipped") on the fly. WbImport can import the zipped
(text or XML) files directly without the need to unzip them.
If you want to save the data that is currently displayed in the result area into an external file, please use the Save Data as
feature. You can also use the Database Explorer to export multiple tables.
When using a SELECT based export, you have to run both statements (WbExport and SELECT) as one
script. Either select both statements in the editor and choose SQL Execute selected, or make the two
statements the only statements in the editor and choose SQL Execute all.
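A minimal sketch of such a script (file and table names are examples):

WbExport -type=text
         -file='/tmp/person.txt'
         -header=true;
SELECT * FROM person;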
You can also export the result of a SELECT statement by selecting the statement in the editor, and then choosing SQL
Export query result.
When exporting data into a Text or XML file, the content of BLOB columns is written into separate files. One file
for each column of each row. Text files that are created this way can most probably only be imported using SQL
Workbench/J as the main file will contain the filename of the BLOB data file instead of the actual BLOB data. The
only other application that I know of that can handle this type of import is Oracle's SQL*Loader utility. If you run
the text export together with the parameter -formatFile=oracle a control file will be created that contains the
appropriate definitions to read the BLOB data from the external file.
Oracle's BFILE, PostgreSQL's large object and SQL Server's filestream types are not real
BLOB datatypes (from a JDBC point of view) and are currently not exported by WbExport. Only
columns that are reported as BLOB, BINARY, VARBINARY or LONGVARBINARY in the column "JDBC
type" in the DbExplorer will be exported correctly into a separate file.
WbExport supports the spreadsheet export types xlsm, xls and xlsx.
For a comparison of the different Microsoft Office XML formats please refer to:
http://en.wikipedia.org/wiki/Microsoft_Office_XML_formats
You can download all required POI libraries as a single archive from the SQL Workbench/J home page:
http://www.sql-workbench.net/poi-add-on3.zip. After downloading the archive, unzip it into the directory where
sqlworkbench.jar is located.
To write the file formats XLS and XLSX the entire file needs to be built in memory. When exporting
results with a large number of rows this will require a substantial amount of memory.
Parameter
Description
-type
sqlupdate will generate UPDATE statements that update all non-key columns of
the table. This will only generate valid UPDATE statements if at least one key column
is present. If the table does not have key columns defined, or you want to use different
columns, they can be specified using the -keycolumns switch.
ods will generate a spreadsheet file in the OpenDocument format that can be opened e.g.
with OpenOffice.org.
xlsm will generate a spreadsheet file in the Microsoft Excel 2003 XML format ("XML
Spreadsheet"). When using Microsoft Office 2010, this export format should should be
saved with a file extension of .xml in order to be identified correctly.
xls will generate a spreadsheet file in the proprietary (binary) format for Microsoft Excel
(97-2003). The file poi.jar is required.
xlsx will generate a spreadsheet file in the default format introduced with Microsoft
Office 2007. Additional external libraries are required in order to be able to use this
format. Please read the note at the beginning of this section.
This parameter supports auto-completion.
-file
-createDir
If this parameter is set to true, SQL Workbench/J will create any needed directories
when creating the output file.
-sourceTable
-schema
Define the schema in which the table(s) specified with -sourceTable are located.
This parameter only accepts a single schema name. If you want to export tables from
more than one schema, you need to fully qualify them as shown in the description of the -sourceTable parameter.
This parameter supports auto-completion.
-types
Selects the object types to be exported. By default only TABLEs are exported. If you want
to export the content of VIEWs or SYNONYMs as well, you have to specify all types with
this parameter.
-sourceTable=* -types=VIEW,SYNONYM or -sourceTable=T% -types=TABLE,VIEW,SYNONYM
This parameter supports auto-completion.
-excludeTables
The tables listed in this parameter will not be exported. This can be used when all but
a few tables should be exported from a schema. First all tables specified through
-sourceTable will be evaluated. The tables specified by -excludeTables can include
wildcards in the same way as -sourceTable.
-sourceTable=* -excludeTables=TEMP* will export all tables, but not those
starting with TEMP.
This parameter supports auto-completion.
-sourceTablePrefix
Define a common prefix for all tables listed with -sourceTable. When this parameter
is specified the existence of each table is not tested any longer (as it is normally done).
When this parameter is specified the generated statement for exporting the table is
changed to a SELECT * FROM [prefix]tableName instead of listing all columns
individually.
This can be used when exporting views on tables, when for each table e.g. a view with a
certain prefix exists (e.g. table PERSON has the view V_PERSON and the view does some
filtering of the data).
This parameter can not be used to select tables from a specific schema. The prefix will be
prepended to the table's name.
-outputDir
When using the -sourceTable switch with multiple tables, this parameter is
mandatory and defines the directory where the generated files should be stored.
-continueOnError
When exporting more than one table, this parameter controls whether the whole export
will be terminated if an error occurs during export of one of the tables.
-encoding
Defines the encoding in which the file should be written. Common encodings are
ISO-8859-1, ISO-8859-15, UTF-8 (or UTF8). To get a list of available encodings, execute
WbExport with the parameter -showEncodings. This parameter is ignored for XLS,
XLSX and ODS exports.
This parameter supports auto-completion and if it is invoked for this parameter,
it will show a list of encodings defined through the configuration property
workbench.export.defaultencodings. This is a comma-separated list that can
be changed using WbSetConfig.
-showEncodings
Displays the encodings supported by your Java version and operating system. If this
parameter is present, all other parameters are ignored.
-lineEnding
-header
If this parameter is set to true, the header (i.e. the column names) is placed into the first
line of the output file. The default is to not create a header line. You can define the default
value for this parameter in the file workbench.settings. This parameter is valid for text and
spreadsheet (OpenDocument, Excel) exports.
-compress
Selects whether the output file should be compressed and put into a ZIP archive. An
archive will be created with the name of the specified output file but with the extension
zip. The archive will then contain the specified file (e.g. if you specify data.txt,
an archive data.zip will be created containing exactly one entry with the name
data.txt). If the exported result set contains BLOBs, they will be stored in a separate
archive, named data_lobs.zip.
When exporting multiple tables using the -sourcetable parameter, then SQL
Workbench/J will create one ZIP archive for each table in the specified output directory
with the filename "tablename".zip. For any table containing BLOB data, one
additional ZIP archive is created.
-tableWhere
Defines an additional WHERE clause that is appended to all SELECT queries to retrieve
the rows from the database. No validation check will be done for the syntax or the
columns in the where clause. If the specified condition is not valid for all exported tables,
the export will fail.
-clobAsFile
All CLOB files will be written using the encoding specified with the -encoding switch.
If the -encoding parameter is not specified the default file encoding will be used.
-lobIdCols
When exporting CLOB or BLOB columns as external files, the filename with the LOB
content is generated using the row and column number for the currently exported LOB
column (e.g. data_r15_c4.data). If you prefer to have the value of a unique column
combination as part of the file name, you can specify those columns using the
-lobIdCols parameter. The filename for the LOB will then be generated using the
base name of the export file, the column name of the LOB column and the values of
the specified columns. If you export your data into a file called user_info and specify
-lobIdCols=id and your result contains a column called img, the LOB files will be
named e.g. user_info_img_344.data
-lobsPerDirectory
When exporting CLOB or BLOB columns as external files, the generated files can be
distributed over several directories to avoid an excessive number of files in a single
directory. The parameter lobsPerDirectory defines how many LOB files are written
into a single directory. When the specified number of files have been written, a new
directory is created. The directories are always created as a sub-directory of the target
directory. The name for each directory is the base export filename plus "_lobs" plus a
running number. So if you export the data into a file "the_big_table.txt", the LOB files
will be stored in "the_big_table_lobs_1", "the_big_table_lobs_2", "the_big_table_lobs_3"
and so on.
The directories will be created if needed, but if the directories already exist (e.g. because
of a previous export) their contents will not be deleted!
-extensionColumn
When exporting CLOB or BLOB columns as external files, the extension of the generated
filenames can be defined based on a column of the result set. If the exported table
contains more than one type of BLOBs (e.g. JPEG, GIF, PDF) and your table stores the
information to define the extension based on the contents, this can be used to re-generate
proper filenames.
This parameter only makes sense if exactly one BLOB column of a table is exported.
-filenameColumn
When exporting CLOB or BLOB columns as external files, the complete filename can
be taken from a column of the result set (instead of dynamically creating a new file name
based on the row and column numbers).
This parameter only makes sense if exactly one BLOB column of a table is exported.
-append
-dateFormat
The date format to be used when writing date columns into the output file. This parameter
is ignored for SQL exports.
-timestampFormat
The format to be used when writing datetime (or timestamp) columns into the output file.
This parameter is ignored for SQL exports.
-blobType
Two additional SQL literal formats are available that can be used together with
PostgreSQL: pgDecode and pgEscape. pgDecode will generate a hex representation
using PostgreSQL's decode() function. Using decode is a very compact format.
pgEscape will use PostgreSQL's escaped octets, and generates much bigger statements
(due to the increased escaping overhead).
When using file, base64 or ansi the file can be imported using WbImport.
The parameter value file will cause SQL Workbench/J to write the contents of each
blob column into a separate file. The SQL statement will contain the SQL Workbench/J
specific extension to read the blob data from the file. For details please refer to BLOB
support. If you are planning to run the generated SQL scripts using SQL Workbench/J this
is the recommended format.
Note that SQL scripts generated with -blobType=file can only be used
with SQL Workbench/J.
The parameter value ansi will generate "binary strings" that are compatible with the
ANSI definition for binary data. MySQL and Microsoft SQL Server support this kind of
literal.
The parameter value dbms will create a DBMS specific "binary string". MySQL,
HSQLDB, H2 and PostgreSQL are known to support literals for binary data. For other
DBMS using this option will still create an ANSI literal but this might result in an invalid
SQL statement.
This parameter supports auto-completion.
-replaceExpression -replaceWith
Using these parameters, arbitrary text can be replaced during the export.
-replaceExpression defines the regular expression that is to be replaced.
-replaceWith defines the replacement value. -replaceExpression='(\n|\r\n)'
-replaceWith=' ' will replace all newline characters with a blank.
The search and replace is done on the "raw" data retrieved from the database before the
values are converted to the corresponding output format. In particular this means replacing
is done before any character escaping takes place.
Because the search and replace is done before the data is converted to the output format, it
can be used for all export types (text, xml, Excel, ...).
Only character columns (CHAR, VARCHAR, CLOB, LONGVARCHAR) are taken into
account.
-trimCharData
-showProgress
-delimiter
The given string sequence will be placed between two columns. The default is a tab
character (-delimiter=\t).
-rowNumberColumn
If this parameter is specified with a value, the value defines the name of an additional
column that will contain the row number. The row number will always be exported as the
first column. If the text file is not created with a header (-header=false) a value must
still be provided to enable the creation of the additional column.
-quoteChar
The character (or sequence of characters) to be used to enclose text (character) data if the
delimiter is contained in the data. By default quoting is disabled until a quote character
is defined. To set the double quote as the quote character you have to enclose it in single
quotes: -quotechar='"'
-quoteCharEscaping
-quoteAlways
-decimal
The decimal symbol to be used for numbers. The default is a dot, e.g. the number Pi would
be written as 3.14152. When using -decimal=',' the number Pi would be written as
3,14152.
-maxDigits
Defines a maximum number of decimal digits. If this parameter is not specified decimal
values are exported according to the global formatting settings.
Specifying a value of 0 (zero) results in exporting as many digits as available.
-fixedDigits
Defines a fixed number of decimal digits. If this parameter is not specified decimal values
are exported according to the -maxDigits parameter (or the global default).
If this parameter is specified, all decimal values are exported with the defined number of
digits. If -fixedDigits=4 is used, the value 1.2 will be written as 1.2000.
This parameter is ignored if -maxDigits is also provided.
-escapeText
-nullString
Defines the string value that should be written into the output file for a NULL value. This
value will be enclosed with the specified quote character only if -quoteAlways=true
is specified as well.
-formatFile
Parameter
Description
-table
The given tablename will be put into the <table> tag as an attribute.
-decimal
The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14152)
-useCDATA
-xsltParameter
A list of parameters (key/value pairs) that should be passed to the XSLT processor.
When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the
author attribute can be set using -xsltParameter="authorName=42".
This parameter can be provided multiple times for multiple parameters, e.g. when
using wbreport2pg.xslt: -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
-stylesheet
The name of the XSLT stylesheet that should be used to transform the SQL Workbench/J
specific XML file into a different format. If -stylesheet is specified, -xsltOutput has to be
specified as well.
-xsltOutput
This parameter defines the output file for the XSLT transformation specified through the -stylesheet parameter.
-verboseXML
Parameter
Description
-table
Define the tablename to be used for the UPDATE or INSERT statements. This parameter
is required if the SELECT statement has multiple tables in the FROM list.
-charfunc
INSERT INTO ... VALUES ('First line'||chr(13)||'Second
line' ... )
This setting will affect ASCII values from 0 to 31.
-concat
If the parameter -charfunc is used SQL Workbench/J will concatenate the individual
pieces using the ANSI SQL operator for string concatenation. In case your DBMS does
not support the ANSI standard (e.g. MS ACCESS) you can specify the operator to be
used: -concat=+ defines the plus sign as the concatenation operator.
-sqlDateLiterals
-commitEvery
A numeric value which identifies the number of INSERT or UPDATE statements after
which a COMMIT is put into the generated SQL script.
-commitEvery=100
will create a COMMIT; after every 100th statement.
If this is not specified one COMMIT; will be added at the end of the script. To
suppress the final COMMIT, you can use -commitEvery=none. Passing
-commitEvery=atEnd is equivalent to -commitEvery=0.
-createTable
If this parameter is set to true, the necessary CREATE TABLE command is put into the
output file. This parameter is ignored when creating UPDATE statements.
Note that this will only create the table including its primary key. This will not create
other constraints (such as foreign key or unique constraints) nor will it create indexes on
the target table.
-useSchema
-keyColumns
A comma separated list of column names that occur in the table or result set that should be
used as the key columns for UPDATE or DELETE.
If the table does not have key columns, or the source SELECT statement uses a join over
several tables, or you do not want to use the key columns defined in the database, this
parameter can be used to define the key columns to be used for the UPDATE statements. This
parameter overrides any key columns defined on the base table of the SELECT statement.
-includeAutoIncColumns
Parameter
Description
-title
-infoSheet
-fixedHeader
-autoFilter
If set to true, the "auto-filter" feature for the column headers will be turned on.
-autoColWidth
-targetSheet -targetSheetName
Possible values: any valid index or name for a worksheet in an existing Excel file
This parameter is only available for XLS and XLSX exports
When using this parameter, the data will be written into an existing file and worksheet
without changing the formatting in the spreadsheet. No formatting is applied as it is
assumed that the target worksheet is properly set up.
The parameters -autoFilter, -fixedHeader and -autoColWidth
can still be used. If -targetSheet or -targetSheetName are
specified they default to false unless they are explicitly passed as true.
If the parameters -dateFormat or -timestampFormat are specified
together with a target sheet, the format for date/timestamp columns in the
Excel sheet will be overwritten. To overwrite the format in the Excel sheet,
those parameters must be specified explicitly.
If this parameter is used, the target file specified with the -file parameter must already
exist.
If -targetSheet is supplied, the value for -targetSheetName is ignored
These parameters support auto-completion if the -file parameter is already supplied.
-offset
Parameter
Description
-createFullHTML
-escapeHTML
If this is set to true, values inside the data will be escaped (e.g. the < sign will be written
as &lt;) so that they are rendered properly in an HTML page. If your data contains HTML
tags that should be written to the output as real HTML tags, this parameter must be false.
-title
The title for the HTML page (put into the <title> tag of the generated output)
-preDataHtml
With this parameter you can specify a HTML chunk that will be added before the export
data is written to the output file. This can be used to e.g. create a heading for the data:
-preDataHtml='<h1>List of products</h1>'.
The value will be written to the output file "as is". Any escaping of the HTML must be
provided in the parameter value.
-postDataHtml
With this parameter you can specify a HTML chunk that will be added after the data has
been written to the output file.
Parameter
Description
-nullString
Defines the string value that should be written into the output file for a NULL value.
20.11. Examples
20.11.1. Simple plain text export
WbExport -type=text
-file='c:/data/data.txt'
-delimiter='|'
-decimal=','
-sourcetable=data_table;
Will create a text file with the data from data_table. Each column will be separated with the character |. Each
fractional number will be written with a comma as the decimal separator.
This will create one zip file for each table containing the exported data as a text file. If a table contains BLOB columns,
the blob data will be written into a separate zip file.
The files created by the above statement can be imported into another database using the following command:
WbImport -type=text
-sourceDir=c:/data/export
-extension=zip
-checkDependencies=true;
This examples assumes that the following columns are part of the table blob_table: id_column, some_name and
type_column. The filenames for the blob of each row will be taken from the computed column fname. To be able
to reference the column in the WbExport you must give it an alias.
This approach assumes that only a single blob column is exported. When exporting multiple blob columns from a single
table, it's only possible to create unique filenames using the row and column number (the default behaviour).
Parameter
Description
-type
-mode
Defines how the data should be sent to the database. Possible values are 'insert',
'update', 'insert,update' and 'update,insert'. For details please refer to the
update mode explanation.
For some DBMS, the additional modes: 'upsert' and 'insertIgnore' are supported.
For details please refer to the native upsert and native insertIgnore explanation.
-file
Defines the full name of the input file. Alternatively you can also specify a directory
(using -sourcedir) from which all files are imported.
-table
-sourceDir
Defines a directory which contains import files. All files from that directory will be
imported. If this switch is used with text files and no target table is specified, then it is
assumed that each filename (without the extension) defines the target table. If a target
table is specified using the -table parameter, then all files will be imported into the
same table. The -deleteTarget will be ignored if multiple files are imported into a
single table.
-extension
When using the -sourcedir switch, the extension for the files can be defined. All
files ending with the supplied value will be processed (e.g. -extension=csv). The
extension given is case-sensitive (i.e. TXT is something different than txt).
-ignoreOwner
If the file names imported from the directory specified with -sourceDir contain the
owner (schema) information, this owner (schema) information can be ignored using this
parameter. Otherwise the files might be imported into a wrong schema, or the target tables
will not be found.
-excludeFiles
Using -excludeFiles, files from the source directory (when using -sourceDir) can be
excluded from the import. The value for this parameter is a comma separated list of partial
names. Each file that contains at least one of the values supplied in this parameter is
ignored. -excludeFiles=back,data will exclude any file that contains the value
back or data in it, e.g.: backup, to_back, log_data_store etc.
-checkDependencies
When importing more than one file (using the -sourcedir switch), into tables with
foreign key constraints, this switch can be used to import the files in the correct order
(child tables first). When -checkDependencies=true is passed, SQL Workbench/
J will check the foreign key dependencies for all tables. Note that this will not check
dependencies in the data. This means that e.g. the data for a self-referencing table (parent/
child) will not be ordered so that it can be imported. To import self-referencing tables, the
foreign key constraint should be set to "initially deferred" in order to postpone evaluation
of the constraint until commit time.
-commitEvery
If your DBMS needs frequent commits to improve performance and reduce locking on
the import table you can control the number of rows after which a COMMIT is sent to the
server.
-commitEvery is a numeric value that defines the number of rows after which a COMMIT
is sent to the DBMS. If this parameter is not passed (or a value of zero or lower), then the
import is run as a single transaction that is committed at the end.
When using batch import and your DBMS requires frequent commits to improve import
performance, the -commitBatch option should be used instead.
You can turn off the use of a commit or rollback during import completely by using the
option -transactionControl=false.
Using -commitEvery means that in case of an error, the already imported rows cannot
be rolled back, leaving the data in a potentially invalid state.
-transactionControl
-continueOnError
-emptyFile
-useSavepoint
-keyColumns
Defines the key columns for the target table. This parameter is only necessary if import is
running in UPDATE mode.
It is assumed that the values for the key columns will never be NULL.
This parameter is ignored if files are imported using the -sourcedir parameter.
-ignoreIdentityColumns
-schema
Defines the schema into which the data should be imported. This is necessary for DBMS
that support schemas, and you want to import the data into a different schema than the
current one.
-encoding
Defines the encoding of the input file (and possible CLOB files)
If auto-completion is invoked for this parameter, it will show a list of encodings defined
through the configuration property workbench.export.defaultencodings. This
is a comma-separated list that can be changed using WbSetConfig.
-deleteTarget
-truncateTable
-batchSize
A numeric value that defines the size of the batch queue. Any value greater than 1
will enable batch mode. If the JDBC driver supports this, the INSERT (or UPDATE)
performance can be increased drastically.
This parameter will be ignored if the driver does not support batch updates or if
the mode is not UPDATE or INSERT (i.e. if -mode=update,insert or
-mode=insert,update is used).
-commitBatch
-updateWhere
When using update mode an additional WHERE clause can be specified to limit the rows
that are updated. The value of the -updatewhere parameter will be added to the
generated UPDATE statement. If the value starts with the keyword AND or OR the value
will be added without further changes, otherwise the value will be added as an AND clause
enclosed in brackets. This parameter will be ignored if update mode is not active.
-startRow
A numeric value to define the first row to be imported. Any row before the specified row
will be ignored. The header row is not counted to determine the row number. For a text
file with a header row, the physical line 2 is row 1 (one) for this parameter.
When importing text files, empty lines in the input file are silently ignored and do not add
to the count of rows for this parameter. So if your input file has two lines to be ignored,
then one empty line and then another line to be ignored, startRow must be set to 4.
-endRow
A numeric value to define the last row to be imported. The import will be stopped after
this row has been imported. When you specify -startRow=10 and -endRow=20 11
rows will be imported (i.e. rows 10 to 20). If this is a text file import with a header row,
this would correspond to the physical lines 11 to 21 in the input file as the header row is
not counted.
-columnFilter
This defines a filter on column level that selects only certain rows from
the input file to be sent to the database. The filter has to be defined as
column1="regex",column2="regex". Only Rows matching all of the supplied
regular expressions will be included by the import.
This parameter is ignored when the -sourcedir parameter is used.
-badFile
-maxLength
With the parameter -maxLength you can truncate data for character columns
(VARCHAR, CHAR) during import. This can be used to import data into columns that are
not big enough (e.g. VARCHAR columns) to hold all values from the input file and to
ensure the import can finish without errors.
The parameter defines the maximum length for certain columns using the following
format: -maxLength='firstname=30,lastname=20' Where firstname and
lastname are columns from the target table. The above example will limit the values
for the column firstname to 30 characters and the values for the column lastname to 20
characters. If a non-character column is specified this is ignored. Note that you have to
quote the parameter's value in order to be able to use the "embedded" equals sign.
-booleanToNumber
-numericFalse -numericTrue
To use -1 for false and 1 for true, use the following parameters:
-numericFalse='-1' -numericTrue='1'. Note that '-1' must be quoted due
to the dash. If these parameters are used, -booleanToNumber will be assumed true
implicitly.
These parameters can be combined with -literalsFalse and -literalsTrue.
Please note:
This conversion is only applied for "text" input values. Valid numbers in the input
file will not be converted to the values specified with -numericFalse or
-numericTrue. This means that you cannot change a 0 (zero) in the input file into a
-1 in the target column.
This parameter is ignored for spreadsheet imports
-literalsFalse -literalsTrue
These parameters control the conversion of boolean literals into boolean values.
These two switches define the text values that represent the (boolean) values false and
true in the input file. This conversion is applied when storing the data in a column that is
of type boolean in the database.
The value to these switches is a comma separated list of literals that should
be treated as the specified value, e.g.: -literalsFalse='false,0'
-literalsTrue='true,1' will define the most commonly used values for true/false.
Please note:
The definition of the literals is case sensitive!
You always have to specify both switches, otherwise the definition will be ignored
This parameter is ignored for spreadsheet imports
-constantValues
With this parameter you can supply constant values for one or more columns that will be
used when inserting new rows into the database.
The constant values will only be used when inserting rows (e.g. using -mode=insert)
The format of the values is constantValues="column1=value1,column2=value2".
The parameter can be repeated multiple times, to make quoting
easier: -constantValues="column1=value1"
-constantValues="column2=value2". The values will be converted by the same
rules as the input values from the input file. If the value for a character column is enclosed
in single quotes, these will be removed from the value before sending it to the database.
To include single quotes at the start or end of the input value you need to use two single
quotes, e.g. -constantValues="name=''Quoted'',title='with space'".
For the field name the value 'Quoted' will be sent to the database. For the field title
the value with space will be sent to the database.
To specify a function call to be executed, enclose the function call in ${...}, e.g.
${mysequence.nextval} or ${myfunc()}. The supplied function will be put into
the VALUES part of the INSERT statement without further checking (after removing the
${ and } characters, of course). So make sure that the syntax is valid for your DBMS. If
you do need to store a literal like ${some.value} into the database, you need to quote
it: -constantValues="varname='${some.value}'".
You can also specify a SELECT statement that retrieves information from the database
based on values from the input file. This is useful when the input file contains e.g. values
from a lookup table (but not the primary key from the lookup table).
The syntax to specify a SELECT statement is similar to a function call:
-constantValues="$@{SELECT type_id FROM type_definition WHERE
type_name = $4}" where $4 references the fourth column from the input file. The first
column is $1 (not $0).
The parameters for the SELECT statement do not need to be quoted as internally a
prepared statement is used. However the values in the input file must be convertible by the
JDBC driver.
In addition to the function call or SELECT statements, WbImport provides three variables
that can be used to access the name of the currently imported file. This can be used to
store the source file of the data in the target table.
The following three variables are supported
_wb_import_file_path - contains the full path and file name of the current
import file
_wb_import_file_name - contains only the file name (without the path)
_wb_import_file_dir - contains only the path of the file without the filename
(and without the extension)
Please refer to the examples for more details on the usage.
-insertSQL
-preTableStatement -postTableStatement
This parameter defines a SQL statement that should be executed before the import
process starts inserting data into the target table. The name of the current table (when e.g.
importing a whole directory) can be referenced using ${table.name}.
To define a statement that should be executed after all rows have been inserted and have
been committed, you can use the -postTableStatement parameter.
These parameters can e.g. be used to enable identity insert for MS SQL Server:
-preTableStatement="set identity_insert ${table.name} on"
-postTableStatement="set identity_insert ${table.name} off"
Errors resulting from executing these statements will be ignored. If you want to abort
the import in that case you can specify -ignorePrePostErrors=false and
-continueOnError=false.
These statements are only used if more than one table is processed.
-showProgress
Parameter
Description
-fileColumns
A comma separated list of the table columns in the import file. Each column from the file
should be listed with the appropriate column name from the target table. This parameter
also defines the order in which those columns appear in the file. If the file does not contain
a header line or the header line does not contain the names of the columns in the database
(or has different names), this parameter has to be supplied. If a column from the input file
has no match in the target table, then it should be specified with the name $wb_skip$.
You can also specify the $wb_skip$ flag for columns which are present but that you
want to exclude from the import.
This parameter is ignored when the -sourceDir parameter is used.
-importColumns
Defines the columns that should be imported. If all columns from the input file should
be imported (the default), then this parameter can be omitted. If only certain columns
should be imported then the list of columns can be specified here. The column names
should match the names provided with the -filecolumns switch. The same result can be
achieved by providing the columns that should be excluded as $wb_skip$ columns in
the -filecolumns switch. Which one you choose is mainly a matter of taste. Listing all
columns and excluding some using -importcolumns might be more readable because
the structure of the file is still "visible" in the -filecolumns switch.
This parameter is ignored when the -sourcedir parameter is used.
-delimiter
Define the character which separates columns in one line. Records are always separated
by newlines (either CR/LF or a single LF character) unless -multiLine=true is
specified.
Default value: \t (a tab character)
-columnWidths
In order to import files that do not have a delimiter but have a fixed width for each
column, this parameter defines the width of each column in the input file. The value
for this parameter is a comma separated list, where each element defines the width in
characters for each column. If this parameter is given, the -delimiter parameter
is ignored. The order of the columns in the input file must still be defined using the
-fileColumns parameter.
e.g.: -fileColumns=custid,actcode,regioncd,flag
-columnWidths='custid=10,actcode=5,regioncd=3,flag=1'
Note that the whole list must be enclosed in quotes as the parameter value contains the
equal sign.
If you want to import only certain columns you have to use -fileColumns and -importColumns to select the columns to import. You cannot use $wb_skip$ in the -fileColumns parameter with a fixed column width import.
-dateFormat
-timestampFormat
The format for datetime (or timestamp) columns in the input file.
-illegalDateIsNull
If this is set to true, illegal dates (such as February, 31st) or malformed dates inside the
input file will be treated as a null value.
-quoteChar
The character which was used to quote values where the delimiter is contained. This
parameter has no default value. Thus if this is not specified, no quote checking will take
place. If you use -multiLine=true you have to specify a quote character in order for
this to work properly.
-quoteAlways
-quoteCharEscaping
If duplicate is specified, it is expected that the quote character is duplicated in the
input data. This is similar to the handling of single quotes in SQL literals. The input value
here is a "" quote character will be imported as here is a " quote
character.
-multiLine
-decimal
-header
-decode
-lineFilter
This defines a filter on the level of the whole input row (rather than for each column
individually). Only rows matching this regular expression will be included in the import.
The complete content of the row from the input file will be used to check the regular
expression. When defining the expression, remember that the (column) delimiter will be
part of the input string of the expression.
-emptyStringIsNull
-nullString
Defines the string value that is used in the input file to denote a NULL value. The value of this is
case-sensitive, so -nullString=NULL is different to -nullString=null.
-trimValues
Controls whether leading and trailing whitespace are removed from the input
values before they are stored in the database. When used in combination with
-emptyStringIsNull=true this means that a column value that contains only
whitespace will be stored as NULL in the database.
The default value for this parameter can be controlled in the settings file and it will be
displayed if you run WbImport without any parameters.
Note that input values for non character columns (such as numbers or date columns) are
always trimmed before converting them to their target datatype.
-blobIsFilename
-blobType
-clobIsFilename
-usePgCopy
This parameter has no value, its presence turns the feature on.
If this parameter is specified, then the input file is sent to the PostgreSQL server using
PostgreSQL's JDBC support for COPY.
The specified file(s) must conform to the format expected by PostgreSQL's COPY
command. SQL Workbench/J creates a COPY tablename (column, ...) FROM
stdin WITH (format csv, delimiter '|', header true) statement and
then executes this, passing the actual file contents through the JDBC API.
As COPY does not support "merging" of data, the only allowed import mode is insert.
If a different mode is specified through the -mode parameter, an error will be reported.
The options defined in the WITH (...) part are influenced by the parameters passed
to WbImport. However COPY does not support all options that WbImport does. To
control the format of the input file(s) only the following parameters are relevant when
using -usePgCopy:
-header
-encoding
-delimiter
Especially the formatting options for dates/timestamps and numbers will have no effect.
So the input file must be formatted properly.
All parameters controlling the target table(s), the columns, the source directory and so on
still work, including importing directly from a ZIP archive.
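A minimal sketch of an import through COPY (file and table names are examples):

WbImport -usePgCopy
         -type=text
         -file=person.csv
         -table=person
         -header=true
         -delimiter=',';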
This will import all files with the extension txt located in the directory c:/data/backup into the table person
regardless of the name of the input file. In this mode, the parameter -deleteTarget will be ignored.
WbImport -file=contacts.txt
-type=text
-header=true
-table=contact
-importColumns=contact_id, first_name, last_name
-constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4
For every row from the input file, SQL Workbench/J will run the specified SELECT statement. The value of the first
column of the first row that is returned by the SELECT, will then be used to populate the type_id column. The
SELECT statement will use the value of the fourth column of the row that is currently being inserted as the value for the
WHERE condition.
You must use the -importColumns parameter as well to make sure the type_name column is not processed! As an
alternative you can also use -fileColumns=contact_id, first_name, last_name, $wb_skip$
instead of -importColumns.
The "placeholders" with the column index must not be quoted (e.g. '$1' for a character column will not
work)!
If the column contact_id should be populated by a sequence, the above statement can be extended to include a
function call to retrieve the sequence value (PostgreSQL syntax):
WbImport
-file=contacts.txt
-type=text
-header=true
-table=contact
-importColumns=first_name, last_name
-constantValues="id=${nextval('contact_id_seq'::regclass)}"
-constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4}"
As the ID column is now populated through a constant expression, it may not appear in the -importColumns list.
Again you could alternatively use -fileColumns=$wb_skip$, first_name, last_name, $wb_skip$
to make sure the columns that are populated through the -constantValues parameter are not taken from the input file.
Parameter
Description
-verboseXML
-sourceDir
Specify a directory which contains the XML files. All files in that directory ending with
".xml" (lowercase!) will be processed. The table into which the data is imported is read
from the XML file, as are the columns to be imported. The parameters -keycolumns,
-table and -file are ignored if this parameter is specified. If XML files are used that
are generated with a version prior to build 78, then all files need to use either the long
or short tag format and the -verboseXML=false parameter has to be specified if the
short format was used.
When importing several files at once, the files will be imported into the tables specified
in the XML files. You cannot specify a different table (apart from editing the XML file
before starting the import).
-importColumns
Defines the columns that should be imported. If all columns from the input file should be
imported (the default), then this parameter can be omitted. When specified, the columns
have to match the column names available in the XML file.
-createTarget
If this parameter is set to true the target table will be created, if it doesn't exist. Valid
values are true or false.
-importColumns
-nullString
-emptyStringIsNull
-illegalDateIsNull
The spreadsheet import does not support specifying a date or timestamp format. It is expected that those columns are
formatted in such a way that they can be identified as date or timestamps.
The spreadsheet import also does not support importing BLOB files that are referenced from within the spreadsheet. If
you want to import this kind of data, you need to convert the spreadsheet into a text file.
The spreadsheet import supports one additional parameter that is not available for the text imports:
Parameter
Description
-sheetNumber
Selects the spread sheet inside the file to be imported. If this is not specified the first sheet
is used. The first sheet has the number 1.
All sheets can be imported with a single command when using -sheetNumber=*. In
that case it is assumed that each sheet has the same name as the target table.
If all sheets are imported, the parameters -table, -fileColumns and -importColumns are ignored.
-sheetName
Defines the name of the spreadsheet inside the file to be imported. If this is not specified
the first sheet is used.
-stringDates
You cannot use the update mode, if the tables in question only consist of key columns (or if only key columns are
specified). The values from the source are used to build up the WHERE clause for the UPDATE statement.
If you specify a combined mode (e.g.: update,insert) and one of the tables involved consists only of key columns,
the import will revert to insert mode. In this case database errors during an INSERT are not considered as real errors
and are silently ignored.
For maximum performance, choose the update strategy that will result in a successful first statement more often. As a
rule of thumb:
Use -mode=insert,update, if you expect more rows to be inserted than updated.
Use -mode=update,insert, if you expect more rows to be updated than inserted.
To use insert/update or update/insert with PostgreSQL, make sure you have enabled savepoints for the import (which is
enabled by default).
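For example, if most rows of the input file are expected to be new rows, a sketch (table, file and key column names are examples):

WbImport -file=person.txt
         -type=text
         -table=person
         -keyColumns=id
         -mode=insert,update;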
Parameter
Description
-sourceProfile
The name of the connection profile to use as the source connection. If -sourceprofile is not
specified, the current connection is used as the source.
If the profile name contains spaces or dashes, it has to be quoted.
This parameter supports auto-completion
-sourceGroup
If the name of your source profile is not unique across all profiles, you will need to specify
the group in which the profile is located with this parameter.
If the group name contains spaces or dashes, it has to be quoted.
-sourceConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/
J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the
specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
For a sample connection string please see the documentation for WbConnect.
If this parameter is specified, -sourceProfile is ignored.
-targetProfile
The name of the connection profile to use as the target connection. If -targetProfile
is not specified, the current connection is used as the target.
If the profile name contains spaces or dashes, it has to be quoted.
This parameter supports auto-completion
-targetGroup
If the name of your target profile is not unique across all profiles, you will need to specify
the group in which the profile is located with this parameter.
If the group name contains spaces or dashes, it has to be quoted.
-targetConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/
J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the
specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
If this parameter is specified, -targetProfile is ignored.
-commitEvery
The number of rows after which a commit is sent to the target database. This parameter is
ignored if JDBC batching (-batchSize) is used.
-deleteTarget
-truncateTable
-mode
Defines how the data should be sent to the database. Possible values are INSERT,
UPDATE, 'INSERT,UPDATE' and 'UPDATE,INSERT'. Please refer to the description of
the WbImport command for details.
-syncDelete
To only generate the SQL statements that would synchronize two databases, you can use
the command WbDataDiff.
-keyColumns
Defines the key columns for the target table. This parameter is only necessary if import is
running in UPDATE mode. It is ignored when specifying more than one table with the
-sourceTable argument. In that case each table must have a primary key.
It is assumed that the values for the key columns will never be NULL.
-ignoreIdentityColumns
-batchSize
Enable the use of the JDBC batch update feature, by setting the size of the batch queue.
Any value greater than 1 will enable batch mode. If the JDBC driver supports this, the
INSERT (or UPDATE) performance can be increased.
This parameter will be ignored if the driver does not support batch updates or if
the mode is not UPDATE or INSERT (i.e. if -mode=update,insert or
-mode=insert,update is used).
-commitBatch
-continueOnError
Defines the behaviour if an error occurs in one of the statements. If this is set to true the
copy process will continue even if one statement fails. If set to false the copy process
will be halted on the first error. The default value is false.
With PostgreSQL -continueOnError will only work, if the use of savepoints is
enabled using -useSavepoint=true.
-useSavepoint
-trimCharData
-showProgress
Parameter
Description
-sourceSchema
The name of the schema to be copied. When using this parameter, all tables from the
specified schema are copied to the target. You must specify either -sourceSchema, -sourceTable or -sourceQuery.
-sourceTable
The name of the table(s) to be copied. You can either specify a list of tables: -sourceTable=table1,table2, or select the tables using a wildcard: -sourceTable=* will copy all tables accessible to the user. If more than one table is
specified using this parameter, the -targetTable parameter is ignored.
-excludeTables
The tables listed in this parameter will not be copied. This can be used when all but a few
tables should be copied from one database to another. First all tables specified through
-sourceTable will be evaluated. The tables specified by -excludeTables can include wildcards in the same way as -sourceTable.
-sourceTable=* -excludeTables=TEMP* will copy all tables, but not those
starting with TEMP.
This parameter supports auto-completion.
-checkDependencies
When copying more than one table into tables with foreign key constraints, this switch can be used to copy the tables in the correct order (child tables first). When -checkDependencies=true is passed, SQL Workbench/J will check the foreign key
dependencies for the tables specified with -sourceTable
-targetSchema
The name of the target schema into which the tables should be copied. When this
parameter is not specified, the default schema of the target connection is used.
-sourceWhere
-targetTable
The name of the table into which the data should be written. This parameter is ignored if
more than one table is copied.
-createTarget
If this parameter is set to true the target table will be created, if it doesn't exist. Valid
values are true or false.
Using -createTarget=true is intended as a quick and dirty way of
creating a target table "on the fly" during the copy process. Tables created
this way should not be considered "production-ready". The created tables
will only have the primary key and not-null constraints created. All other
constraints from the source table are ignored.
Because the automatic mapping of table definitions will only work in
the most simple cases this feature is not suited to synchronize the table
definitions between two different DBMS products.
Because of these limitations this feature cannot be considered a replacement for proper schema management. If you need to keep the schema definitions of different DBMS in sync, please consider a tool like Liquibase or Flyway. Do not try to use WbCopy for this.
If you want to migrate a table (or several tables) from one DBMS to another, consider using WbSchemaReport together with an XSLT transformation.
When using this option with different source and target DBMS, the information about the data types to be used in the target database is retrieved from the JDBC driver. In
some cases this information might not be accurate or complete. You can enhance the
information from the driver by configuring your own mappings in workbench.settings.
Please see the section Customizing data type mapping for details.
If the automatic mapping generates an invalid CREATE TABLE statement, you will need
to create the table manually in the target database.
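A minimal sketch that creates the target table on the fly (profile and table names are illustrative):
WbCopy -sourceProfile=ProfileA
-targetProfile=ProfileB
-sourceTable=person
-targetTable=person_copy
-createTarget=true;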
-removeDefaults
-tableType
When -createTarget is set to true, this parameter can be used to control the SQL
statement that is generated to create the target table. This is useful if the target table should
e.g. be a temporary table.
When using the auto-completion for this parameter, all defined "create types" that are
configured in workbench.settings (or are part of the default settings) are displayed together
with the name of the DBMS they are used for. The list is not limited to definitions for the
target database! The specified type must nonetheless match a type defined for the target
connection. If you specify a type that does not exist, the default CREATE TABLE will be
used.
For details on how to configure a CREATE TABLE template for this parameter, please
refer to the chapter Settings related to SQL statement generation
-skipTargetCheck
Normally WbCopy will check if the specified target table exists. However, some JDBC drivers do not always return all table information correctly (e.g. for temporary tables).
If you know that the target table exists, the parameter -skipTargetCheck=true can be used to tell WbCopy that the (column) definition of the source table should be assumed for the target table and that no further test for the target table will be done.
-dropTarget
-columns
Defines the columns to be copied. If this parameter is not specified, then all matching
columns are copied from source to target. Matching is done on name and data type. You
can either specify a list of columns or a column mapping.
When supplying a list of columns, the data from each column in the source table
will be copied into the corresponding column (i.e. one with the same name) in the
target table. If -createTarget=true is specified then this list also defines the
columns of the target table to be created. The names have to be separated by comma: -columns=firstname,lastname,zipcode
A column mapping defines which column from the source table maps to which column of the target table (if the column names do not match). If -createTarget=true then the target table will be created from the specified target names: -columns=firstname/surname,lastname/name,zipcode/zip will copy the column firstname from the source table to a column named surname in the target table, and so on.
This parameter is ignored if more than one table is copied.
When using a SQL query as the data source a mapping cannot be specified.
Please check Copying data based on a SQL query for details.
-adjustSequences
-preTableStatement / -postTableStatement
This parameter defines a SQL statement that should be executed before the import
process starts inserting data into the target table. The name of the current table (when e.g.
importing a whole directory) can be referenced using ${table.name}.
To define a statement that should be executed after all rows have been inserted and have
been committed, you can use the -postTableStatement parameter.
These parameters can e.g. be used to enable identity insert for MS SQL Server:
-preTableStatement="set identity_insert ${table.name} on"
-postTableStatement="set identity_insert ${table.name} off"
Errors resulting from executing these statements will be ignored. If you want to abort
the import in that case you can specify -ignorePrePostErrors=false and -continueOnError=false.
These statements are only used if more than one table is processed.
The following parameters are used when copying data based on a SQL query:
-sourceQuery
-columns
The list of columns from the target table, in the order in which they appear in the source
query.
If the column names in the query match the column names in the target table, this
parameter is not necessary.
If you do specify this parameter, note that this is not a column mapping. It only lists the
columns in the correct order.
22.6. Examples
22.6.1. Copy one table to another where all column names match
WbCopy -sourceProfile=ProfileA
-targetProfile=ProfileB
-sourceTable=the_table
-targetTable=the_other_table;
WbCopy -sourceProfile=ProfileA
-targetProfile=ProfileB
-sourceTable=the_table
-sourceWhere="lastname LIKE 'D%'"
-targetTable=the_other_table;
This example will run the statement SELECT * FROM the_table WHERE lastname like 'D%' and copy
all corresponding columns to the target table the_other_table.
WbCopy -sourceProfile=ProfileA
-targetProfile=ProfileB
-sourceQuery="SELECT firstname as surname, lastname as name, birthday as dob FROM person"
-targetTable=contacts
-deleteTarget=true
WbSchemaDiff supports the following parameters:
-referenceProfile
The name of the connection profile for the reference connection. If this is not specified,
then the current connection is used.
-referenceGroup
If the name of your reference profile is not unique across all profiles, you will need to
specify the group in which the profile is located with this parameter.
-referenceConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
For a sample connection string please see the documentation for WbCopy.
If this parameter is specified, -referenceProfile will be ignored.
-targetProfile
The name of the connection profile for the target connection (the one that needs to be
migrated). If this is not specified, then the current connection is used.
If you use the current connection for reference and target, then you should prefix the
table names with schema/user or use the -referenceSchema and -targetSchema
parameters.
-targetGroup
If the name of your target profile is not unique across all profiles, you will need to specify
the group in which the profile is located with this parameter.
-targetConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
For a sample connection string please see the documentation for WbConnect.
If this parameter is specified, -targetProfile will be ignored.
-file
The filename of the output file. If this is not supplied, the output will be written to the message area.
-referenceTables
A (comma separated) list of reference tables to be checked.
-targetTables
A (comma separated) list of tables in the target connection to be compared to the source
tables. The tables are "matched" by their position in the list. The first table in the -referenceTables parameter is compared to the first table in the -targetTables
parameter, and so on. Using this parameter you can compare tables that do not have the
same name.
If you omit this parameter, then all tables from the target connection with the same names
as those listed in -referenceTables are compared.
If you omit both parameters, then all tables that the user can access are retrieved from the
source connection and compared to the tables with the same name in the target connection.
-referenceSchema
-targetSchema
A schema in the target connection to be compared to the tables from the reference schema.
-excludeTables
A comma separated list of tables that should not be compared. If tables from
several schemas are compared (using -referenceTables=schema_one.*,
schema_two.*) then the listed tables must be qualified with a schema, e.g. -excludeTables=schema_one.foobar, schema_two.fubar
-encoding
The encoding to be used for the XML file. The default is UTF-8
-includePrimaryKeys
Select whether primary key constraint definitions should be compared as well. The default
is true. Valid values are true or false.
-includeForeignKeys
Select whether foreign key constraint definitions should be compared as well. The default
is true. Valid values are true or false.
-includeTableGrants
Select whether table grants should be compared as well. The default is false.
-includeTriggers
Select whether table triggers are compared as well. The default value is true.
-includeConstraints
Select whether table and column (check) constraints should be compared as well. SQL
Workbench/J compares the constraint definition (SQL) as stored in the database.
The default is to compare table constraints (true). Valid values are true or false.
-useConstraintNames
When including check constraints this parameter controls whether constraints should be
matched by name, or only by their expression. If comparing by names the diff output will
contain elements for constraint modification otherwise only drop and add entries will be
available.
The default is to compare by names (true). Valid values are true or false.
-includeViews
-includeProcedures
Select whether stored procedures should also be compared. When comparing procedures
the source as it is stored in the DBMS is compared. This comparison is case-sensitive. A
comparison across different DBMS will also not work!
The default is false. Valid values are true or false.
-includeIndex
Select whether indexes should be compared as well. The default is to not compare index
definitions. Valid values are true or false.
-includeSequences
Select whether sequences should be compared as well. The default is to not compare
sequences. Valid values are true, false.
-useJdbcTypes
Define whether to compare the DBMS specific data types, or the JDBC data type returned
by the driver. When comparing tables from two different DBMS it is recommended
to use -useJdbcTypes=true as this will make the comparison a bit more DBMS-independent. When comparing e.g. Oracle vs. PostgreSQL, a column defined as VARCHAR2(100) in Oracle would be reported as being different to a VARCHAR(100) column in PostgreSQL, which is not really true. As both drivers report the column as java.sql.Types.VARCHAR, they would be considered identical when using -useJdbcTypes=true.
Valid values are true or false.
-additionalTypes
Select additional object types that are not compared by default (using the -includeXXX
parameters) such as Oracle TYPE definitions. Those objects are compared on source code
level (like procedures) rather than on attribute level.
Valid values are object type names as shown in the "Type" drop down in the DbExplorer.
-xsltParameter
A list of parameters (key/value pairs) that should be passed to the XSLT processor.
When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the
author attribute can be set using -xsltParameter="authorName=42".
This parameter can be provided multiple times for multiple parameters, e.g. when
using wbreport2pg.xslt: -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
WbSchemaDiff Examples
Compare all tables between two connections, and write the output to the file migrate_prod.xml and convert the
XML to a series of SQL statements for PostgreSQL
WbSchemaDiff -referenceProfile="Staging"
-targetProfile="Production"
-file=migrate_prod.xml
-styleSheet=wbdiff2pg.xslt
-xsltOutput=migrate_prod.sql
Compare a list of matching tables between two databases and write the output to the file migrate_stage.xml,
ignoring all tables that start with TMP_ and exclude any index definition from the comparison. Convert the output to a
SQL script for Oracle
WbSchemaDiff -referenceProfile="Development"
-targetProfile="Staging"
-file=migrate_stage.xml
-excludeTables=TMP_*
-includeIndex=false
-styleSheet=wbdiff2oracle.xslt
-xsltOutput=migrate_stage.sql
WbDataDiff requires that all involved tables have a primary key defined. If a table does not have a primary key,
WbDataDiff will stop the processing.
To improve performance (a bit), the rows are retrieved in chunks from the target table by dynamically constructing
a WHERE clause for the rows that were retrieved from the reference table. The chunk size can be controlled using
the property workbench.sql.sync.chunksize. The chunk size defaults to 25. This is a conservative setting to
avoid problems with long SQL statements when processing tables that have a PK with multiple columns. If you know
that your primary keys consist only of a single column and the values won't be too long, you can increase the chunk
size, possibly increasing the performance when generating the SQL statements. As most DBMS have a limit on the
length of a single SQL statement, be careful when setting the chunksize too high. The same chunk size is applied when
generating DELETE statements by the WbCopy command, when syncDelete mode is enabled.
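For example, assuming the primary keys involved are single, short columns, the chunk size could be raised using WbSetConfig (described later in this manual):
WbSetConfig workbench.sql.sync.chunksize=100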
WbDataDiff supports the following parameters:
-referenceProfile
The name of the connection profile for the reference connection. If this is not specified,
then the current connection is used.
-referenceGroup
If the name of your reference profile is not unique across all profiles, you will need to
specify the group in which the profile is located with this parameter. If the profile's name
is unique you can omit this parameter
-referenceConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
For a sample connection string please see the documentation for WbCopy.
If this parameter is specified, -referenceProfile will be ignored.
-targetProfile
The name of the connection profile for the target connection (the one that needs to be
migrated). If this is not specified, then the current connection is used.
If you use the current connection for reference and target, then you should prefix the
table names with schema/user or use the -referenceSchema and -targetSchema
parameters.
-targetGroup
If the name of your target profile is not unique across all profiles, you will need to specify
the group in which the profile is located with this parameter.
-targetConnection
Allows to specify a full connection definition as a single parameter (and thus does not
require a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
For a sample connection string please see the documentation for WbConnect.
If this parameter is specified, -targetProfile will be ignored.
-file
The filename of the main script file. The command creates two scripts per table. One
script named update_<tablename>.sql that contains all needed UPDATE or
INSERT statements. The second script is named delete_<tablename>.sql
and will contain all DELETE statements for the target table. The main script merely
calls (using WbInclude) the generated scripts for each table. You can enable writing a single file that includes all statements for all tables by using the parameter -singleFile=true.
-singleFile
If this parameter's value is true, then only one single file containing all statements will
be written.
-referenceTables
A (comma separated) list of tables that are the reference tables, to be checked. You can
specify the table with wildcards, e.g. -referenceTables=P% to compare all tables
that start with the letter P.
-targetTables
A (comma separated) list of tables in the target connection to be compared to the source
tables. The tables are "matched" by their position in the list. The first table in the -referenceTables parameter is compared to the first table in the -targetTables
parameter, and so on. Using this parameter you can compare tables that do not have the
same name.
If you omit this parameter, then all tables from the target connection with the same names
as those listed in -referenceTables are compared.
If you omit both parameters, then all tables that the user can access are retrieved from the
source connection and compared to the tables with the same name in the target connection.
-referenceSchema
-targetSchema
A schema in the target connection to be compared to the tables from the reference schema.
-excludeTables
A comma separated list of tables that should not be compared. If tables from
several schemas are compared (using -referenceTables=schema_one.*,
schema_two.*) then the listed tables must be qualified with a schema, e.g. -excludeTables=schema_one.foobar, schema_two.fubar
-checkDependencies
-includeDelete
-type
-encoding
The encoding to be used for the SQL scripts. The default depends on your operating
system. It will be displayed when you run WbDataDiff without any parameters. You
can overwrite the platform default with the property workbench.encoding in the file
workbench.settings
XML files are always stored in UTF-8
-sqlDateLiterals
-ignoreColumns
With this parameter you can define a list of column names that should not be considered
when comparing data. You can e.g. exclude columns that store the last access time of a
row, or the last update time if that should not be taken into account when checking for
changes.
They will however be part of generated INSERT or UPDATE statements unless -excludeIgnored=true is also specified.
-excludeIgnored
-alternateKey
With this parameter alternate keys can be defined for the tables that are compared.
The parameter can be repeated multiple times to set the keys for multiple tables in the
following format: -alternateKey='table_1=column_1,column_2'
Note that each value has to be enclosed in either single or double quotes to mask the
equals sign embedded in the parameter value.
Once an alternate (primary) key has been defined, the primary key columns defined on the
tables are ignored. By default the real PK columns will however be included in the INSERT statements that are generated. To avoid this, set the parameter -excludeRealPK to true.
-excludeRealPK
-showProgress
WbDataDiff Examples
Compare all tables between two connections, and write the output to the file migrate_staging.sql, but do not
generate DELETE statements.
WbDataDiff -referenceProfile="Production"
-targetProfile="Staging"
-file=migrate_staging.sql
-includeDelete=false
Compare a list of matching tables between two databases and write the output to the file migrate_staging.sql
including DELETE statements.
WbDataDiff -referenceProfile="Production"
-targetProfile="Staging"
-referenceTables=person,address,person_address
-file=migrate_staging.sql
-includeDelete=true
Compare three tables that are named differently in the target database and ignore all columns (regardless of which table they appear in) that are named LAST_ACCESS or LAST_UPDATE
WbDataDiff -referenceProfile="Production"
-targetProfile="Staging"
-referenceTables=person,address,person_address
-targetTables=t_person,t_address,t_person_address
-ignoreColumns=last_access,last_update
-file=migrate_staging.sql
-includeDelete=true
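As a further sketch, compare the person table using an alternate key instead of its real primary key (the column names are illustrative):
WbDataDiff -referenceProfile="Production"
-targetProfile="Staging"
-referenceTables=person
-alternateKey='person=firstname,lastname'
-excludeRealPK=true
-file=migrate_staging.sql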
WbSchemaReport supports the following parameters:
-file
-objects
A (comma separated) list of objects to report. Default is all objects that are "tables" or
views. The list of possible objects corresponds to the objects shown in the "Objects" tab of
the DbExplorer.
If you want to generate the report on tables from different schemas you have to use fully
qualified names in the list (e.g. -tables=shop.orders,accounting.invoices)
You can also specify wildcards in the table name: -table=CONTRACT_% will create an
XML report for all tables that start with CONTRACT_.
This parameter supports auto-completion.
-schemas
A (comma separated) list of schemas to generate the report from. For each user/schema
all tables are included in the report. e.g. -schemas=public,accounting would
generate a report for all tables in the schemas public and accounting.
If you combine -schemas with -objects, the list of objects will be
applied to every schema unless the object names are supplied with a schema: -schemas=accounting,invoices -objects=o*,customers.c* will select
all objects starting with O from the schemas accounting,invoices and all objects
starting with C from the schema customers.
The possible values for this parameter correspond to the "Schema" dropdown in the
DbExplorer. The parameter supports auto-completion and will show a list of available
schemas.
-types
A (comma separated) list of "table like" object types to include. By default TABLEs and
VIEWs are included. To include e.g. SYSTEM VIEWs and TEMPORARY TABLEs, use the
following option: -types='TABLE,VIEW,SYSTEM VIEW,TEMPORARY TABLE'.
If you include type names that contain a space (or e.g. a dash) you have to quote the whole
list, not just the single value.
The default for this parameter is TABLE,VIEW
The values for this parameter correspond to the values shown in the "types" dropdown in
the "Objects" tab of the DbExplorer. The parameter supports auto-completion and will
show a list of the available object types for the current DBMS.
You can include any type shown in the DbExplorer's Objects tab. To e.g. include domain and enum definitions for PostgreSQL use: -types=table,view,sequence,domain,enum
This parameter supports auto-completion.
-excludeObjectNames
A (comma separated) list of tables to exclude from reporting. This is only used if -tables is
also specified. To create a report on all tables, but exclude those that start with 'DEV', use
-tables=* -excludeTableNames=DEV*
-objectTypeNames
-objectTypeNames='table:person,address' -objectTypeNames=sequence:t* -o
The type names are the same ones that can be used with the -types parameter. This can
be combined with schema qualified names:
-objectTypeNames='table:cust.person,accounting.address' -objectTypeName
This can also be used to restrict the retrieval of stored procedures: -objectTypeNames=procedure:P* will include all stored procedures (or functions) that start with a "P". In this case the parameter -includeProcedures is ignored.
If this parameter is used at least once, the parameters -types and -objects, as well as -includeSequences, -includeTables and -includeViews are ignored.
The exclusion pattern defined through -excludeObjectNames is applied to all object
types.
-includeTables
Controls the output of table information for the report. The default is true. Valid values
are true, false.
-includeSequences
Control the output of sequence information for the report. The default is false. Valid
values are true, false.
Adding sequence to the list of types specified with the -types parameter has the same
effect.
-includeTableGrants
If tables are included in the output, the grants for each table can also be included with this
parameter. The default value is false.
-includeProcedures
Control the output of stored procedure information for the report. The default is false.
Valid values are true, false.
-includeTriggers
This parameter controls if table triggers are added to the output. The default value is
true.
-reportTitle
Defines the title for the generated XML file. The specified title is written into the tag
<report-title> and can be used when transforming the XML e.g. into an HTML file.
-writeFullSource
By default the source code for views is written as retrieved from the DBMS into the XML file. This might not be a complete create view statement though. When -writeFullSource=true is specified, SQL Workbench/J will generate a complete create view statement, similar to the code that is shown in the DbExplorer.
The default is false. Valid values are: true, false.
-styleSheet
-xsltOutput
The name of the generated output file when applying the XSLT transformation.
-xsltParameter
A list of parameters (key/value pairs) that should be passed to the XSLT processor.
When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the
author attribute can be set using -xsltParameter="authorName=42".
This parameter can be provided multiple times for multiple parameters, e.g. when
using wbreport2pg.xslt: -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
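A minimal sketch of a report invocation (file and schema names are illustrative):
WbSchemaReport -file=metadata.xml
-schemas=public
-types=TABLE,VIEW
-includeTriggers=true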
WbGrepSource supports the following parameters:
-searchValues
-useRegex
-matchAll
-ignoreCase
-types
Specifies the object types to be searched. The values for this parameter are the same as
in the "Type" drop down of DbExplorer's table list. Additionally the types function,
procedure and trigger are supported.
When specifying a type that contains a space, the type name needs to be enclosed in
quotes, e.g. -types="materialized view". When specifying multiple types, the
whole argument needs to be enclosed in quotes: -types='table, materialized
view'
The default for this parameter is view, procedure, function, trigger,
materialized view. To search in all available object types, use -types=*.
This parameter supports auto-completion.
-objects
A list of object names to be searched. These names may contain SQL wildcards, e.g. -objects=PER%,NO%
This parameter supports auto-completion.
-schemas
Specifies a list of schemas to be searched (for DBMS that support schemas). If this
parameter is not specified the current schema is searched.
This parameter supports auto-completion.
The functionality of the WbGrepSource command is also available through a GUI at Tools Search in object source
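A minimal sketch of a source search (the search term is illustrative):
WbGrepSource -searchValues=last_name
-types=procedure,function
-ignoreCase=true;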
WbGrepData supports the following parameters:
-searchValue
-ignoreCase
-compareType
-tables
A list of table names to be searched. These names may contain SQL wildcards, e.g. -tables=PER%,NO%. If you want to search in different schemas, you need to prefix the
table names, e.g. -tables=schema1.p%,schema2.n%.
This parameter supports auto-completion.
-types
By default WbGrepData will search all tables and views (including materialized views).
If you want to search only one of those types, this can be specified with the -types
parameter. Using -types=table will only search table data and skip views in the
database.
This parameter supports auto-completion.
-excludeTables
A list of table names to be excluded from the search. If e.g. the wildcard for -tables
would select too many tables, you can exclude individual tables with this parameter. The
parameter values may include SQL wildcards.
-tables=p% -excludeTables=product_details,product_images
would process all tables starting with P but not the product_details and product_images tables.
-retrieveCLOB
By default CLOB columns will be retrieved and searched. If this parameter is set to
false, CLOB columns will not be retrieved.
If the search value is not expected in columns of that type, excluding them from the search
will speed up data retrieval (and thus the searching).
Only columns reported as CLOB by the JDBC driver will be excluded. If the driver reports
a column as VARCHAR this parameter will not exclude that column.
-retrieveBLOB
By default BLOB columns will not be retrieved for those rows that match the criteria to
avoid excessive memory usage.
If BLOB columns should be retrieved, this parameter needs to be set to true. Enabling
this will not search inside the binary data. If BLOB columns should be searched (and
treated as character data), use the -treatBlobAs parameter
-treatBlobAs
If this parameter specifies a valid encoding, binary (aka "BLOB") columns will be
retrieved and converted to a character value using the specified encoding. That character
value is then searched.
-treatBlobAs="UTF-8" would convert all BLOB columns in all tables that are
searched to a character value using UTF-8 as the encoding. Therefore using this option
usually only makes sense if a single table is searched.
24.3.1. Examples
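A minimal sketch of a data search (search term and table pattern are illustrative):
WbGrepData -searchValue=Arthur
-tables=person%
-types=table;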
WbVarDef supports the following parameters:
-variable
-value
-file
-contentFile
-values
Define a comma separated list of values that are used in the dialog that is shown when
prompting for variable values.
More details and examples can be found in the chapter: Variable substitution
The long version of the command accepts additional parameters. When using the long version, the filename needs to be
passed as a parameter as well.
Only files up to a certain size will be read into memory. Files exceeding that size will be processed statement by
statement. In this case the automatic detection of the alternate delimiter will not work. If your scripts exceed the
maximum size and you do use the alternate delimiter you will have to use the long version of the command using the -file and -delimiter parameters.
The command supports the following parameters:
-file
-continueOnError
Defines the behavior if an error occurs in one of the statements. If this is set to true then
script execution will continue even if one statement fails. If set to false script execution
will be halted on the first error. The default value is false
-delimiter
-encoding
Specify the encoding of the input file. If no encoding is specified, the default encoding for
the current platform (operating system) is used.
-verbose
Controls the logging level of the executed commands. -verbose=true has the same
effect as adding a WbFeedback on inside the called script. -verbose=false has the
same effect as adding the statement WbFeedback off to the called script.
-displayResult
By default any result set that is returned e.g. by a SELECT statement in the script will not
be displayed. By specifying -displayResult=true those results will be displayed.
-printStatements
If true, every SQL statement will be printed before execution. This is mainly intended for
console usage, but works in the GUI as well.
-showTiming
If true, display the execution time of every SQL statement and the overall execution time
of the script.
-useSavepoint
Control if each statement from the file should be guarded with a savepoint when executing
the script. Setting this to true will make execution of the script more robust, but also
slows down the processing of the SQL statements.
-ignoreDropErrors
-searchFor
-replaceWith
-ignoreCase
-useRegex
Defines search and replace parameters to change the SQL statements before they are sent
to the database. This can e.g. be used to replace the schema name in DDL script that uses
fully qualified table names.
The replacement is done without checking the syntax of the statements. If the search value
is contained in a string literal or a SQL comment, it is also replaced.
24.10.1. Examples
Execute my_script.sql
@my_script.sql;
Execute my_script.sql but abort on the first error
WbInclude -file="my_script.sql" -continueOnError=false;
Execute the script create_tables.sql and change all occurrences of oldschema to new_schema
WbInclude -file=create_tables.sql -searchFor="oldschema." -replaceWith="new_schema."
Execute a large script that uses a non-standard statement delimiter:
WbInclude -file=insert_10million_rows.sql -delimiter='/';
WbRunLB supports the following parameters:
-file
The filename of the Liquibase changeLog (XML) file. The <include> tag is NOT
supported! SQL statements stored in files that are referenced using Liquibase's include
tag will not be processed.
-changeSet
A list of changeSet ids to be run. If this is omitted, then the SQL from all changesets
(containing supported tags) are executed. The value specified can include the
value for the author and the id, -changeSet="Arthur::42" selects the
changeSet where author="Arthur" and id="42". This parameter can be repeated in order to select multiple changesets: -changeSet="Arthur::42" -changeSet="Arthur::43".
You can specify wildcards before or after the double colon: -changeSet="*::42"
will select all changesets with the id=42. -changeSet="Arthur::*" will select all changesets from "Arthur".
If the parameter value does not contain the double colon it is assumed to be an ID only: -changeSet="42" is the same as -changeSet="*::42"
If this parameter is omitted, all changesets are executed.
This parameter supports auto-completion if the -file argument is specified.
-continueOnError
Defines the behaviour if an error occurs in one of the statements. If this is set to true
then script execution will continue even if one statement fails. If set to false script
execution will be halted on the first error. The default value is false
-encoding
Specify the encoding of the input file. If no encoding is specified, UTF-8 is used.
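A minimal sketch that runs a single changeset (file name and changeset id are illustrative):
WbRunLB -file=changelog.xml
-changeSet="Arthur::42"
-continueOnError=false;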
INSERT INTO person (id, firstname, lastname) VALUES (1, 'Arthur', 'Dent');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Ford', 'Prefect');
INSERT INTO person (id, firstname, lastname) VALUES (3, 'Zaphod', 'Beeblebrox');
INSERT INTO person (id, firstname, lastname) VALUES (4, 'Tricia', 'McMillian');
WHERE id=42;
Even if you specify more than one column in the column list, SQL Workbench/J will only use the first column. If the SELECT returns more than one row, then one output file will be created for each row. Additional files will be created with a counter indicating the row number from the result. In the above example, image.bmp, image_1.bmp, image_2.bmp and so on, would be created.
WbSelectBlob is intended for an ad-hoc retrieval of a single LOB column. If you need to extract the contents of
several LOB rows and columns it is recommended to use the WbExport command.
You can also manipulate (save, view, upload) the contents of BLOB columns in a result set. Please refer to BLOB
support for details.
24.17.1. FEEDBACK
SET feedback ON/OFF is equivalent to the WbFeedback command, but mimics the syntax of Oracle's SQL*Plus
utility.
24.17.2. AUTOCOMMIT
With the command SET autocommit ON/OFF autocommit can be turned on or off for the current connection.
This is equivalent to setting the autocommit property in the connection profile or toggling the state of the SQL
Autocommit menu item.
24.17.3. MAXROWS
Limits the number of rows returned by the next statement. The behaviour of this command is a bit different between the
console mode and the GUI mode. In console mode, the maxrows setting stays in effect until you explicitly change it back using SET maxrows again.
In GUI mode, the maxrows setting is only in effect for the script currently being executed and will only temporarily
overwrite any value entered in the "Max. Rows" field.
24.18.1. SERVEROUTPUT
SET serveroutput on is equivalent to the ENABLEOUT command and SET serveroutput off is
equivalent to the DISABLEOUT command.
24.18.2. AUTOTRACE
This enables or disables the "autotrace" feature similar to the one in SQL*Plus. The syntax is equivalent to the
SQL*Plus command and supports the following options:
ON
Turns on autotrace mode. After running a statement, the statement result (if it is a query), the
statistics and the execution plan for that statement are displayed as separate result tabs.
OFF
TRACEONLY
REALPLAN
This is an extension to the SQL*Plus EXPLAIN mode. Using EXPLAIN, SQL Workbench/J will
simply run an EXPLAIN PLAN for the statement (and the statement will not be executed) - this is
the same behavior as SQL*Plus' EXPLAIN mode.
Using REALPLAN, SQL Workbench/J will run the statement and then retrieve the execution
plan that was generated while running the statement. This might yield a different result than
regular EXPLAIN mode. The actual plan also contains more details about estimated and
actual row counts. This plan is retrieved using dbms_xplan.display_cursor(). If
REALPLAN is used, the actual SQL statement sent to the server will be changed to include the
GATHER_PLAN_STATISTICS hint.
The information shown in autotrace mode can be controlled with two options after the ON or TRACEONLY parameter.
STATISTICS will fetch the statistics about the execution and EXPLAIN will display the execution plan for the statement. If no additional parameter is specified, EXPLAIN STATISTICS is used.
If statistics are requested, query results will be fetched from the database server but they will not be displayed.
Unlike SQL*Plus, the keywords (AUTOTRACE, STATISTICS, EXPLAIN) cannot be abbreviated!
For more information about the prerequisites for the autotrace mode, see the description of DBMS specific features.
This changes the mode for all editor tabs, not only for the one where you run the command.
WbMode supports the following values:
reset
normal
Makes all changes possible (turns off read only and confirmations)
confirm
readonly
The following example will turn on read only mode for the current connection, so that any subsequent statement that
updates the database will be ignored.
WbMode readonly;
To change the current connection back to the settings from the profile use:
WbMode reset;
WbGenerateDrop supports the following parameters:
-tables
-includeCreate
-outputFile
Defines the file into which all statements are written. If multiple tables are selected using the -tables parameter, all statements will be written into this file.
-outputDir
Specifies an output directory into which one script per selected table will be written. The script files
are named drop_XXX.sql, where XXX is the name of the respective table. If this parameter is
used, -outputFile is ignored.
If neither -outputFile nor -outputDir is specified, the output is written to the message panel.
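A minimal sketch that writes one drop script per table (table and directory names are illustrative):
WbGenerateDrop -tables=person,address
-outputDir=drop_scripts;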
WbGenerateDelete supports the following parameters:
-table
Specifies the root table of the hierarchy from which to delete the rows.
-columnValue
Defines the expression for each PK column to select the rows to be deleted. The value for this
parameter is the column name followed by a colon, followed by the value for this column or an
expression.
e.g.: -columnValue="person_id:42" will select rows where person_id has the value 42.
You can also specify expressions instead: -columnValue="id:<0" or -columnValue="id:in (1,2,3)".
For a multi-column primary key, specify the parameter multiple times: -columnValue="person_id:100" -columnValue="address_id:200".
-outputFile
The file into which the generated statements should be written. If this is omitted, the statements are
displayed in the message area.
-appendFile
-formatSql
To generate a script that deletes the person with ID=42 and all rows referencing that person, use the following
statement:
WbGenerateDelete -table=person -columnValue="id:42";
To generate a script that deletes any person with an ID greater than 10 and all rows referencing those rows, use the
following statement:
WbGenerateDelete -table=person -columnValue="id:>10";
WbGenerateScript supports the following parameters:
-objects
A comma separated list of tables (views or other objects), e.g. -objects=customer,invoice,v_turnover,seq_cust_id. The parameter supports specifying tables using wildcards, e.g. -objects=cust%,inv%.
-exclude
A comma separated list of object names to be excluded from the generated script. The parameter
supports wildcards, e.g. -exclude=foo*,bar*.
-schemas
A comma separated list of schemas. If this is not specified then the current (default) schema is used.
If this parameter is provided together with the -objects parameter, then the objects for each
schema are retrieved. e.g. -objects=person -schemas=prod,test will generate the SQL for the person table in both schemas.
The parameter supports auto-completion and will show a list of the available schemas.
-types
A comma separated list of object types e.g. -types=VIEW,TABLE. This parameter is ignored if -objects is specified. The possible values for this parameter are the types listed in the drop down of
the "Objects" tab in the DbExplorer.
The parameter supports auto-completion and will show a list of the available object types for the
current DBMS.
-file
Defines the output file into which all statements are written. If this is not specified, the generated SQL statements are shown in the message area.
-includeTriggers
If this parameter is present (or set to true), then all triggers (for the selected schemas) will be retrieved as well. The default is false.
-includeProcedures
If this parameter is present (or set to true), then all procedures and functions (for the selected schemas) will be retrieved as well. The default is false.
-includeDrop
If this parameter is present (or set to true) a DROP statement will be generated for each object in the
list.
-includeTableGrants
This parameter controls the generation of table grants. The default value is true.
-useSeparator
If this parameter is present (or set to true), comments will be added that identify the start and end of
each object. The default is false.
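A minimal sketch (schema and file names are illustrative):
WbGenerateScript -schemas=public
-types=TABLE,VIEW
-includeDrop=true
-file=schema_script.sql;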
By default this command will only check the first 1000 lines of the input file, assuming that the values are distributed
evenly. If the data types for the columns do not reflect the real data, the sample size needs to be increased.
The generated table definition is intended as a quick way to import the data; the column definitions are therefore likely not to be completely correct or optimal.
The command supports the following parameters.
-file
-lines
-type
-useVarchar
-delimiter
-quoteChar
-encoding
-header
-dateFormat
-timestampFormat
-decimal
-outputFile
-sheetNumber
-table
-runScript
By default, the CREATE TABLE statement is only
generated and displayed. If -runScript=true is
specified, the generated SQL script will be executed
immediately.
By default, this will display a dialog to confirm the execution of the CREATE TABLE statement. This confirmation can be suppressed using the parameter -prompt=false. In this case the generated statement will be run directly.
WbList supports the following parameters:
-objects
-types
Limit the result to specific object types, e.g. WbList -objects=V% -types=VIEW will return
all views starting with the letter "V".
WbListIndexes supports the following parameters:
-schema
-catalog
-tableName
Show only indexes for the tables specified by the parameter. The parameter value can contain a wildcard, e.g. -tableName=VP% lists the indexes for all tables starting with VP.
-indexName
Show only indexes with the specified name. The parameter value can contain a wildcard, e.g. -indexName=PK% lists only indexes that start with PK.
WbTableSource person
WbRowCount supports the following parameters:
-schema
Count the rows for tables from the given schemas, e.g. -schema=public,local
The parameter supports auto-completion and will show a list of available schemas.
-catalog
-objects
Show only the row counts for the tables (or views) specified by the parameter. The parameter value
can contain wildcards, e.g. -objects=PR%,ORD% will count the rows for all tables with names
that either start with PR or ORD
The parameter supports auto-completion and will show a list of available tables.
-types
Define the types of objects which should be selected. By default only tables are considered. If you
also want to count the rows for views, use -types=table,view
The parameter supports auto-completion and will show a list of available object types.
-orderBy
Defines how the resulting table should be sorted. By default it will be sorted alphabetically by table
name. The -orderBy parameter specifies the columns to sort the result by. By default, sorting
is done ascending; if you want a descending sort, append :desc to the column name, e.g. -orderBy="rowcount:desc".
To sort by multiple columns, separate the column names with a comma: -orderBy="rowcount:desc,name:desc" or -orderBy="rowcount,name:desc"
The name database can be used instead of catalog.
If none of the above parameters are used, WbRowCount assumes that a list of table names was specified. WbRowCount person,address,orders is equivalent to WbRowCount -objects=person,address,orders. When called without any parameters the row counts for all tables accessible to the current user will be displayed.
Unlike the Count rows item in the DbExplorer, WbRowCount displays the result for all tables once it is finished. It
does not incrementally update the output.
WbConnect supports the following parameters:
-profile
-profileGroup
Specifies the group in which the profile is stored. This is only required if the profile name is
not unique
-connection
Allows to specify a full connection definition as a single parameter (and thus does not require
a pre-defined connection profile).
The connection is specified with a comma separated list of key value pairs:
username - the username for the connection
password - the password for the connection
url - the JDBC URL
driver - the class name for the JDBC driver. If this is not specified, SQL Workbench/J will try to determine the driver from the JDBC URL
driverJar - the full path to the JDBC driver. This is not required if a driver for the specified class is already configured
e.g.: "username=foo,password=bar,url=jdbc:postgresql://
localhost/mydb"
If an appropriate driver is already configured the driver's classname or the JAR file don't
have to be specified.
If an appropriate driver is not configured, the driver's jar file must be specified:
"username=foo,password=bar,url=jdbc:postgresql://localhost/
mydb,driverjar=/etc/drivers/postgresql.jar"
SQL Workbench/J will try to detect the driver's classname automatically (based on the JDBC
URL).
If this parameter is specified, -profile is ignored.
The individual parameters controlling the connection behavior can be used together with -connection, e.g. -autocommit or -fetchSize.
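Putting this together, a sketch of a profile-less connection (credentials and URL are illustrative):
WbConnect -connection="username=foo,password=bar,url=jdbc:postgresql://localhost/mydb"
-autocommit=true;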
-url
-username
-password
-driver
-driverJar
Specify the full pathname to the .jar file containing the JDBC driver
-autocommit
Set the autocommit property for this connection. You can also control the autocommit mode
from within your script by using the SET AUTOCOMMIT command.
-rollbackOnDisconnect
If this parameter is set to true, a ROLLBACK will be sent to the DBMS before the connection is closed. This setting is also available in the connection profile.
-checkUncommitted
If this parameter is set to true, SQL Workbench/J will try to detect uncommitted changes in
the current transaction when the main window (or an editor panel) is closed. If the DBMS
does not support this, this argument is ignored. It also has no effect when running in batch or
console mode.
-trimCharData
Turns on right-trimming of values retrieved from CHAR columns. See the description of the
profile properties for details.
-removeComments
This parameter corresponds to the Remove comments setting of the connection profile.
-fetchSize
This parameter corresponds to the Fetch size setting of the connection profile.
-ignoreDropError
This parameter corresponds to the Ignore DROP errors setting of the connection profile.
-altDelimiter
This parameter corresponds to the Alternate delimiter setting of the connection profile.
If none of the parameters is supplied when running the command, it is assumed that any value after WbConnect is the
name of a connection profile, e.g.:
WbConnect production
will connect using the profile name production, and is equivalent to
WbConnect -profile=production
WbXslt supports the following parameters:
-inputfile
-xsltoutput
-stylesheet
-xsltParameter
A list of parameters (key/value pairs) that should be passed to the XSLT processor.
When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the
author attribute can be set using -xsltParameter="authorName=42".
This parameter can be provided multiple times for multiple parameters, e.g. when
using wbreport2pg.xslt: -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
WbSysExec supports the following parameters:
-program
-argument
One commandline argument for the program. This parameter can be repeated multiple
times.
-dir
WbDefineMacro supports the following parameters:
-name
-group
The name of the macro group in which the new macro should be stored
-text
-file
A file from which to read the macro text. If this parameter is supplied, -text is ignored
-encoding
The encoding of the input file specified with the -file parameter.
-expand
If true, the new macro is defined as a macro that is expanded while typing.
The following parameters control conditional execution of a command:
-ifDefined
The command is only executed if the variable with the specified name is defined, e.g. -ifDefined=some_var
-ifNotDefined
The command is only executed if the variable with the specified name is not defined, e.g. -ifNotDefined=some_var
-ifEquals
The command is only executed if the specified variable has the specified value, e.g. -ifEquals='some_var=42'
-ifNotEquals
The command is only executed if the specified variable does not have the specified value, e.g. -ifNotEquals='some_var=42'
-ifEmpty
The command is only executed if the specified variable is defined but has an empty value, e.g. -ifEmpty=some_var. This is essentially a shorthand for -ifEquals="some_var=''"
-ifNotEmpty
The command is only executed if the specified variable is defined and has a non-empty value, e.g. -ifNotEmpty=some_var. This is essentially a shorthand for -ifNotEquals="some_var=''"
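A sketch combining these parameters with WbInclude (variable and file names are illustrative):
WbVarDef do_cleanup=true;
WbInclude -file=cleanup.sql -ifDefined=do_cleanup;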
Not all configuration parameters are available through the Options Dialog and have to be changed manually in the file
workbench.settings. Editing the file requires closing the application.
When using WbSetConfig, configuration properties can be changed permanently without restarting SQL Workbench/J.
Any value that is changed through this command will be saved automatically in workbench.settings when the
application is closed.
If you want to e.g. disable the use of Savepoints in the SQL statements entered interactively, the following command
will turn this off for PostgreSQL:
WbSetConfig workbench.db.postgresql.sql.usesavepoint=false
For a list of configuration properties that can be changed, please refer to Advanced configuration options.
If you supply only the property key, the current value will be displayed. If no argument is supplied for
WbSetConfig all properties are displayed. You can also supply a partial property key. WbSetConfig
workbench.db.postgresql will list all PostgreSQL related properties. You can directly edit the properties in the
result set.
The value [dbid] inside the property name will get replaced with the current DBID.
The following command changes the property named workbench.db.postgresql.ddlneedscommit if the
current connection is against a PostgreSQL database:
WbSetConfig workbench.db.[dbid].ddlneedscommit=true
25. DataPumper
25.1. Overview
The export and import features are useful if you cannot connect to the source and the target database at once. If your
source and target are both reachable at the same time, it is more efficient to use the DataPumper to copy data between
two systems. With the DataPumper no intermediate files are necessary. Especially with large tables this can be an
advantage.
To open the DataPumper, select Tools DataPumper
The DataPumper lets you copy data from a single table (or SELECT query) to a table in the target database. The
mapping between source columns and target columns can be specified as well.
Everything that can be done with the DataPumper, can also be accomplished with the WbCopy command. The
DataPumper can also generate a script which executes the WbCopy command with the correct parameters according to
the current settings in the window. This can be used to create scripts which copy several tables.
The DataPumper can also be started as a stand-alone application - without the main window - by
specifying -datapumper=true in the command line when starting SQL Workbench/J.
When opening the DataPumper from the main window, the main window's current connection will be used
as the initial source connection. You can disable the automatic connection upon startup with the property
workbench.datapumper.autoconnect in the workbench.settings file.
After both tables are selected, the middle part of the window will display the available columns from the source and
target table. This grid display represents the column mapping between source and target table.
For maximum performance, choose the update strategy that will result in a successful first statement more often. As a
rule of thumb:
Use -mode=insert,update, if you expect more rows to be inserted than updated.
Use -mode=update,insert, if you expect more rows to be updated than inserted.
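For example, if most target rows already exist, a sketch of a WbCopy call favoring updates (profile, table and key column names are illustrative):
WbCopy -sourceProfile=ProfileA
-targetProfile=ProfileB
-sourceTable=person
-targetTable=person
-mode=update,insert
-keyColumns=person_id;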
The DataPumper tries to map the column types from the source columns to data types available on the target database.
For this mapping it relies on information returned from the JDBC driver. The functions used for this may not be
implemented fully in the driver. If you experience problems during the creation of the target tables, please create the
tables manually before copying the data. It will work best if the source and target system are the same (e.g. PostgreSQL
to PostgreSQL, Oracle to Oracle, etc).
Most JDBC drivers map a single JDBC data type to more than one native datatype. MySQL maps its VARCHAR, ENUM
and SET type to java.sql.Types.VARCHAR. The DataPumper will take the first mapping which is returned by the
driver and will ignore all subsequent ones. Any datatype that is returned twice by the driver is logged as a warning in
the log file. The actual mappings used are logged with type INFO.
To customize the mapping of generic JDBC data types to DBMS specific data types, please refer to Customizing data
type mapping.
Export data
This will execute a WbExport command for the currently selected table(s). Choosing this option is
equivalent to doing a SELECT * FROM table; and then choosing SQL → Export query result from
the SQL editor in the main window. See the description of the WbExport command for details.
When using this function, the customization for data types is not applied to the generated SELECT
statement.
Count rows
This will count the rows for each selected table object. The row counts will be shown in a new
window. This is the same functionality as the WbRowCount command.
Drop
Drops the selected objects. If at least one object is a table, and the currently used DBMS supports
cascaded dropping of constraints, you can enable cascaded deletion of constraints. If this option is
enabled, SQL Workbench/J will generate, e.g. for Oracle, a DROP TABLE mytable CASCADE
CONSTRAINTS. This is necessary if you want to drop several tables at the same time that have
foreign key constraints defined.
If the current DBMS does not support a cascading drop, you can order the tables so that foreign
keys are detected and the tables are dropped in the right order by clicking on the Check foreign keys
button.
If the checkbox "Add missing tables" is selected, any table that should be dropped before any of the
selected tables (because of foreign key constraints) will be added to the list of tables to be dropped.
Delete data
Deletes all rows from the selected table(s) by sending a DELETE FROM table_name; to
the server for each selected table. If the DBMS supports TRUNCATE, this can be done with
TRUNCATE as well. Using TRUNCATE is usually faster, as no transaction state is maintained.
The list of tables is sorted according to the sort order in the table list. If the tables have foreign key
constraints, you can re-order them to be processed in the correct order by clicking on the Check
foreign keys button.
If the check box "Add missing tables" is selected, any table that should be deleted before any of the
selected tables (because of foreign key constraints) will be added to the list of tables.
ALTER script
After you have changed the name of a table in the list of objects, you can generate and run a SQL
script that will apply that change to the database.
For details please refer to the section Changing table definitions.
The data in the tab can be edited just like the data in the main window. To add or delete rows, you can either use the
buttons on the toolbar in the upper part of the data display, or the popup menu. To edit a value in a field, simply double
click that field, start typing while the field has focus (yellow border) or hit F2 while the field has focus.
Another example is to replace the retrieval of XML columns. To configure the DbExplorer to convert Oracle's XMLTYPE
to a string, the following line in workbench.settings is necessary:
workbench.db.oracle.selectexpression.xmltype=extract(${column}, '/').getClobVal()
To convert DB2's XML type to a string, the following configuration can be used:
workbench.db.db2.selectexpression.xml=xmlserialize(${column} AS CLOB)
The column name (as displayed in the result set) will usually be generated by the DBMS and will most probably not
contain the real column name. In order to see the real column name you can supply a column alias in the configuration.
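For example, assuming a standard SQL alias can simply be appended to the configured expression (this is an
assumption, not a confirmed syntax), the Oracle configuration shown above could be extended like this:
workbench.db.oracle.selectexpression.xmltype=extract(${column}, '/').getClobVal() AS ${column}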
To display the procedure's source code SQL Workbench/J uses its own SQL queries. For most popular DBMS systems
the necessary queries are built into the application. If the procedure source is not displayed for your DBMS, please
contact the author.
Functions inside Oracle packages will be listed separately on the left side, but the source code will contain all functions/
procedures from that package.
Two different implementations of the search are available: server side and client side.
The client side search retrieves every row from the server, compares the retrieved values for each row and keeps the
rows where at least one column matches the defined search criteria.
As opposed to the server side search, this means that every row from the selected table(s) will be sent from the database
server to the application. For large tables where only a small number of the rows will match the search value, this can
increase the processing time substantially.
As the searching is done on the client side, it can also "search" data types that cannot be used in a
LIKE query, such as CLOB, DATE or INTEGER.
The search criteria are defined similarly to the definition of a filter for a result set. For every column, its value will be
converted to a character representation. The resulting string value will then be compared according to the defined
comparator and the entered search value. If at least one column's value matches, the row will be displayed. The
comparison is always done case-insensitively. The contents of BLOB columns will never be searched.
The character representation that is used is based on the default formatting options from the Options Window. This
means that e.g. a DATE column will be converted according to the standard formatting options before the comparison is
done.
The client side search is also available through the WbGrepData command.
The elements of each part of the tree are only loaded when the node is expanded for the first time.
Export data
This exports the data from the selected table(s). This is identical to the function in the DbExplorer.
Count rows
This will count the rows for each selected table object. The row counts will be shown in parentheses
next to the table name. This is the same functionality as the WbRowCount command.
Drop
Drops the selected objects. This is identical to the function in the DbExplorer.
Delete data
Deletes all rows from the selected table(s) by sending a DELETE FROM table_name; to the
server for each selected table. This is identical to the function in the DbExplorer.
The time zone should be specified relative to GMT and not with a logical name. If you are in Germany and DST is
active, you need to use -Duser.timezone=GMT+2. Specifying -Duser.timezone=Europe/Berlin usually
does not work.
When using the Windows launcher you have to prefix the parameter with -J to identify it as a parameter for the Java
runtime, not for the application.
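For example (assuming the launcher executable is named SQLWorkbench.exe; the actual name may differ):
java -Duser.timezone=GMT+2 -jar sqlworkbench.jar
SQLWorkbench.exe -J-Duser.timezone=GMT+2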
By default Oracle's JDBC driver does not return comments made on columns or tables (COMMENT ON ..). Thus your
comments will not be shown in the database explorer.
To enable the display of column comments, you need to pass the property remarksReporting to the driver.
In the profile dialog, click on the Extended Properties button. Add a new property in the following window with the
name remarksReporting and the value true. Now close the dialog by clicking on the OK button.
Turning on this feature slows down the retrieval of table information, e.g. in the Database Explorer.
When you have comments defined in your Oracle database and use the WbSchemaReport command, then you have to
enable the remarks reporting, otherwise the comments will not show up in the report.
29.2. MySQL
29.3.4. The Microsoft Driver throws an Exception when using SET SHOWPLAN_ALL
If, when displaying an execution plan using SET SHOWPLAN_ALL ON, the following error is thrown: The TDS
protocol stream is not valid. Unexpected token TDS_COLMETADATA (0x81)., please set
"Max. Rows" to 0 for that SQL panel. Apparently the driver cannot handle showing the execution plan and having the
result limited.
If you experience intermittent "Connection closed" errors when running SQL statements, please verify that
charsets.jar is part of your JDK/JRE installation. This file is usually installed in jre\lib\charsets.jar.
29.5. PostgreSQL
Encrypt passwords
If this option is enabled, the password stored within a connection profile will be encrypted. Whether
the password should be stored at all can be selected in the profile itself.
This option provides only very limited security. As the source code for SQL
Workbench/J is freely available, the algorithm to decrypt the passwords stored in this
way can easily be extracted to retrieve the plain text passwords.
Scroll tabs
This option controls the behavior of the tab display, if more tabs are opened than can be displayed in
the current width of the window.
If this option is enabled, the tabs are always displayed in a single row. If too many tabs are open, the
row can be scrolled to display the tabs that are not visible.
If this option is disabled, the tabs are displayed in multiple rows, so that all tabs are always visible.
Log Level
With this option you can control the level of information written to the application log. The most
verbose level is DEBUG. With ERROR only severe errors (either resulting from running a user
command or from an internal error) are written to the application log.
When using Log4J as the logger, this will change the log level of the root logger.
File format
This property controls the line terminator used when a file is saved by the editor. Changing this
property affects the next save operation.
History size
The number of statements per tab which should be stored in the statement history. Remember that
the full text of the editor (together with the selection and cursor information) is always stored in the
history. If you have large amounts of text in the editor and set this number quite high, be aware of the
memory consumption this might create.
Files in history
If this option is enabled, the content of external files is also stored in the statement history.
Electric scroll
Electric scrolling is the automatic scrolling of the editor when clicking into lines close to the upper or
lower end of the editor window. If you click inside the defined number of lines at the upper or lower
end, then the editor will scroll this line into the center of the visible area. The default is set to 3, which
means that if you click into (visible) line 1, 2 or 3 of the editor, this line will be centered in the display.
Show statement and allow retry - this includes the error message and the complete SQL statement
that failed. It allows you to edit and re-submit the statement.
Alternate Delimiter
This option defines the default alternate delimiter. You can override this default in the connection
profile, to use different delimiters for different DBMS. For details see using the alternate delimiter.
Highlight errors
If "Highlight errors" is enabled then the statement that generated an error is highlighted after
execution.
This does not influence the behavior when running scripts in batch mode or when using
the WbInclude command.
Selected text
The color that is used to highlight selected text.
Data font
The font that is used to display result sets. This includes the object list and results in the DbExplorer.
Message font
The font that is used in the message pane of the SQL window.
Standard font
The standard font that is used for menus, labels, buttons etc.
Filter by quicksearch
When this option is enabled, only those entries are shown in the popup that match the entered values
in the quick search.
When this option is selected, the filename that is loaded in the editor tab will be stored
in the workspace. The next time the workspace is loaded the file is opened as well. This
is the default setting.
Content only
When this option is selected, only the content of the editor tab is saved (just like any
other editor tab), but the link to the filename is removed. The next time the workspace
is loaded, the file will not be opened.
Nothing
Neither the content, nor the filename will be saved. The next time the workspace is
loaded, the editor tab will be empty.
Bold header
If this option is enabled, the name of the columns in the result is shown with a bold font, instead of
the regular data font.
NULL string
The specified value will be displayed instead of NULL values in the result of a SQL statement.
Append results
This option defines the default behavior for appending results when a new editor tab is opened.
Number alignment
This controls the alignment of numbers in the result grid.
Sort Locale
When you sort the result set, character values will be sorted case-sensitively by default. This is caused
by the compareTo() method available in the Java environment, which sorts upper case characters
before lower case characters. With the "Sort Locale" option you can select which
language rules should be applied while sorting. Note that sorting with a locale is slower than using the
"Default" setting.
Not every character column is displayed in multi-line mode. The
default setting is to always display CLOB columns as multi-line. VARCHAR (and CHAR) columns will
only be displayed in multi-line mode if they can hold more than 250 characters. This limit can be
changed.
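For example, assuming the limit is controlled through the property workbench.gui.display.multilinethreshold (the
property name is an assumption), it could be changed like this:
WbSetConfig workbench.gui.display.multilinethreshold=500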
Letter Description
y      Year (Number)
a      AM/PM marker
m      Minute in hour
s      Second in minute
S      Milliseconds
Z      Time zone (RFC 822)
Decimal symbol
The character which is used as the decimal separator when displaying numbers.
Decimal digits
Define the maximum number of digits which will be displayed for numeric columns. This only affects
the display of the number, not the storage or retrieval. Internally they are still stored as the DBMS
returned them. To see the internal value, leave the mouse cursor over the cell. The tool tip which is
displayed will contain the number as it was returned by the JDBC driver. When exporting data or
copying it to the clipboard, the real value will be used.
If this value is set to 0 (zero), values will be displayed with as many digits as available.
Default PK Map
This property defines a mapping file for primary key columns. The information from that file is
read whenever the primary keys for a table cannot be obtained from the database. For a detailed
description on how to define extra primary key columns, please refer to the WbDefinePk command.
DB Explorer as Tab
The Database Explorer can either be displayed as a separate window or inside the main window as
another tab. If this option is selected, the Db Explorer will be displayed inside the main window.
If the option Retrieve DB Explorer is checked as well, the current database schema will be retrieved
upon starting SQL Workbench/J.
By default triggers are shown only in the details of a table. If the option "Show trigger panel" is
selected, an additional panel will be displayed in the DbExplorer that displays all triggers in the
database independently of their table.
Show focus
When this option is selected, a rectangle indicating the currently focused panel will be displayed, to
indicate the component that will receive keystrokes, e.g. shortcuts such as Ctrl-R.
Partial match
If this option is enabled, then any text that is typed into the quick filter will be matched anywhere in
the object name. It is equivalent to typing *foo* into the quick filter. If this option is enabled and
a wildcard is part of the value, then only that wildcard is used. Using foo* for the filter while this
option is enabled, shows all objects that start with foo.
This option is only available when the use of regular expressions in the quick filter is disabled.
Separator
If you select to display the current profile's name and group, you can select the character that
separates the two names.
Editor Filename
If the current editor tab contains an external file, you can choose if and which information about the
file should be displayed in the window title. You can display nothing, only the filename or the full
path information about the current file. The information will be displayed behind the current profile
and workspace name.
Columns in SELECT
This property defines the number of columns the formatter puts on one line when formatting a SELECT
statement. The default of 1 (one) will put each column into a separate line:
SELECT p.name,
p.firstname,
a.city,
a.zip
FROM person p
JOIN address a ON p.person_id = a.person_id;
If this is set to 2, this would result in the following formatted SELECT:
SELECT p.name, p.firstname,
a.city, a.zip
FROM person p
JOIN address a ON p.person_id = a.person_id;
Columns in INSERT
This property defines the number of columns the formatter puts on one line when formatting an
INSERT statement. A value of one will list each column in a separate line in the INSERT part and
the VALUES part.
Columns in UPDATE
This property defines the number of columns the formatter puts on one line when formatting an
UPDATE statement. A value of 1 (one) will put each column into a separate line:
UPDATE person
SET firstname = 'Arthur',
lastname = 'Dent'
WHERE id = 42;
With a value of 2, the above example would be formatted as follows:
UPDATE person
SET firstname = 'Arthur', lastname = 'Dent'
WHERE id = 42;
Keywords
This option defines if standard SQL keywords are generated in upper case, lower case or left
unchanged.
Identifiers
This option defines if identifiers (table names, column names, ...) are generated in upper case, lower
case or left unchanged.
Functions
This option defines if the names of SQL functions are generated in upper case, lower case or left
unchanged. This does not apply to user-written functions, only standard functions available for the
current DBMS.
JOIN wrapping
This option controls how conditions for JOIN operators are generated.
Never
The JOIN condition is always kept on a single line:
SELECT *
FROM person p
JOIN address a ON p.person_id = a.person_id;
Always
The JOIN condition is always written on a new line:
SELECT *
FROM person p
JOIN address a
ON p.person_id = a.person_id;
Multiple conditions
The JOIN condition is generated on multiple lines only if the join involves more than
one condition:
SELECT *
FROM person p
JOIN address a ON p.person_id = a.person_id
JOIN address_details ad
ON ad.address_id = a.address_id;
Executable
This is the full path to the formatter's program.
Command line
The command line configures the parameters passed to the formatter. The input file for the formatter
can be specified by using the placeholder ${wbin}. If no input file is specified on the command
line, the SQL statement will be passed through stdin. If the formatter writes the output to a file, the
placeholder ${wbout} can be used in the command line. If no output file is specified the result will
be read from stdout of the process.
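For example, using a hypothetical formatter executable named sqlformat that reads an input file and writes the result
to an output file (both the program name and its option are illustrations only):
sqlformat --output=${wbout} ${wbin}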
Enabled
This option can be used to turn off the use of a formatter without deleting the definition.
Format INSERTs
If formatting of INSERTs is enabled, generated INSERT statements are formatted using
the SQL formatter before they are displayed.
Format DELETEs
If formatting of DELETEs is enabled, generated DELETE statements are formatted using
the SQL formatter before they are displayed.
When you switch the current Look & Feel, you will need to restart the application to activate the new look and feel.
Note that if you switch the current Look & Feel it will be changed, regardless of whether you close the options dialog
using Cancel or OK.
32.1. DBID
DBMS specific settings are controlled through properties that contain a DBMS specific value, called the DBID. This
DBID is displayed in the connection info dialog (right click on the connection URL in the main window, then choose
"Connection Info").
The DBID is also reported in the log file with log level INFO.
If the description for a property in this chapter refers to the "DBID", then this value has to be used.
If the DBID is part of a property key this will be referred to as [dbid] in this chapter.
When using WbSetConfig you can use the value [dbid] inside the property name and it will get
replaced with the current DBID automatically. The following command changes the property named
workbench.db.postgresql.ddlneedscommit if the current connection is against a PostgreSQL database:
WbSetConfig workbench.db.[dbid].ddlneedscommit=true
Default: true
To avoid blocking of the table list retrieval, the isolation level used in the DbExplorer can be switched to
READ_UNCOMMITTED for DBMS that support this. This is e.g. necessary for Microsoft SQL Server as an
uncommitted DDL statement from a different connection can block the SELECT statement that retrieves the table
information.
The isolation level will only be changed if Separate connection per tab is enabled.
For Microsoft SQL Server the timeout waiting for such a lock can be configured as an alternative.
Default values:
For Microsoft SQL Server: true
workbench.db.objectinfo.includedeps
workbench.db.[dbid].objectinfo.includedeps
If Object info is invoked, this setting controls if dependent objects (indexes, triggers) are also displayed for tables. This
setting serves as a default for all DBMS. Displaying dependent objects can also be controlled on a per-DBMS basis by adding
the DBID to the property key. The value without the DBID serves as a default setting for all DBMS.
Default: false
workbench.db.objectinfo.includefk
workbench.db.[dbid].objectinfo.includefk
If Object info is invoked, this setting controls if foreign key constraints are also displayed when dependent objects are
displayed for tables. This setting serves as a default for all DBMS. When adding the DBID to the property key, this is
controlled on a per-DBMS level.
Default: false
When opening the DataPumper as a separate window it will connect to the current profile as the source connection. If
you do not want the DataPumper to connect automatically, set this property to false.
Default: true
COMMIT/ROLLBACK behaviour
Property: workbench.db.[dbid].usejdbccommit
Possible values: true, false
Some DBMS return an error when COMMIT or ROLLBACK is sent as a regular command through the JDBC interface. If
the DBMS is listed here, the JDBC functions commit() or rollback() will be used instead.
Default: false
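For example, to enable this for the DBMS of the current connection, using the [dbid] placeholder described above:
WbSetConfig workbench.db.[dbid].usejdbccommit=true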
No default
Default:
oracle.jdbc.driver.OracleDriver,oracle.jdbc.OracleDriver,org.postgresql.Driver,org.hsqldb.
Filtering synonyms
Property: workbench.db.[dbid].exclude.synonyms
The database explorer and the auto completion can display (Oracle public) synonyms. Some of these are usually not
of interest to the end user. Therefore the list of displayed synonyms can be controlled. This property defines a regular
expression. Each synonym that matches this regular expression will be excluded from the list presented in the GUI.
Default value (for Oracle): ^AQ\\$.*|^MGMT\\$.*|^GV\\$.*|^EXF\\$.*|^KU\\$_.*|^WM\\$.*|
^MRV_.*|^CWM_.*|^CWM2_.*|^WK\\$_.*|^CTX_.*
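For example, the following line in workbench.settings would hide all synonyms whose names start with AQ$ or
PUBLIC_ (the pattern is only an illustration):
workbench.db.oracle.exclude.synonyms=^AQ\\$.*|^PUBLIC_.*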
Note that enabling savepoints can drastically reduce the performance of the import.
Default value: false
BIGINT = -5
BINARY = -2
BIT = -7
BLOB = 2004
BOOLEAN = 16
CHAR = 1
NCHAR = -15
CLOB = 2005
NCLOB = 2011
DATE = 91
DECIMAL = 3
DOUBLE = 8
FLOAT = 6
INTEGER = 4
LONGVARBINARY = -4
LONGVARCHAR = -1
LONGNVARCHAR = -16
NUMERIC = 2
REAL = 7
SMALLINT = 5
TIME = 92
TIMESTAMP = 93
TINYINT = -6
VARBINARY = -3
VARCHAR = 12
NVARCHAR = -9
ROWID = -8
SQLXML = 2009
The Oracle driver does not report the type of NVARCHAR2 columns correctly. They are returned as Types.OTHER.
If this property is enabled, then SQL Workbench/J also uses its own SELECT statement to retrieve the table
definition.
Default value: true
SQL Workbench/J will use the table function fn_listextendedproperty to read the extended property defined by this
configuration setting in order to retrieve remarks.
Default value: MS_DESCRIPTION
Default:
workbench.db.ignore.oracle=quit,exit,whenever,spool,rem,clear,break,btitle,column,change,r
By default the WbInclude command can be shortened using the @ sign. This behaviour is disabled for MS SQL to
avoid conflicts with parameter definitions in stored procedures. This property contains a list of DBIDs for which this
should be enabled. To enable this for all DBMS, simply use * as the value for this property.
Default: oracle, rdb, hsqldb, postgresql, mysql, adaptive_server_anywhere,
cloudscape, apache_derby
Log level
Property: workbench.log.level
Set the log level for the log file. Valid values are
DEBUG
INFO
WARN
ERROR
Default: INFO
Log format
Property: workbench.log.format
Define the elements that are included in log messages. The following placeholders are supported:
{type}
{timestamp}
{message}
{error}
{source}
{stacktrace}
This property does not define the layout of the message, only the elements that are logged.
If the log level is set to DEBUG, the stacktrace will always be displayed even if it is not included in the format string.
If you want more control over the log file and the format of the message, please switch the logging to use Log4J.
Default: {type} {timestamp} {message} {error}
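For example, to include the source element as well, the format could be changed with WbSetConfig (the element
order is only an illustration):
WbSetConfig workbench.log.format={timestamp} {type} {source} {message} {error}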
If this is set to true the SQL queries used to retrieve DBMS specific meta data (such as view/procedure/trigger source,
defined triggers/views) will be logged with level INFO.
This can be used to debug customized SQL statements for DBMSs that are not (yet) pre-configured.
Default: false
If the Log4J classes are not found, the built-in logging will be used (see above).
When Log4J logging is enabled, none of the logging properties described in the previous section will be used. You have
to configure everything through log4j.xml.
When using Help → Show log file with Log4J enabled, and you have configured Log4J to write to multiple files, only
the first file will be shown.
When SQL Workbench/J initializes the logging environment, it also adds two system properties that can be used to define
the logfile relative to the configuration or the installation directory:
workbench.config.dir contains the full path to the configuration directory
workbench.install.dir contains the full path to the directory where sqlworkbench.jar is located
These properties can be used to put the logfile into a directory relative to the configuration or installation directory,
without the need to hardcode the directory name in log4j.xml.
A sample log4j.xml can be found in the scripts directory of the SQL Workbench/J distribution.
The system properties that are set by SQL Workbench/J to point to the configuration and installation directory (see
above) can also be used in the log4j.xml file.
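As an illustration, a file appender in log4j.xml could reference these properties like this (a minimal sketch, assuming
Log4J 1.x, which substitutes ${...} references with system properties):
<!-- write the logfile into the configuration directory -->
<appender name="logfile" class="org.apache.log4j.FileAppender">
  <param name="file" value="${workbench.config.dir}/workbench.log"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
  </layout>
</appender>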
workbench.db.postgresql.retrieve.create.index.query=select pg_get_indexdef('%fq_index_name
Using Oracle's DBMS_METADATA to retrieve the index source is controlled through an Oracle specific configuration
property.
The properties file can contain multiple profiles, each property key has to start with the prefix profile. The second
element of the key is a unique identifier for the profile that is used to combine the keys for one profile together. This
identifier can be any combination of digits and characters. The identifier is case sensitive.
The last element of the key is the actual profile property.
A minimal definition of a profile in a properties file, could look like this:
profile.042.name=Local Postgres
profile.042.driverclass=org.postgresql.Driver
profile.042.url=jdbc:postgresql://localhost/postgres
profile.042.username=arthur
profile.042.password=dent
profile.042.driverjar=postgresql-9.4-1203.jdbc41.jar
In the above example the identifier 042 is used. The actual value is irrelevant. It is only important that all properties for
one profile have the same identifier. You can also use any other combination of digits and characters.
For each profile the following properties can be defined. The property name listed in the following table is the last
element for each key in the properties file.
Key         Value
name        The name of the profile.
url         The JDBC URL of the connection.
username    The username used to connect.
password    The password used to connect.
drivername  The name of a JDBC driver definition.
driverjar   This specifies the jar file that contains the JDBC
            driver. If driverjar is specified, drivername is
            ignored. If the filename is not specified as an
            absolute file, it is assumed to be relative to the
            location of the properties file. Either this
            parameter or drivername is mandatory. Defining the
            driver jar in this way is not supported when
            running in GUI mode. Drivers managed through the
            GUI will always be saved in WbDrivers.xml.
autocommit  Enable autocommit for the connection.
fetchsize   The default fetch size for the connection.
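Building on the minimal example above, the optional keys could be added like this (the values are only illustrations):
profile.042.autocommit=false
profile.042.fetchsize=1000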
Index
B
Batch files
connecting, 79
defining variables, 82
setting SQL Workbench/J configuration properties, 82
specify SQL script, 79
starting SQL Workbench/J, 79
C
Clipboard
export result to, 57
import data from, 59
Command line
connection profile, 20
JDBC connection, 21
parameters, 19
Configuration
advanced configuration properties, 214
change advanced configuration properties, 166
JDBC driver, 24
Connection profile, 27
autocommit, 28
connection URL, 28
create, 27
default fetch size, 28
delete, 27
extended properties, 29
separate connection, 30
separate session, 30
timeout, 28
Customize
DbExplorer DDL generation, 232
D
DB2
Column comments not displayed, 189
Problems, 188
Table comments not displayed, 189
DbExplorer
customize DDL generation, 232
prevent locking, 217
show all triggers, 204
DDL
Execute DDL statements, 44
DML
select values for foreign key columns, 68
E
Editing data
deleting rows, 55
deleting rows which are referenced through a foreign key, 69
F
Foreign keys
editing values of foreign key columns, 68
Update foreign key columns, 68
I
Import
clipboard, 59
csv, 106
Excel, 106
flat files, 106
OpenOffice, 106
parameters, 106
result set, 59
tab separated, 106
XML, 106
XSLT, 106
J
Java runtime
Java not found on Windows, 16
JDBC Connection
connection properties, 29
JDBC driver
class name, 24
jar file, 24
library, 24
license file, 24
sample URL, 24
L
Liquibase
M
Microsoft SQL Server
Incorrect value for DATE columns, 186
JDBC URL properties, 188
Locking problems, 187
lock timeout for DbExplorer, 225
prevent locking in DbExplorer, 217
Problems, 186
Problem when running SHOWPLAN_ALL, 187
Sequence increments twice, 188
WbCopy memory problem, 188
WbExport memory problem, 188
Windows authentication, 187
MySQL
display table comments in DbExplorer, 186
problems, 185
O
Options dialog
dialog too small, 183
Oracle
autotrace, 72
check for pending transactions, 71
database comments, 184
DATE datatype, 203
DBMS_METADATA, 73
dbms_output, 165
No views displayed in DbExplorer, 184
Problems, 184
show system information, 72
tablespace information, 225
Tables with underscores not treated correctly, 184
P
PostgreSQL
.pgpass, 70
check for pending transactions, 70
COPY, 70
libpq, 29
pgpass, 29
pgpass.conf, 70
Problems, 190
WbCopy memory problem, 190
WbExport memory problem, 190
Problems
Context menu not displayed, 183
create stored procedure, 182
create trigger, 182
dialog too small, 183
driver not found, 182
Excel export not possible, 183
GUI freezes, 183
IBM DB2, 188
S
SQL
change the statement delimiter, 44
Starting
Java runtime not found on Windows, 16
Statement delimiter
change the statement delimiter, 44
Stored procedures
create stored procedure, 44
T
Triggers
create trigger, 44
show all triggers in DbExplorer, 204
V
Variables
define on command line, 20
definition, 75
use in batch files, 82
W
Windows
Java not found, 16
using the launcher, 16