DB2 Version 9
for Linux, UNIX, and Windows
Command Reference
SC10-4226-00
Before using this information and the product it supports, be sure to read the general information under Notices.
Edition Notice

This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.

You can order IBM publications online or through your local IBM representative.
• To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
• To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide

To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU (426-4968).

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 1993, 2006. All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About This Book
Who Should Use this Book
How this Book is Structured

db2gpmap - Get distribution map
db2hc - Start health center
db2iauto - Auto-start instance
db2iclus - Microsoft cluster server
db2icrt - Create instance
db2idrop - Remove instance
db2ilist - List instances
db2imigr - Migrate instance
db2inidb - Initialize a mirrored database
db2inspf - Format inspect results
db2isetup - Start instance creation interface
db2iupdt - Update instances
db2jdbcbind - DB2 JDBC package binder
db2ldcfg - Configure LDAP environment
db2level - Show DB2 service level
db2licm - License management tool
db2listvolumes - Display GUIDs for all disk volumes
db2logsforrfwd - List logs required for rollforward recovery
db2look - DB2 statistics and DDL extraction tool
db2ls - List installed DB2 products and features
db2move - Database movement tool
db2mqlsn - MQ listener
db2mscs - Set up Windows failover utility
db2mtrk - Memory tracker
db2nchg - Change database partition server configuration
db2ncrt - Add database partition server to an instance
db2ndrop - Drop database partition server from an instance
db2osconf - Utility for kernel parameter values
db2pd - Monitor and troubleshoot DB2 database
db2pdcfg - Configure DB2 database for problem determination behavior
db2perfc - Reset database performance values
db2perfi - Performance counters registration utility
db2perfr - Performance monitor registration tool
db2rbind - Rebind all packages
db2_recon_aid - Reconcile multiple tables
db2relocatedb - Relocate database
db2rfpen - Reset rollforward pending state
db2rspgn - Response file generator (Windows)
db2sampl - Create sample database
db2set - DB2 profile registry
db2setup - Install DB2
db2sql92 - SQL92 compliant SQL statement processor
db2sqljbind - SQLJ profile binder
db2sqljcustomize - SQLJ profile customizer
db2sqljprint - SQLJ profile printer
db2start - Start DB2
db2stop - Stop DB2
db2support - Problem analysis and environment collection tool
db2swtch - Switch default DB2 copy
db2sync - Start DB2 synchronizer
db2systray - Start DB2 system tray
db2tapemgr - Manage log files on tape
db2tbst - Get table space state
db2trc - Trace
db2uiddl - Prepare unique index conversion to V5 semantics
db2undgp - Revoke execute privilege
db2unins - Uninstall DB2 database product
db2untag - Release container tag
db2xprt - Format trap file
doce_deinstall - Uninstall DB2 Information Center
doce_install - Install DB2 Information Center
disable_MQFunctions
enable_MQFunctions
installFixPack - Update installed DB2 products
setup - Install DB2
sqlj - SQLJ translator
DESCRIBE
DETACH
DROP CONTACT
DROP CONTACTGROUP
DROP DATABASE
DROP DBPARTITIONNUM VERIFY
DROP TOOLS CATALOG
ECHO
EDIT
EXPORT
FORCE APPLICATION
GET ADMIN CONFIGURATION
GET ALERT CONFIGURATION
GET AUTHORIZATIONS
GET CLI CONFIGURATION
GET CONNECTION STATE
GET CONTACTGROUP
GET CONTACTGROUPS
GET CONTACTS
GET DATABASE CONFIGURATION
GET DATABASE MANAGER CONFIGURATION
GET DATABASE MANAGER MONITOR SWITCHES
GET DESCRIPTION FOR HEALTH INDICATOR
GET HEALTH NOTIFICATION CONTACT LIST
GET HEALTH SNAPSHOT
GET INSTANCE
GET MONITOR SWITCHES
GET RECOMMENDATIONS FOR HEALTH INDICATOR
GET ROUTINE
GET SNAPSHOT
HELP
HISTORY
IMPORT
INITIALIZE TAPE
INSPECT
LIST ACTIVE DATABASES
LIST APPLICATIONS
LIST COMMAND OPTIONS
LIST DATABASE DIRECTORY
LIST DATABASE PARTITION GROUPS
LIST DBPARTITIONNUMS
LIST DCS APPLICATIONS
LIST DCS DIRECTORY
LIST DRDA INDOUBT TRANSACTIONS
LIST HISTORY
LIST INDOUBT TRANSACTIONS
LIST NODE DIRECTORY
LIST ODBC DATA SOURCES
LIST PACKAGES/TABLES
LIST TABLESPACE CONTAINERS
LIST TABLESPACES
LIST UTILITIES
LOAD
LOAD QUERY
MIGRATE DATABASE
PING
PRECOMPILE
PRUNE HISTORY/LOGFILE
PUT ROUTINE
QUERY CLIENT
QUIESCE
QUIESCE TABLESPACES FOR TABLE
QUIT
REBIND
RECONCILE
RECOVER DATABASE
REDISTRIBUTE DATABASE PARTITION GROUP
REFRESH LDAP
REGISTER
REGISTER XMLSCHEMA
REGISTER XSROBJECT
REORG INDEXES/TABLE
REORGCHK
RESET ADMIN CONFIGURATION
RESET ALERT CONFIGURATION
RESET DATABASE CONFIGURATION
RESET DATABASE MANAGER CONFIGURATION
RESET MONITOR
RESTART DATABASE
RESTORE DATABASE
REWIND TAPE
ROLLFORWARD DATABASE
RUNCMD
RUNSTATS
SET CLIENT
SET RUNTIME DEGREE
SET TABLESPACE CONTAINERS
SET TAPE POSITION
SET UTIL_IMPACT_PRIORITY
SET WRITE
START DATABASE MANAGER
START HADR
STOP DATABASE MANAGER
STOP HADR
TAKEOVER HADR
TERMINATE
UNCATALOG DATABASE
UNCATALOG DCS DATABASE
UNCATALOG LDAP DATABASE
UNCATALOG LDAP NODE
UNCATALOG NODE
UNCATALOG ODBC DATA SOURCE
UNQUIESCE
UPDATE ADMIN CONFIGURATION
UPDATE ALERT CONFIGURATION
UPDATE ALTERNATE SERVER FOR DATABASE
UPDATE ALTERNATE SERVER FOR LDAP DATABASE
UPDATE CLI CONFIGURATION
UPDATE COMMAND OPTIONS
UPDATE CONTACT
UPDATE CONTACTGROUP
UPDATE DATABASE CONFIGURATION
UPDATE DATABASE MANAGER CONFIGURATION
UPDATE HEALTH NOTIFICATION CONTACT LIST
UPDATE HISTORY
UPDATE LDAP NODE
UPDATE MONITOR SWITCHES
Chapter 4. Using command line SQL statements and XQuery statements
Appendix A. How to read the syntax diagrams
Appendix B. Naming conventions
System Commands
Command parameters: A description of the parameters available to the command.
Usage notes: Other information.
Related reference: A cross-reference to related information.
Command parameters:
-h | -?
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.
-on
   Enables autostarting of the DB2 administration server. The next time the system is restarted, the DB2 administration server will be started automatically.
-off
   Disables autostarting of the DB2 administration server. The next time the system is restarted, the DB2 administration server will not be started automatically.
Related tasks:
• Configuring the DB2 administration server (DAS) in Administration Guide: Implementation
• Creating a DB2 administration server (DAS) in Administration Guide: Implementation
• Starting and stopping the DB2 administration server (DAS) in Administration Guide: Implementation
Command parameters:
-u DASuser
   DASuser is the user ID under which the DAS will be created. The DAS will be created under the /home/DASuser/das directory.
-d
   Enters debug mode, for use with DB2 Service.
Related tasks:
• Configuring the DB2 administration server (DAS) in Administration Guide: Implementation
• Creating a DB2 administration server (DAS) in Administration Guide: Implementation
• Starting and stopping the DB2 administration server (DAS) in Administration Guide: Implementation
Command parameters:
-d
   Enters debug mode, for use with DB2 Service.
Usage notes:
• The dasdrop command is located in the DB2DIR/instance directory, where DB2DIR is the location where the current version of the DB2 database product is installed.

Related tasks:
• Removing the DB2 administration server (DAS) in Administration Guide: Implementation
Command parameters:

For Linux and UNIX systems:
-d
   Enters debug mode, for use with DB2 Service.
-p path-override
   Indicates that the DAS profile should be moved as well. path-override is a user-specified path to be used instead of the default DAS profile path.

Examples:
On Linux and UNIX systems:
DB2DIR/instance/dasmig
Related tasks:
• Configuring the DB2 administration server (DAS) in Administration Guide: Implementation
• Migrating the DB2 Administration Server (DAS) in Migration Guide

Related reference:
• dasupdt - Update DAS on page 8
Command parameters:

For Linux and UNIX-based systems:
-d
   Sets the debug mode, which is used for problem analysis.
-D
   Moves the DAS from a higher code level on one path to a lower code level installed on another path.
-h or -?
   Displays usage information.

For Windows operating systems:
-h
   Displays usage information.

-p path-override
   Indicates that the DAS profile should be moved as well. path-override is a user-specified path to be used instead of the default DAS profile path.
If a DAS is running in one DB2 installation path and you want to move the DAS to another installation path at a lower level (but the two installation paths are at the same version of DB2 database system), issue the following command from the installation path at the lower level:
dasupdt -D
Related tasks:
• Configuring the DB2 administration server (DAS) in Administration Guide: Implementation
• Updating a DB2 administration server (DAS) configuration for discovery in Administration Guide: Implementation
• Updating the DB2 administration server (DAS) on UNIX in Administration Guide: Implementation
Command parameters:
-F feature-name
   Specifies the removal of one feature. To indicate uninstallation of multiple features, specify this parameter multiple times. For example: -F feature1 -F feature2.
-a
   Removes all installed DB2 products in the current location.
-l log-file
   Specifies the log file. The default log file is /tmp/db2_deinstall.log$$, where $$ is the process ID.
-t trace-file
   Turns on the debug mode. The debug information is written to the file name specified as trace-file.
-h | -?
   Displays usage information.
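The $$ in the default log file name is the shell convention for the current process ID, so repeated uninstall runs each get a distinct log. A minimal sketch of how such a name expands (illustration only; no DB2 command is run):

```shell
# $$ expands to the PID of the running shell, producing a distinct
# default log name such as /tmp/db2_deinstall.log12345 per invocation.
logfile="/tmp/db2_deinstall.log$$"
echo "$logfile"
```
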
Examples:
• To uninstall all the DB2 database products that are installed in a location (DB2DIR), issue the db2_deinstall command located in the DB2DIR/install directory:
DB2DIR/install/db2_deinstall -a
Related reference: v db2ls - List installed DB2 products and features on page 155
Command parameters:
-b install-path
   Specifies the path where the DB2 product is to be installed. install-path must be a full path name, and its maximum length is limited to 128 characters. This parameter is mandatory when the -n parameter is specified.
-p productID
   Specifies the DB2 product to be installed. productID does not require DB2 as a prefix. This parameter is case insensitive and is mandatory when the -n parameter is specified.
-c image-location
   Specifies the product image location. To indicate multiple image locations, specify this parameter multiple times. For example: -c CD1 -c CD2. This parameter is mandatory only if the -n parameter is specified, your install requires more than one CD, and your images are not set up for automatic discovery. Otherwise, you are prompted for the location of the next CD at the time it is needed. For details on automatic discovery associated with multiple installation images, see Multiple CD installation (Linux and UNIX).
-n
   Specifies non-interactive mode.
-L language
   Specifies national language support. You can install a non-English version of a DB2 product. However, you must run this command from the product CD, not the National Language Pack CD. By default, English is always installed, so English does not need to be specified. When more than one language is required, this parameter is mandatory. To indicate multiple languages, specify this parameter multiple times. For example, to install both French and German, specify -L FR -L DE. This parameter is case insensitive.
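Since -L must be repeated once per language, a wrapper script can assemble the flags from a list. A sketch under the assumption that the command is only echoed for inspection, not executed:

```shell
# Build the repeated -L flags for a multi-language (French and German)
# non-interactive install of the "ese" product, then show the command.
langs="FR DE"
flags=""
for l in $langs; do
  flags="$flags -L $l"
done
echo "./db2_install -p ese -n$flags"
```

This prints `./db2_install -p ese -n -L FR -L DE`, matching the repeated-flag form described above.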
Examples:
• To install from an image in /mnt/cdrom, and to be prompted for all needed input, issue:
cd /mnt/cdrom
./db2_install
To install DB2 Enterprise Server Edition to /db2/newlevel, from an image in /mnt/cdrom, non-interactively in English, issue:
cd /mnt/cdrom
./db2_install -p ese -b /db2/newlevel -n
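The -b constraints noted earlier (a full path name of at most 128 characters) can be pre-checked before launching a non-interactive install. A hedged sketch; the function name check_install_path is hypothetical and not part of db2_install:

```shell
# Pre-flight check mirroring the documented -b rules:
# the install path must be absolute and no longer than 128 characters.
check_install_path() {
  p=$1
  case $p in
    /*) ;;                                # absolute path: fine
    *) echo "not absolute"; return 1 ;;
  esac
  if [ ${#p} -gt 128 ]; then
    echo "too long"; return 1
  fi
  echo "ok"
}

check_install_path /db2/newlevel   # prints "ok"
```
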
Related tasks: v Installing a DB2 product using the db2_install or doce_install command (Linux and UNIX) in Installation and Configuration Supplement
Command syntax (condensed from the syntax diagram):
   START
   STOP [/FORCE]
   CREATE [/USER: user-account /PASSWORD: user-password]
   DROP
   SETID user-account user-password
   SETSCHEDID sched-user sched-password
   -?
   -q
Command parameters:
START
   Start the DAS.
STOP [/FORCE]
   Stop the DAS. The FORCE option is used to force the DAS to stop, regardless of whether or not it is in the process of servicing any requests.
CREATE [/USER: user-account /PASSWORD: user-password]
   Create the DAS. If a user name and password are specified, the DAS will be associated with this user account. If the specified values are not valid, the utility returns an authentication error. The specified user account must be a valid SQL identifier, and must exist in the security database. It is recommended that a user account be specified to ensure that all DAS functions can be accessed. To create a DAS on UNIX operating systems, use the dascrt command.
DROP
   Deletes the DAS. To drop a DAS on UNIX operating systems, you must use the dasdrop command.
Chapter 1. System Commands
Related tasks:
• Tools catalog database and DB2 administration server (DAS) scheduler setup and configuration in Administration Guide: Implementation

Related reference:
• dascrt - Create a DB2 administration server on page 4
• dasdrop - Remove a DB2 administration server on page 5
Command syntax (condensed from the flattened syntax diagrams):

db2-object-options:
   { QUERY-options | EXTRACT-options | DELETE-options | VERIFY-options }
   [ COMPRLIB decompression-library ] [ COMPROPTS decompression-options ]
   [ VERBOSE ]
   [ DATABASE | DB database_name ] [ DBPARTITIONNUM db-partition-number ]
   [ PASSWORD password ] [ NODENAME node_name ] [ OWNER owner ]
   [ WITHOUT PROMPTING ]

QUERY-options:
   QUERY [ TABLESPACE | FULL ] [ NONINCREMENTAL | INCREMENTAL | DELTA ]
         [ SHOW INACTIVE ] [ CHAIN n ]

EXTRACT-options (recovered fragments):
   [ SHOW INACTIVE ] [ SUBSET ] [ TAKEN AT timestamp ] [ sn1 [ AND sn2 ] ] [ CHAIN ]

DELETE-options:
   DELETE [ TABLESPACE | FULL ] [ KEEP n | OLDER ... ]

VERIFY-options:
   VERIFY [ verify-options ] [ TABLESPACE | FULL ] [ LOADCOPY ]
          [ NONINCREMENTAL | INCREMENTAL | DELTA ] [ SHOW INACTIVE ]
          [ TAKEN AT timestamp ]

verify-options:
   ALL | CHECK | DMS | HEADER | LFH | TABLESPACES | SGF | HEADERONLY |
   TABLESPACESONLY | SGFONLY | OBJECT | PAGECOUNT

access-control-options (recovered fragments):
   { GRANT | REVOKE | QUERYACCESS } ... ON ... FOR
   { ALL | DATABASE | DB database_name } [ PASSWORD password ]
Command parameters:
CHECK
   Displays results of checkbits and checksums.
DMS
   Displays information from headers of DMS table space data pages.
HEADER
   Displays the media header information.
HEADERONLY
   Displays the same information as HEADER, but only reads the 4 K media header information from the beginning of the image. It does not validate the image.
LFH
   Displays the log file header (LFH) data.
OBJECT
   Displays detailed information from the object headers.
PAGECOUNT
   Displays the number of pages of each object type found in the image.
SGF
   Displays the automatic storage paths in the image.
SGFONLY
   Displays only the automatic storage paths in the image, but does not validate the image.
TABLESPACES
   Displays the table space details, including container information, for the table spaces in the image.
TABLESPACESONLY
   Displays the same information as TABLESPACES, but does not validate the image.
TABLESPACE
   Includes only table space backup images.
FULL
   Includes only full database backup images.
NONINCREMENTAL
   Includes only non-incremental backup images.
INCREMENTAL
   Includes only incremental backup images.
DELTA
   Includes only incremental delta backup images.
Notes: 1. <startPage> is an object page number that is object-relative. For DMS table spaces:
D <tbspID> <objType> <startPage> <numPages>
Notes: 1. <objType> is only needed if verifying DMS load copy images. 2. <startPage> is an object page number that is pool-relative. For log files:
L <log num> <startPos> <numPages>
The default output file is extractPage.out. You can override the default output file name by setting the DB2EXTRACTFILE environment variable to a full path.
TAKEN AT timestamp
   Specifies a backup image by its time stamp.
KEEP n
   Deactivates all objects of the specified type except for the most recent n, by time stamp.
OLDER THAN timestamp or n days
   Specifies that objects with a time stamp earlier than timestamp or n days ago will be deactivated.
COMPRLIB decompression-library
   Indicates the name of the library to be used to perform the decompression. The name must be a fully qualified path referring to a file on the server. If this parameter is not specified, DB2 will attempt to use the library stored in the image.
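The KEEP n semantics (retain the n most recent images by time stamp, deactivate the rest) can be illustrated with plain shell sorting over the 14-digit time stamps. The time stamp values below are invented for the illustration; no db2adutl call is made:

```shell
# Given hypothetical backup time stamps, KEEP 2 retains the two most
# recent by time stamp; everything older would be deactivated.
images="20031209184403 20031211090000 20031210120000"
n=2
printf '%s\n' $images | sort -r | head -n "$n"
```

This prints 20031211090000 and 20031210120000, the two newest time stamps.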
1. The following is sample output from the command db2adutl query issued following the backup operation:
Query for database RAWSAMPL

Retrieving FULL DATABASE BACKUP information.
   1 Time: 20031209184403, Oldest log: S0000050.LOG, Sessions: 1

Retrieving INCREMENTAL DATABASE BACKUP information.
   No INCREMENTAL DATABASE BACKUP images found for RAWSAMPL

Retrieving DELTA DATABASE BACKUP information.
   No DELTA DATABASE BACKUP images found for RAWSAMPL

Retrieving TABLESPACE BACKUP information.
   No TABLESPACE BACKUP images found for RAWSAMPL

Retrieving INCREMENTAL TABLESPACE BACKUP information.
   No INCREMENTAL TABLESPACE BACKUP images found for RAWSAMPL

Retrieving DELTA TABLESPACE BACKUP information.
   No DELTA TABLESPACE BACKUP images found for RAWSAMPL

Retrieving LOCAL COPY information.
   No LOCAL COPY images found for RAWSAMPL

Retrieving log archive information.
   Log file: S0000050.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.46.13
   Log file: S0000051.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.46.43
   Log file: S0000052.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.47.12
   Log file: S0000053.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.50.14
   Log file: S0000054.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.50.56
   Log file: S0000055.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.52.39
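Output in this shape is easy to post-process; for instance, the archived log file names can be pulled out with a POSIX sed expression. A sketch with two sample lines inlined for illustration:

```shell
# Extract the log file names from db2adutl query output lines of the
# form "Log file: S0000050.LOG, Chain Num: 0, ...".
out='Log file: S0000050.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.46.13
Log file: S0000051.LOG, Chain Num: 0, DB Partition Number: 0, Taken at 2003-12-09-18.46.43'
printf '%s\n' "$out" | sed -n 's/^Log file: \([^,]*\),.*/\1/p'
```

This prints S0000050.LOG and S0000051.LOG, one per line.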
2. The following is sample output from the command db2adutl delete full taken at 20031209184503 db rawsampl:
Query for database RAWSAMPL

Retrieving FULL DATABASE BACKUP information.
   Taken at: 20031209184503  DB Partition Number: 0  Sessions: 1
   Do you want to delete this file (Y/N)? y
   Are you sure (Y/N)? y

Retrieving INCREMENTAL DATABASE BACKUP information.
   No INCREMENTAL DATABASE BACKUP images found for RAWSAMPL

Retrieving DELTA DATABASE BACKUP information.
   No DELTA DATABASE BACKUP images found for RAWSAMPL
The following is sample output from the command db2adutl query issued following the operation that deleted the full backup image. Note the timestamp for the backup image.
Query for database RAWSAMPL
3. The following is sample output from the command db2adutl queryaccess for all
Node     User      Database Name   Type
---------------------------------------
bar2     jchisan   sample          B
<all>    <all>     test            B
---------------------------------------
Access Types: B - Backup images   L - Logs   A - Both
Usage notes:
One parameter from each group below can be used to restrict which backup image types are included in the operation:
Granularity:
• FULL - include only full database backup images.
• TABLESPACE - include only table space backup images.
Cumulativeness:
• NONINCREMENTAL - include only non-incremental backup images.
• INCREMENTAL - include only incremental backup images.
• DELTA - include only incremental delta backup images.
Compatibilities:
For compatibility with versions earlier than Version 8:
• The keyword NODE can be substituted for DBPARTITIONNUM.
Command syntax (condensed from the flattened syntax diagram):

db2advis [ -a userid[/passwd] ] [ -m advise-type ] [ -x ] [ -u ]
         [ -l disk-limit ] [ -t max-advise-time ] [ -k ] [ -f ] [ -r ]
         [ -n schema-name ] [ -q schema-name ]
         [ -b tablespace-name ] [ -c tablespace-name ]
         [ -h ] [ -p ] [ -o outfile ] [ -nogen ] [ -delim char ]
Command parameters:
   The frequency can be updated any number of times in the file. This option cannot be specified with the -g, -s, -qp, or -w options.
-g
   Specifies the retrieval of the SQL statements from a dynamic SQL snapshot. If combined with the -p command parameter, the SQL statements are kept in the ADVISE_WORKLOAD table. This option cannot be specified with the -i, -s, -qp, or -w options.
-qp
   Specifies that the workload is coming from Query Patroller. The start-time and end-time options are timestamps used to check against the TIME_COMPLETED field of the DB2QP.TRACK_QUERY_INFO table. If no start-time and end-time timestamps are given, all rows with a D (for done) in the completion_status column of the table are retrieved. If only start-time is given, the rows returned are those with TIME_COMPLETED greater than or equal to the start-time value. In addition, if the end-time value is given, the rows returned are also restricted to those with TIME_COMPLETED less than or equal to the end-time value. This option cannot be used with the -w, -s, -i, or -g options.
-a userid/passwd
   Name and password used to connect to the database. The slash (/) must be included if a password is specified. A password should not be specified if the -x option is specified.
-m advise-type
   Specifies the type of recommendation the advisor will return. Any combination of I, M, C, and P (in uppercase or lowercase) can be specified. For example, db2advis -m PC will recommend partitioning and MDC tables. If -m P or -m M is used in a partitioned database environment, the advise_partition table is populated with the final partition recommendation. The possible values are:
   I  Recommends new indexes. This is the default.
   M  Recommends new materialized query tables (MQTs).
   C  Recommends conversion to multidimensional clustering (MDC) tables.
   P  Recommends database partitioning.
-x
   Specifies that the password will be read from the terminal or through user input.
-u
   Specifies that the advisor will consider the recommendation of deferred MQTs. Incremental MQTs will not be recommended. When this option is specified, comments in the DDL CLP script indicate which of the MQTs could be converted to immediate MQTs. If immediate MQTs are recommended in a partitioned database environment, the default distribution key is the implied unique key for the MQT.
-l disk-limit
   Specifies the number of megabytes available for all recommended indexes and materialized views in the existing schema. Specify -1 to use the maximum possible size. The default value is 20% of the total database size.
-t max-advise-time
   Specifies the maximum allowable time, in minutes, to complete the operation. If no value is specified for this option, the operation will continue until it is completed. To specify an unlimited time, enter a value of zero. The default is zero.
-k
   Specifies to what degree the workload will be compressed. Compression is done to allow the advisor to reduce the complexity of its execution while achieving results similar to those it could provide when the full workload is considered. HIGH indicates the advisor will concentrate on a small subset of the workload. MED indicates the advisor will concentrate on a medium-sized subset of the workload. LOW indicates the advisor will concentrate on a larger subset of the workload. OFF indicates that no compression will occur. The default is MED.
-f
   Drops previously existing simulated catalog tables.
-r
   Specifies that detailed statistics should be used for the virtual MQTs and for the partitioning selection. If this option is not specified, the default is to use optimizer statistics for MQTs. Although the detailed statistics might be more accurate, the time to derive them will be significant, and will cause the db2advis execution time to be greater. The -r command parameter uses sampling to obtain relevant statistics for MQTs and partitioning. For MQTs, when the sample query either fails or returns no rows, the optimizer estimates are used.
-n schema-name
   Specifies the qualifying name of simulation catalog tables, and the qualifier for the new indexes and MQTs. The default schema name is the caller's user ID, except for catalog simulation tables, where the default schema name is SYSTOOLS. The default is for new indexes to inherit the schema name of the index's base.
-q schema-name
   Specifies the qualifying name of unqualified names in the workload. It
-o outfile
   Saves the script to create the recommended objects in outfile.
-nogen
   Indicates that generated columns are not to be included in multidimensional clustering recommendations.
-delim char
   Indicates the statement delimiter character char in a workload file input. The default is ';'.

Examples:
1. In the following example, the utility connects to database PROTOTYPE, and recommends indexes for table ADDRESSES without any constraints on the solution:
db2advis -d prototype -s "select * from addresses a where a.zip in (93213, 98567, 93412) and (company like 'IBM%' or company like '%otus')"
2. In the following example, the utility connects to database PROTOTYPE, and recommends indexes that will not exceed 53MB for queries in table ADVISE_WORKLOAD. The workload name is equal to production. The maximum allowable time for finding a solution is 20 minutes.
db2advis -d prototype -w production -l 53 -t 20
3. In the following example, the input file db2advis.in contains SQL statements and a specification of the frequency at which each statement is to be executed:
--#SET FREQUENCY 100
SELECT COUNT(*) FROM EMPLOYEE;
SELECT * FROM EMPLOYEE WHERE LASTNAME='HAAS';
The utility connects to database SAMPLE, and recommends indexes for each table referenced by the queries in the input file. The maximum allowable time for finding a solution is 5 minutes:
db2advis -d sample -f db2advis.in -t 5
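A workload file in this format can also be generated by a script. The sketch below writes a two-statement workload using the --#SET FREQUENCY convention shown above; the file path /tmp/db2advis.in is an assumption for the illustration:

```shell
# Write a small db2advis workload file with per-statement frequencies,
# then count the frequency directives as a sanity check.
cat > /tmp/db2advis.in <<'EOF'
--#SET FREQUENCY 100
SELECT COUNT(*) FROM EMPLOYEE;
--#SET FREQUENCY 1
SELECT * FROM EMPLOYEE WHERE LASTNAME='HAAS';
EOF
grep -c 'SET FREQUENCY' /tmp/db2advis.in
```

The final grep prints 2, one count per frequency directive; the file is then ready to pass to db2advis with -f.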
4. In the following example, MQTs are created in table space SPACE1 and the simulation table space is SPACE2. The qualifying name for unqualified names in the workload is SCHEMA1, and the schema name in which the new MQTs will be recommended is SCHEMA2. The workload compression being used is HIGH and the disk space is unlimited. Sample statistics are used for the MQTs. Issuing the following command will recommend MQTs and, in a partitioned database environment, indexes and partitioning will also be recommended.
db2advis -d prototype -w production -l -1 -m M -b space1 -c space2 -k HIGH -q schema1 -n schema2 -r
To get the recommended MQTs, as well as indexes, partitioning and MDCs on both MQT and base tables, issue the command specifying a value of IMCP for the -m option as follows:
db2advis -d prototype -w production -l -1 -m IMCP -b space1 -c space2 -k HIGH -q schema1 -n schema2 -r
Usage notes: Because these features must be set up before you can run the DDL CLP script, database partitioning, multidimensional clustering, and clustered index recommendations are commented out of the DDL CLP script that is returned. It is up to you to transform your tables into the recommended DDL. One example of doing this is to use the ALTER TABLE stored procedure but there are restrictions associated with it in the same way the RENAME command is restricted. For dynamic SQL statements, the frequency with which statements are executed can be obtained from the monitor as follows: 1. Issue
db2 reset monitor for database <database-alias>
If the -p parameter is used with the -g parameter, the dynamic SQL statements obtained will be placed in the ADVISE_WORKLOAD table with a generated workload name that contains a timestamp. The default frequency for each SQL statement in a workload is 1, and the default importance is also 1. The generate_unique() function assigns a unique identifier to the statement, which can be updated by the user to be a more meaningful description of that SQL statement. Any db2advis error information can also be found in the db2diag.log.
Command Reference
(db2audit syntax diagram, recovered from the original:)

db2audit { configure | describe | extract | flush | prune { all | date YYYYMMDDHH [ pathname path-with-temp-space ] } | start | stop }

Audit Configuration:
   scope   status   errortype { audit | normal }

Audit Extraction:
   file output-file | delasc [ delimiter load-delimiter ]   category   database database-name   status { success | failure }
Command parameters:
configure
   This parameter allows the modification of the db2audit.cfg configuration file in the instance's security subdirectory. Updates to this file can occur even when the instance is shut down. Updates occurring when the instance is active dynamically affect the auditing being done by the DB2 database across all database partitions. The configure action on the configuration file causes the creation of an audit record if the audit facility has been started and the audit category of auditable events is being audited. The following are the possible actions on the configuration file:
   v RESET. This action causes the configuration file to revert to the initial configuration (where SCOPE is all of the categories except CONTEXT,
flush
   This parameter forces any pending audit records to be written to the audit log. Also, if the audit facility is in an error state, the audit state in the engine is reset from "unable to log" to "ready to log".
prune
   This parameter allows for the deletion of audit records from the audit log. If the audit facility is active and the audit category of events has been specified for auditing, then an audit record will be logged after the audit log is pruned. The following are the possible options that can be used when pruning:
   v ALL. All of the audit records in the audit log are to be deleted.
   v DATE yyyymmddhh. The user can specify that all audit records that occurred on or before the date/time specified are to be deleted from the audit log. The user may optionally supply a pathname which the audit facility will use as a temporary space when pruning the audit log. This temporary space allows for the pruning of the audit log when the disk it resides on is full and does not have enough space to allow for a pruning operation.
start
   This parameter causes the audit facility to begin auditing events based on the contents of the db2audit.cfg file. In a partitioned DB2 database instance, auditing will begin on all database partitions when this clause is specified. If the audit category of events has been specified for auditing, then an audit record will be logged when the audit facility is started.
stop
   This parameter causes the audit facility to stop auditing events. In a partitioned DB2 database instance, auditing will be stopped on all database partitions when this clause is specified. If the audit category of events has been specified for auditing, then an audit record will be logged when the audit facility is stopped.
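The on-or-before selection that prune DATE performs can be illustrated with a short sketch. This is a hypothetical Python illustration of the documented semantics only: the record layout and function name are invented here, and the real audit log is a binary file managed by the facility itself.

```python
# Sketch of the "prune date yyyymmddhh" on-or-before semantics.
# The (timestamp, payload) record structure is hypothetical.
def prune_on_or_before(records, cutoff):
    """Delete records whose 'yyyymmddhh' timestamp is on or before cutoff.

    Fixed-width yyyymmddhh strings compare correctly as plain strings,
    so a lexicographic comparison is also a chronological one.
    """
    return [(ts, data) for ts, data in records if ts > cutoff]

log = [("2006013108", "rec1"), ("2006020110", "rec2"), ("2006020111", "rec3")]
kept = prune_on_or_before(log, "2006020110")
print(kept)  # [('2006020111', 'rec3')] -- rec1 and rec2 are pruned
```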
Usage Notes:
v The audit facility must be stopped and started explicitly. When starting, the audit facility uses existing audit configuration information. Since the audit facility is independent of the DB2 database server, it will remain active even if the instance is stopped. In fact, when the instance is stopped, an audit record may be generated in the audit log.
v Ensure that the audit facility has been turned on by issuing the db2audit start command before using the audit utilities.
v There are different categories of audit records that may be generated. In the description of the categories of events available for auditing (below), you should notice that following the name of each category is a one-word keyword used to identify the category type. The categories of events available for auditing are:
(db2batch syntax diagram; option fragments recovered from the original diagram:)

-m parameters_file   -t delcol   -r result_file[,summary_file]   -c { on | off }
-i   -g { on | off }   -w { 32768 | col_width }   -time { on | off }
-cli [ cache-size ]   -msw switches { hold | on | off }   { RR | RS | CS | UR }
-o options   -v { off | on }   -s { on | off }   -q { off | on | del }   -l x   -h
Command parameters:
-d dbname
   An alias name for the database against which SQL statements and XQuery statements are to be applied. If this option is not specified, the value of the DB2DBDFT environment variable is used.
-f file_name
   Name of an input file containing SQL statements and XQuery statements. The default is standard input.
   Identify comment text by adding two hyphens in front of the comment text, that is, -- <comment>. All text following the two hyphens until the end of the line is treated as a comment. Strings delimited with single or double quotes may contain two adjacent hyphens, and are treated as string constants rather than comments. To include a comment in the output, mark it as follows: --#COMMENT <comment>.
   A block is a group of SQL statements and XQuery statements that are treated as one. By default, information is collected for all of the statements in the block at once, rather than one at a time. Identify the beginning of a block of queries as follows: --#BGBLK. Identify the end of a block of queries as follows: --#EOBLK. Blocks of queries can be included in a repeating loop by specifying a repeat count when defining the block, as follows: --#BGBLK [repeat_count]. Statements in the block will be prepared only on the first iteration of the loop. You can use #PARAM directives or a parameter file to specify the parameter values for a given statement and a given iteration of a block. See the -m option below for details.
   Specify one or more control options as follows: --#SET <control option> <value>. Valid control options are:
   ROWS_FETCH
      Number of rows to be fetched from the answer set. Valid values are -1 to n. The default value is -1 (all rows are to be fetched).
   ROWS_OUT
      Number of fetched rows to be sent to output. Valid values are -1 to n. The default value is -1 (all fetched rows are to be sent to output).
   PERF_DETAIL perf_detail
      Specifies the level of performance information to be returned.
      Valid values are:
      0   Do not return any timing information or monitoring snapshots.
      1   Return elapsed time only.
      2   Return elapsed time and a snapshot for the application.
      The default value is 1. A value >1 is only valid on DB2 Version 2 and DB2 database servers, and is not currently supported on host machines.
   ERROR_STOP
      Specifies whether or not db2batch should stop running when a non-critical error occurs. Valid values are:
      no    Continue running when a non-critical error occurs. This is the default option.
      yes   Stop running when a non-critical error occurs.
   DELIMITER
      A one- or two-character end-of-statement delimiter. The default value is a semicolon (;).
   SLEEP
      Number of seconds to sleep. Valid values are 1 to n.
   PAUSE
      Prompts the user to continue.
   SNAPSHOT snapshot
      Specifies the monitoring snapshots to take. See the -mss option for the snapshots that can be taken.
   TIMESTAMP
      Generates a time stamp.
   TIMING
      Print timing information. Valid values are:
      ON    Timing information is printed. This is the default.
      OFF   Timing information is not printed.
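The comment and directive conventions described above (--#COMMENT, --#BGBLK/--#EOBLK, --#SET) amount to a simple line classification while the input file is read. The following Python sketch is purely illustrative of those conventions; it is not db2batch's parser, and the function name is invented.

```python
# Illustrative classifier for db2batch-style input lines (not the real parser).
import re

DIRECTIVE = re.compile(r"^--#(COMMENT|BGBLK|EOBLK|SET)\s*(.*)$")

def classify(line):
    """Return (kind, detail) for one line of a db2batch input file."""
    stripped = line.strip()
    m = DIRECTIVE.match(stripped)
    if m:                               # --#COMMENT / --#BGBLK / --#EOBLK / --#SET
        return m.group(1), m.group(2).strip()
    if stripped.startswith("--"):       # plain comment: ignored entirely
        return "COMMENT_IGNORED", stripped[2:].strip()
    return "SQL", stripped              # ordinary statement text

print(classify("--#SET ROWS_FETCH 10"))  # ('SET', 'ROWS_FETCH 10')
print(classify("--#BGBLK 100"))          # ('BGBLK', '100')
print(classify("select * from demo;"))   # ('SQL', 'select * from demo;')
```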
-a userid/passwd Specifies the user ID and password used to connect to the database. The slash (/) must be included. -m parameters_file Specifies an input file with parameter values to bind to the SQL statement parameter markers before executing a statement. The default is to not bind parameters.
Each parameter is defined like a SQL constant, and is separated from other parameters by whitespace. Non-delimited text represents a number; plain single-quote delimited text represents a single-byte character string; x or X prefixed text enclosed in single quotes represents a binary string encoded as pairs of hex digits; g, G, n, or N prefixed text enclosed in single quotes represents a graphic string composed of double-byte characters; and NULL (case insensitive) represents a null value. To specify XML data, use single-quote delimited text, such as '<last>Brown</last>'.

Parameter Input File Format: Line X lists the set of parameters to supply to the Xth SQL statement that is executed in the input file. If blocks of statements are not repeated, then this corresponds to the Xth SQL statement that is listed in the input file. A blank line represents no parameters for the corresponding SQL statement. The number of parameters and their types must agree with the number of parameters and the types expected by the SQL statement.

Parameter Directive Format:
--#PARAM [single | start:end | start:step:end] [...]
Each parameter directive specifies a set of parameter values from which one random value is selected for each execution of the query. Sets are composed of both single parameter values and parameter value ranges. Parameter value ranges are specified by placing a colon (:) between two valid parameter values, with whitespace being an optional separator. A third parameter value can be placed between the start and end values to be used as a step size which overrides the default. Each parameter range is the equivalent of specifying the single values of start, start+step, start+2*step, ..., start+n*step, where n is chosen such that start+n*step <= end but start+(n+1)*step > end. While parameter directives can be used to specify sets of values for any type of parameter (even NULL), ranges are only supported on numerical parameter values (integers and decimal numbers).

-t delcol
   Specifies a single character column separator. Specify -t TAB for a tab column delimiter or -t SPACE for a space column delimiter. By default, a space is used when the -q on option is set, and a comma is used when the -q del option is set.
-r result_file[,summary_file]
   An output file that will contain the query results. If the optional summary_file is specified, it will contain the summary table. The default is standard output.
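The range-expansion rule for --#PARAM (start, start+step, ..., up to the last value not exceeding end, with one random member chosen per execution) can be sketched as follows. This is an illustration of the documented semantics, not db2batch's implementation; the function names and the tuple notation for ranges are invented here.

```python
# Illustrative expansion of --#PARAM value sets (not db2batch's actual code).
import random

def expand_range(start, end, step=1):
    """start, start+step, start+2*step, ... up to the last value <= end."""
    values, v = [], start
    while v <= end:
        values.append(v)
        v += step
    return values

def pick_param(spec):
    """spec mixes single values and (start, end[, step]) tuples; one random
    value is selected from the combined set per execution of the query."""
    pool = []
    for item in spec:
        pool.extend(expand_range(*item) if isinstance(item, tuple) else [item])
    return random.choice(pool)

print(expand_range(1, 10, 3))  # [1, 4, 7, 10] -- 13 would exceed end
```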
Chapter 1. System Commands
complete
   Measure the elapsed time to run each statement, where the prepare, execute, and fetch times are reported separately.
-g Specifies whether timing is reported by block or by statement. Valid values are:
   on    A snapshot is taken for the entire block and only block timing is reported in the summary table. This is the default.
   off   A snapshot is taken and summary table timing is reported for each statement executed in the block.
-w Specifies the maximum column width of the result set, with an allowable range of 0 to 2G. Data is truncated to this width when displayed, unless the data cannot be truncated. You can increase this setting to eliminate the warning CLI0002W and get a more accurate fetch time. The default maximum width is 32768 columns.
-time
   Specifies whether or not to report the timing information. Valid values are:
   on    Timing is reported. This is the default.
   off   Timing is not reported.
-cli [cache-size]
   Embedded dynamic SQL mode, previously the default mode for the db2batch command, is no longer supported. The command now runs only in CLI mode. The -cli option exists for backward compatibility; specifying it (including the optional cache-size argument) will not cause errors, but it is ignored internally.
-msw switch
   Sets the state of each specified monitor switch. You can specify any of the following: uow, statement, table, bufferpool, lock, sort, and timestamp. The special switch all sets all of the above switches. For each switch that you specify you must choose one of:
   hold   The state of the switch is unchanged. This is the default.
   on     The switch is turned on.
   off    The switch is turned off.
-mss snapshot Specifies the monitoring snapshots that should be taken after each statement or block is executed, depending on the -g option. More than one snapshot can be taken at a time, with the information from all snapshots combined into one large table before printing. The possible snapshots are: applinfo_all, dbase_applinfo, dcs_applinfo_all, db2, dbase, dbase_all, dcs_dbase, dcs_dbase_all, dbase_remote, dbase_remote_all, agent_id, dbase_appls, appl_all, dcs_appl_all, dcs_appl_handle, dcs_dbase_appls,
-o options
   Control options. Valid options are:
   f rows_fetch
      Number of rows to be fetched from the answer set. Valid values are -1 to n. The default value is -1 (all rows are to be fetched).
   r rows_out
      Number of fetched rows to be sent to output. Valid values are -1 to n. The default value is -1 (all fetched rows are to be sent to output).
   p perf_detail
      Specifies the level of performance information to be returned. Valid values are:
      0   Do not return any timing information or monitoring snapshots.
      1   Return elapsed time only.
      2   Return elapsed time and a snapshot for the application.
      3   Return elapsed time, and a snapshot for the database manager, the database, and the application.
      4   Return a snapshot for the database manager, the database, the application, and the statement (the latter is returned only if autocommit is off, and single statements, not blocks of statements, are being processed).
      5   Return a snapshot for the database manager, the database, the application, and the statement (the latter is returned only if autocommit is off, and single statements, not blocks of statements, are being processed). Also return a snapshot for the buffer pools, table spaces and FCM (an FCM snapshot is only available in a multi-database-partition environment).
   s error_stop
      Specifies whether or not db2batch should stop running when a non-critical error occurs. Valid values are:
      no    Continue running when a non-critical error occurs. This is the default option.
      yes   Stop running when a non-critical error occurs.
-v Verbose. Send information to standard error during query processing. The default value is off.
-s Summary Table. Provide a summary table for each query or block of queries, containing elapsed time with arithmetic and geometric means, the rows fetched, and the rows output.
-q Query output. Valid values are:
   off   Output the query results and all associated information. This is the default.
   on    Output only query results in non-delimited format.
   del   Output only query results in delimited format.
-l x
   Specifies the termination character (delimiter). The delimiter can be 1 or 2 characters. The default is a semicolon (;).
-h, -u, -?
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.

Examples:
1. The following is sample output from the command db2batch -d crystl -f update.sql
* Timestamp: Thu Feb 02 2006 10:06:13 EST
---------------------------------------------
* SQL Statement Number 1:
create table demo (c1 bigint, c2 double, c3 varchar(8));
* Elapsed Time is: 0.101091 seconds
---------------------------------------------
* SQL Statement Number 2:
insert into demo values (-9223372036854775808, -0.000000000000005, 'demo');
* Elapsed Time is: 0.002926 seconds

---------------------------------------------
* SQL Statement Number 3:
insert into demo values (9223372036854775807, 0.000000000000005, 'demodemo');
* Elapsed Time is: 0.005676 seconds
---------------------------------------------
* SQL Statement Number 4:
select * from demo;

C1                    C2                      C3
--------------------  ----------------------  --------
-9223372036854775808  -5.00000000000000E-015  demo
 9223372036854775807  +5.00000000000000E-015  demodemo
---------------------------------------------
* SQL Statement Number 5:
drop table demo;
* Elapsed Time is: 0.176135 seconds

* Summary Table:

Type      Number Repetitions Total Time (s) Min Time (s) Max Time (s) Arithmetic Mean Geometric Mean Row(s) Fetched Row(s) Output
--------- ------ ----------- -------------- ------------ ------------ --------------- -------------- -------------- -------------
Statement 1      1           0.101091       0.101091     0.101091     0.101091        0.101091       0              0
Statement 2      1           0.002926       0.002926     0.002926     0.002926        0.002926       0              0
Statement 3      1           0.005676       0.005676     0.005676     0.005676        0.005676       0              0
Statement 4      1           0.001104       0.001104     0.001104     0.001104        0.001104       2              2
Statement 5      1           0.176135       0.176135     0.176135     0.176135        0.176135       0              0

* Total Entries: 5
* Total Time: 0.286932 seconds
* Minimum Time: 0.001104 seconds
* Maximum Time: 0.176135 seconds
* Arithmetic Mean Time: 0.057386 seconds
* Geometric Mean Time: 0.012670 seconds
---------------------------------------------
* Timestamp: Thu Feb 02 2006 10:06:13 EST
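The totals in this sample output can be reproduced from the five per-statement elapsed times. The sketch below shows how the arithmetic and geometric means reported by db2batch are computed; the geometric mean is the nth root of the product of the times.

```python
# Reproducing the summary statistics from the sample db2batch output above.
import math

elapsed = [0.101091, 0.002926, 0.005676, 0.001104, 0.176135]

total = sum(elapsed)
arithmetic_mean = total / len(elapsed)
# Geometric mean: nth root of the product, computed via logs for stability.
geometric_mean = math.exp(sum(math.log(t) for t in elapsed) / len(elapsed))

print(f"Total Time: {total:.6f} seconds")              # 0.286932
print(f"Arithmetic Mean Time: {arithmetic_mean:.6f}")  # 0.057386
print(f"Geometric Mean Time: {geometric_mean:.5f}")    # 0.01267
```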
Command parameters:
-h   Display help information. When this option is specified, all other options are ignored, and only the help information is displayed.
-b   Display the bind file header.
-s   Display the SQL statements.
-v   Display the host variable declarations.
filespec
   Name of the bind file whose contents are to be displayed.

Related concepts:
v Binding embedded SQL packages to a database in Developing Embedded SQL Applications
v Displaying the contents of a bind file using the db2bfd tool in Troubleshooting Guide
Command parameters: -t Turns on the GUI trace and sends the output to a console window. On Windows operating systems, the db2ca command does not have a console window. Therefore, this option has no effect on Windows operating systems.
-tf filename
   Turns on the GUI trace and saves the output of the trace to the specified file. The output file is saved to <DB2 install path>\sqllib\tools on Windows operating systems and to /home/<userid>/sqllib/tools on Linux and UNIX-based systems.
-tcomms
   Limits tracing to communications events.
-tfilter filter
   Limits tracing to entries containing the specified filter or filters.

Related reference:
v CATALOG DATABASE on page 372
v CATALOG DCS DATABASE on page 375
v CATALOG TCPIP/TCPIP4/TCPIP6 NODE on page 387
v db2set - DB2 profile registry on page 245
v GET DATABASE CONFIGURATION on page 457
v RESET DATABASE CONFIGURATION on page 667
v UPDATE DATABASE CONFIGURATION on page 772
Command parameters: -h/-? Displays help text for the command syntax.
bind capture-file
   Binds the statements from the capture file and creates one or more packages.
-d database_alias
   Specifies the database alias for the database that will contain one or more packages.
-u userid
   Specifies the user ID to be used to connect to the data source. If a user ID is not specified, a trusted authorization ID is obtained from the system.
-p password
   Specifies the password to be used to connect to the data source.

Usage notes:
This command must be entered in lowercase on UNIX platforms, but can be entered in either lowercase or uppercase on Windows operating systems. This utility supports many user-specified bind options that can be found in the capture file. In order to change the bind options, open the capture file in a text editor. The SQLERROR(CONTINUE) and the VALIDATE(RUN) bind options can be used to create a package.
(db2cat syntax, recovered from the diagram and the parameter list below:)

db2cat -d dbname [ -h ] [ -l ] [ -n name ] [ -o outfile ] [ -p descriptor ] [ -vi versionID ] [ -s schema ] [ -t ] [ -z ] [ -v ] [ -x ] [ -cb ]
Command parameters:
-d dbname
   dbname is the name of the database for which the command will query the system catalogs.
-h   Displays usage information.
-l   Turns on case sensitivity for the object name.
-n name
   Specifies the name of the object.
-o outfile
   Specifies the name of the output file.
-p descriptor
   Specifies the name of the packed descriptor (pd) to display, where descriptor is one of the following:
   check     Display table check constraints packed descriptor.
   rel       Display referential integrity constraint packed descriptor.
   table     Display table packed descriptor.
   summary   Display summary table packed descriptor.
   trig      Display table trigger packed descriptor.
   view      Display view packed descriptor.
   remote    Display remote non-relational data sources packed descriptor.
   ast       Display materialized query table packed descriptor.
   server    Display server packed descriptor.
   auth      Display privileges held by this grantee on this object.
-vi versionID
   Specifies the version ID of the package packed descriptor. -vi is only valid when -p sysplan is specified. If versionID is omitted, the default is the empty string.
-s schema
   Specifies the name of the object schema.
-t   Displays terminal output.
-z   Disables keystroke prompt.
-v   Validates packed descriptor. This parameter is only valid for table packed descriptors.
-x   Validates table space extentsize in catalogs (does not require a table name).
-cb  Cleans orphan rows from SYSCAT.BUFFERPOOLNODES (does not require a table name).
Usage Notes:
v Table name and table schema may be supplied in LIKE predicate form, which allows percent sign (%) and underscore (_) to be used as pattern matching characters to select multiple sources with one invocation.
v Prompting will occur for all fields that are not supplied or are incompletely specified (except for the -h and -l options).
v If -o is specified without a file name, and -t is not specified, you will be prompted for a file name (the default name is db2cat.out).
v If neither -o nor -t is specified, you will be prompted for a file name (the default is terminal output).
v If -o and -t are both specified, the output will be directed to the terminal.

Related reference:
v System catalog views in SQL Reference, Volume 1
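The LIKE-predicate form mentioned in the first usage note treats % as matching any run of characters and _ as matching exactly one character. The following Python sketch illustrates that matching by translating the pattern into a regular expression; it is an illustration of LIKE semantics, not db2cat's code.

```python
# Sketch of SQL LIKE matching with % (any run) and _ (single character).
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern into an anchored regular expression."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))   # literal characters are escaped
    return re.compile("^" + "".join(parts) + "$")

def like_match(pattern, name):
    return like_to_regex(pattern).match(name) is not None

print(like_match("EMP%", "EMPLOYEE"))   # True
print(like_match("DEPT_", "DEPT1"))     # True
print(like_match("DEPT_", "DEPT10"))    # False: _ matches exactly one char
```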
(db2cc syntax diagram fragments: -ccf filename, -ic, -ict seconds)
Command parameters:
-rc   Opens the Replication Center.
-hc   Opens the Health Center.
-tc   Opens the Task Center.
-j    Opens the Journal.
-mv   Opens the Memory Visualizer.
-tm   Opens the Identify Indoubt Transaction Manager.
-icc  Opens the Information Catalog Manager.
-ca   Opens the Configuration Assistant.
-t    Turns on Control Center Trace for an initialization code. On Windows operating systems, the db2cc command does not have a console window. Therefore, this option has no effect on Windows operating systems.
-tf   Turns on Control Center Trace for an initialization code and saves the output of the trace to the specified file. The output file is saved to <DB2 install path>\sqllib\tools on Windows and to /home/<userid>/sqllib/tools on UNIX operating systems.
-tcomms Limits tracing to communications events. -tfilter filter Limits tracing to entries containing the specified filter or filters.
-ict seconds
   Idle Connection Timer. Closes any idle connections in the pools maintained by the Control Center after the number of seconds specified. The default timer is 30 minutes.
-h system
   Opens the Control Center in the context of a system.
-i instance
   Opens the Control Center in the context of an instance.
-d database
   Opens the Control Center in the context of a database.
-sub subsystem
   Opens the Control Center in the context of a subsystem.

Related reference:
v GET ADMIN CONFIGURATION on page 441
v RESET ADMIN CONFIGURATION on page 663
v UPDATE ADMIN CONFIGURATION on page 756
This utility is especially useful for exporting connectivity configuration information at workstations that do not have the DB2 Configuration Assistant installed, and in situations where multiple similar remote DB2 clients are to be installed, configured, and maintained (for example, cloning or making templates of client configurations). Authorization: One of the following: v sysadm v sysctrl Command syntax:
db2cfexp filename { TEMPLATE | BACKUP | MAINTAIN }
Command parameters:
filename
   Specifies the fully qualified name of the target export file. This file is known as a configuration profile.
TEMPLATE
   Creates a configuration profile that is used as a template for other instances of the same instance type (that is, client instance to client instance). The profile includes information about:
   v All databases, including related ODBC and DCS information
   v All nodes associated with the exported databases
   v Common ODBC/CLI settings
   v Common client settings in the database manager configuration
   v Common client settings in the DB2 registry.
Command parameters: filename Specifies the fully qualified name of the configuration profile to be imported. Valid import configuration profiles are profiles created by any DB2 database or DB2 Connect product using the Configuration Assistant, Control Center, or db2cfexp. Related tasks: v Configuring database connections using a client profile with the Configuration Assistant in Quick Beginnings for DB2 Clients v Exporting and importing a profile in Installation and Configuration Supplement
(db2chglibpath syntax diagram fragments: -search=search-expression, -replace=replace-expression, -show, -32, -64, -verbose | -v, -help | -h)
Command parameters:
-querypath
   Specifies that a query should be performed without altering the embedded library path in the binary.
-search=search-expression
   Specifies the expression to be searched for.
-replace=replace-expression
   Specifies the expression that the search-expression is to be replaced with.
-show
   Specifies that the search and replace operations are to be performed without actually writing the changes to the file(s).
-verbose
   Displays information about the operations that are being performed.
-help
   Displays usage information.
Examples: v To change the embedded runtime library search path value in the executable file named myexecutable from /usr/opt/db2_08_01/lib to /u/usr1/sqllib/lib32, issue:
db2chglibpath -search=/usr/opt/db2_08_01/lib -replace=/u/usr1/sqllib/lib32 /mypath/myexecutable
Note that the length of the new value must not exceed that of the original value.

Usage notes:
v This command is only to be used for updating DB2 database application executables and DB2 external routine shared library files when other methods for migrating applications and routines cannot be used or are unsuccessful. See the related links for topics on application and routine migration.
v This command is not supported under DB2 service contract agreements. It is provided as-is, and IBM is not responsible for its unintended or malicious use.
v This command does not create a backup of the shared library or executable file before modifying it. It is strongly recommended that a backup copy of the file be made prior to issuing this command.

Related tasks:
v Migrating 32-bit database applications to run on 64-bit instances in Migration Guide
v Migrating C, C++, and COBOL routines in Migration Guide
v Migrating embedded SQL and CLI applications in Migration Guide
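db2chglibpath overwrites the embedded path where it sits inside the binary, which is why the replacement value cannot be longer than the original. The following Python sketch illustrates only that constraint over a byte buffer, NUL-padding any leftover bytes; it is not the tool's implementation, and the function name is invented.

```python
# Why an in-place path replacement cannot grow the string: the bytes are
# overwritten where they sit, and any leftover space is NUL-padded so the
# file size never changes. Illustration only, not db2chglibpath itself.
def replace_embedded_path(data: bytes, search: bytes, replace: bytes) -> bytes:
    if len(replace) > len(search):
        raise ValueError("replacement must not be longer than the original")
    idx = data.find(search)
    if idx < 0:
        return data  # nothing to change
    padded = replace + b"\x00" * (len(search) - len(replace))
    return data[:idx] + padded + data[idx + len(search):]

binary = b"\x7fELF..\x00/usr/opt/db2_08_01/lib\x00..."
patched = replace_embedded_path(binary, b"/usr/opt/db2_08_01/lib",
                                b"/u/usr1/sqllib/lib32")
assert len(patched) == len(binary)  # the file size is unchanged
```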
Command parameters: -d Turns debug mode on. Use this option only when instructed by DB2 Support.
-f file-name Specifies a specific file name to update the runtime path. file-name should have the path name relative to the base of the current DB2 product install location. Examples: v To check all files under the DB2 product install path and do a runtime path update, issue:
<DB2 installation path>/install/db2chgpath
v To update the path for a specific file called libdb2.a, which is under the <DB2 installation path>/lib64 directory, issue:
<DB2 installation path>/install/db2chgpath -f lib64/libdb2.a
Related tasks: v Installing a DB2 product manually in Installation and Configuration Supplement v Manually installing payload files (Linux and UNIX) in Installation and Configuration Supplement
Command parameters:
-a   Displays all available information.
-c   Displays results of checkbits and checksums.
-cl decompressionLib Indicates the name of the library to be used to perform the decompression. The name must be a fully qualified path referring to a file on the server. If this parameter is not specified, DB2 will attempt to use the library stored in the image. If the backup was not compressed, the value of this parameter will be ignored. If the specified library cannot be loaded, the operation will fail. -co decompressionOpts Describes a block of binary data that will be passed to the initialization routine in the decompression library. DB2 will pass this string directly from the client to the server, so any issues of byte reversal or code page conversion will have to be handled by the decompression library. If the
Notes:
1. <startPage> is an object page number that is object-relative.

For DMS table spaces:
D <tbspID> <objType> <startPage> <numPages>

Notes:
1. <objType> is only needed if verifying DMS load copy images.
2. <startPage> is an object page number that is pool-relative.

L <log num> <startPos> <numPages>
The default output file is extractPage.out. You can override the default output file name by setting the DB2EXTRACTFILE environment variable to a full path.
-h   Displays media header information including the name and path of the image expected by the restore utility.
-H   Displays the same information as -h but only reads the 4K media header information from the beginning of the image. It does not validate the image. This option cannot be used in combination with any other options.
-l   Displays log file header (LFH) and mirror log file header (MFH) data.
-n   Prompt for tape mount. Assume one tape per device.
-o   Displays detailed information from the object headers.
-p   Displays the number of pages of each object type. This option will not show the number of pages for all different object types if the backup was done for DMS tablespaces data. It only shows the total of all pages as SQLUDMSTABLESPACEDATA. The object types for SQLUDMSLOBDATA and SQLUDMSLONGDATA will be zero for DMS tablespaces.
-s   Displays the automatic storage paths in the image.
-S   Displays the same information as -s but does not validate the image. This option cannot be used in combination with any other options.
-t   Displays table space details, including container information, for the table spaces in the image.
filename
   The name of the backup image file. One or more files can be checked at a time.

Notes:
1. If the complete backup consists of multiple objects, the validation will only succeed if db2ckbkp is used to validate all of the objects at the same time.
2. When checking multiple parts of an image, the first backup image object (.001) must be specified first.

Examples:

Example 1 (on UNIX platforms)
db2ckbkp SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.001
         SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.002
         SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.003

[1] Buffers processed: ##
[2] Buffers processed: ##
[3] Buffers processed: ##

Image Verification Complete - successful.
Example 2
db2ckbkp -h SAMPLE2.0.krodger.NODE0000.CATN0000.19990818122909.001

=====================
MEDIA HEADER REACHED:
=====================
Server Database Name           -- SAMPLE2
Server Database Alias          -- SAMPLE2
Client Database Alias          -- SAMPLE2
Timestamp                      -- 19990818122909
Database Partition Number      -- 0
Instance                       -- krodger
Sequence Number                -- 1
Release ID                     -- 900
Database Seed                  -- 65E0B395
DB Comments Codepage (Volume)  -- 0
DB Comment (Volume)            --
DB Comments Codepage (System)  -- 0
DB Comment (System)            --
Authentication Value           -- 255
Backup Mode                    -- 0
Include Logs                   -- 0
Compression                    -- 0
Backup Type                    -- 0
Backup Gran.                   -- 0
Status Flags                   -- 11
System Cats inc                -- 1
Catalog Database Partition No. -- 0
DB Codeset                     -- ISO8859-1
DB Territory                   --
LogID                          -- 1074717952
LogPath                        -- /home/krodger/krodger/NODE0000/SQL00001/SQLOGDIR
Backup Buffer Size             -- 4194304
Number of Sessions             -- 1
Platform                       -- 0
Usage notes: 1. If a backup image was created using multiple sessions, db2ckbkp can examine all of the files at the same time. Users are responsible for ensuring that the session with sequence number 001 is the first file specified. 2. This utility can also verify backup images that are stored on tape (except images that were created with a variable block size). This is done by preparing the tape as for a restore operation, and then invoking the utility, specifying the tape device name. For example, on UNIX based systems:
db2ckbkp -h /dev/rmt0
and on Windows:
db2ckbkp -d \\.\tape1
3. If the image is on a tape device, specify the tape device path. You will be prompted to ensure it is mounted, unless option -n is given. If there are multiple tapes, the first tape must be mounted on the first device path given. (That is the tape with sequence 001 in the header.) The default when a tape device is detected is to prompt the user to mount the tape. The user has a choice at the prompt. Here are the prompt and options (where the device specified is on device path /dev/rmt0):
Please mount the source media on device /dev/rmt0. Continue(c), terminate only this device(d), or abort this tool(t)? (c/d/t)
The user will be prompted for each device specified, and when the device reaches the end of tape. Related reference: v db2adutl - Managing DB2 objects within TSM on page 15
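The image file names in the examples end with a three-digit sequence number, and db2ckbkp expects the .001 part first. A small illustrative check of that ordering follows; the helper function is invented here and is not part of the tool.

```python
# Check that backup image parts are supplied in order 001, 002, ... with no
# gaps, as db2ckbkp requires. Illustrative helper only.
def check_sequence(filenames):
    seqs = [int(name.rsplit(".", 1)[-1]) for name in filenames]
    return seqs == list(range(1, len(seqs) + 1))

images = [
    "SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.001",
    "SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.002",
    "SAMPLE.0.krodger.NODE0000.CATN0000.19990817150714.003",
]
print(check_sequence(images))        # True
print(check_sequence(images[::-1]))  # False: .001 must come first
```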
Command Reference
Command parameters:
database
     Specifies an alias name of a database to be scanned.
-e   Specifies that all local cataloged databases are to be scanned.
-l   Specifies a log file to keep a list of errors and warnings generated for the scanned database.
-u   Specifies the user ID of the system administrator.
-p   Specifies the password of the system administrator's user ID.
Usage notes: When an instance is migrated with the db2imigr command, db2ckmig is implicitly called as part of the migration. If you choose to run db2ckmig manually, it must be run for each database after the DB2 instance is installed, but before the instance is migrated.
On Linux and UNIX-based systems, this utility is located in the DB2DIR/bin directory, where DB2DIR is the location where the DB2 copy is installed.
On Windows platforms, if you select the migrate option during installation, instances are migrated and the installation will prompt you to run db2ckmig. A message box will warn you that if you have a local database on your system, you should run db2ckmig from the CD (it is located in db2\Windows\Utilities). Once you see the message box, you can either choose to ignore the message or quit the installation process. Run db2ckmig and then continue the installation if there are no errors; otherwise, quit the installation, fix the error, and install again. If you select the Install New option instead, you will have to run db2imigr to migrate the instance, which in turn will also run db2ckmig.
db2ckmig will not run against databases which are cataloged as remote databases.
Related tasks: v Verifying that your databases are ready for migration in Migration Guide
Command parameters:
-d database name
     Specifies the alias name for the database that will be restored.
-t timestamp
     Specifies the timestamp for a backup image that will be incrementally restored.
-r   Specifies the type of restore that will be executed. The default is database. If TABLESPACE is chosen and no table space names are given, the utility looks into the history entry of the specified image and uses the table space names listed to do the restore.
-n tablespace name
     Specifies the name of one or more table spaces that will be restored. If a database restore type is selected and a list of table space names is specified, the utility will continue as a table space restore using the table space names given.
-h/-u/-?
     Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.
Examples:
db2ckrst -d mr -t 20001015193455 -r database
db2ckrst -d mr -t 20001015193455 -r tablespace
db2ckrst -d mr -t 20001015193455 -r tablespace -n tbsp1 tbsp2

> db2 backup db mr
Backup successful. The timestamp for this backup image is : 20001016001426
> db2 backup db mr incremental
Usage notes: The db2ckrst utility will not be enhanced for the rebuilding of a database. Due to the constraints of the history file, the utility will not be able to supply the correct list if several table spaces need to be restored from more than one image.
The database history must exist in order for this utility to be used. If the database history does not exist, specify the HISTORY FILE option in the RESTORE command before using this utility.
If the FORCE option of the PRUNE HISTORY command is used, you can delete entries that are required for automatic incremental restoration of databases. Manual restores will still work correctly. Use of this command can also prevent the db2ckrst utility from being able to correctly analyse the complete chain of required backup images. The default operation of the PRUNE HISTORY command prevents required entries from being deleted. It is recommended that you do not use the FORCE option of the PRUNE HISTORY command.
This utility should not be used as a replacement for keeping records of your backups.
Related tasks: v Restoring from incremental backup images in Data Recovery and High Availability Guide and Reference
Related reference: v PRUNE HISTORY/LOGFILE on page 607 v RESTORE DATABASE on page 675
Command parameters: None
Usage notes: DB2 Interactive CLI consists of a set of commands that can be used to design, prototype, and test CLI function calls. It is a programmer's testing tool provided for the convenience of those who want to use it, and IBM makes no guarantees about its performance. DB2 Interactive CLI is not intended for end users, and so does not have extensive error-checking capabilities.
Two types of commands are supported:
CLI commands
     Commands that correspond to (and have the same name as) each of the function calls that is supported by IBM CLI.
Support commands
     Commands that do not have an equivalent CLI function.
Commands can be issued interactively, or from within a file. Similarly, command output can be displayed on the terminal, or written to a file. A useful feature of the CLI command driver is the ability to capture all commands that are entered during a session, and to write them to a file, thus creating a command script that can be rerun at a later time.
Related concepts: v db2cli.ini initialization file in Call Level Interface Guide and Reference, Volume 1
Command parameters:
-c or /c
     Execute command following the -c option in a new DB2 command window, and then terminate. For example, db2cmd -c dir causes the dir command to be invoked in a new DB2 command window, and then the DB2 command window closes.
-w or /w
     Execute command following the -w option in a new DB2 command window, and wait for the new DB2 command window to be closed before terminating the process. For example, db2cmd /w dir invokes the dir command, and the process does not end until the new DB2 command window closes.
-i or /i
     Execute command following the -i option while sharing the same DB2 command window and inheriting file handles. For example, db2cmd -i dir executes the dir command in the same DB2 command window.
-t or /t
     Execute command following the -t option in a new DB2 CLP window with the specified command as the title of this new window.
Usage notes: If DB21061E (Command line environment not initialized.) is returned when bringing up the CLP-enabled DB2 window, the operating system may be running out of environment space. Check the config.sys file for the SHELL environment setup parameter, and increase its value accordingly. For example:
SHELL=C:\COMMAND.COM C:\ /P /E:32768
Command parameters:
Inspection actions
/DB  Inspects the entire database. This is the default option.
/T   Inspects a single table. Requires two input values: a table space ID, and the table object ID or the table name.
/TSF Inspects only table space files and containers.
/TSC Inspects a table space's constructs, but not its tables. Requires one input value: table space ID.
/TS  Inspects a single table space and its tables. Requires one input value: table space ID.
/ATSC
     Inspects constructs of all table spaces, but not their tables.
Data formatting actions
/DD  Dumps formatted table data. Requires five input values: either a table object ID or table name, table space ID, page number to start with, number of pages, and verbose choice.
/DI  Dumps formatted index data. Requires five input values: either a table object ID or table name, table space ID, page number to start with, number of pages, and verbose choice.
/DM  Dumps formatted block map data. Requires five input values: either a table object ID or table name, table space ID, page number to start with, number of pages, and verbose choice.
/DP  Dumps pages in hex format.
     v For permanent object in DMS tablespace, action /DP requires three input values consisting of tablespace ID, page number to start with, and number of pages.
     v For permanent object in SMS tablespace, action /DP requires five input values consisting of tablespace ID, object ID, page number to start with, number of pages, and object type.
/DXH Dumps formatted XML column data in HEX format. Requires five input values: either a table object ID or table name, table space ID, page number to start with, number of pages, and verbose choice.
/LHWM
     Suggests ways of lowering the high water mark. Requires two input values: table space ID and number of pages.
Repair actions
/ETS Extends the table limit in a 4 KB table space (DMS only), if possible. Requires one input value: table space ID.
/MI  Marks index as invalid. When specifying this parameter the database must be offline. Requires two input values: table space ID and index object ID.
/MT  Marks table with drop-pending state. When specifying this parameter the database must be offline. Requires three input values: table space ID, either table object ID or table name, and password.
/IP  Initializes the data page of a table as empty. When specifying this parameter the database must be offline. Requires five input values: table name or table object ID, table space ID, page number to start with, number of pages, and password.
Change state actions
/CHST
     Change the state of a database. When specifying this parameter the database must be offline. Requires one input value: database backup pending state.
Help
/H   Displays help information.
Input value options
/OI object-id
     Specifies the object ID.
/TN table-name
     Specifies the table name.
/PW password
     Password required to execute the db2dart action. Contact DB2 Service for a valid password.
/RPT path
     Optional path for the report output file.
/RPTN file-name
     Optional name for the report output file.
/PS number
     Specifies the page number to start with. The page number must be suffixed with p for pool relative. Specifying /PS 0 /NP 0 will cause all pages in the specified object to be dumped.
/NP number
     Specifies the number of pages. Specifying /PS 0 /NP 0 will cause all pages in the specified object to be dumped.
/V option
     Specifies whether or not the verbose option should be implemented. Valid values are:
     Y    Specifies that the verbose option should be implemented.
     N    Specifies that the verbose option should not be implemented.
/SCR option
     Specifies type of screen output, if any. Valid values are:
     Y    Normal screen output is produced.
     M    Minimized screen output is produced.
     N    No screen output is produced.
/RPTF option
     Specifies type of report file output, if any. Valid values are:
     Y    Normal report file output is produced.
     E    Only error information is produced to the report file.
     N    No report file output is produced.
/ERR option
     Specifies type of log to produce in DART.INF, if any. Valid values are:
     Y    Produces normal log in DART.INF file.
     N    Minimizes output to log DART.INF file.
     E    Minimizes DART.INF file and screen output. Only error information is sent to the report file.
/QCK option
     Quick option. Only applies to /DB, /T, and /TS actions. Only inspects page 0 of the DAT objects and partially inspects the index objects (does not inspect BMP, LOB, LF objects and does not traverse the entirety of the DAT or INX objects).
/TYP option
     Specifies the type of object. Valid values are:
     DAT  Object type is DAT.
     INX  Object type is INDEX.
     BKM  Object type is BMP.
Usage notes:
1. When invoking the db2dart command, you can specify only one action. An action can support a varying number of options.
2. If you do not specify all the required input values when you invoke the db2dart command, you will be prompted for the values. For the /DDEL and /IP actions, the options cannot be specified from the command line, and must be entered when prompted by db2dart.
3. The /ROW, /RPT, /RPTN, /SCR, /RPTF, /ERR, and /WHAT DBBP options can all be invoked in addition to the action. They are not required by any of the actions.
4. The /DB, /T and /TS options inspect the specified objects, including associated XML storage objects. The /DB option includes all XML storage objects in the database, the /T option includes XML storage objects associated with the specified table, and the /TS option inspects all XML storage objects whose parent objects exist in the specified table space. As well, the /DEMP option will dump formatted EMP information including that for associated XML storage objects.
5. When db2dart is run against a single table space, all dependent objects for a parent table in that table space are checked, irrespective of the table space in which the dependent objects reside. However, extent map page (EMP) information is not captured for dependent objects that reside outside of the specified table space. EMP information is captured for dependent objects found in the specified table space even when the parent object resides in a table space other than the one specified.
Related reference: v rah and db2_all command descriptions in Administration Guide: Implementation
Related reference:
v dasmigr - Migrate the DB2 administration server on page 6
v dasupdt - Update DAS on page 8
v dasauto - Autostart DB2 administration server on page 3
v dascrt - Create a DB2 administration server on page 4
v dasdrop - Remove a DB2 administration server on page 5
Command parameters:
-d database-name
     Specifies the name of the database to which a connection is to be established.
-t table-name
     Specifies the name of the table from which column information is to be retrieved to generate declarations.
option
     One or more of the following:
     -a action
          Specifies whether declarations are to be added or replaced. Valid values are ADD and REPLACE. The default value is ADD.
     -b lob-var-type
          Specifies the type of variable to be generated for a LOB column. Valid values are:
          LOB (default)
               For example, in C, SQL TYPE is CLOB(5K) x.
          LOCATOR
               For example, in C, SQL TYPE is CLOB_LOCATOR x.
          FILE For example, in C, SQL TYPE is CLOB_FILE x.
     -c   Specifies whether the column name is to be used as a suffix in the field name when a prefix (-n) is specified. If no prefix is specified, this option is ignored. The default behavior is to not use the column name as a suffix, but instead to use the column number, which starts at 1.
     -i   Specifies whether indicator variables are to be generated. Since host structures are supported in C and COBOL, an indicator table of size equal to the number of columns is generated, whereas for JAVA and FORTRAN, individual indicator variables are generated.
-p password
     Specifies the password to be used to connect to the database. It must be specified if a user ID is specified. The default behavior is to provide no password when establishing a connection.
-r remarks
     Specifies whether column remarks, if available, are to be used as comments in the declarations, to provide more detailed descriptions of the fields.
-s structure-name
     Specifies the structure name that is to be generated to group all the fields in the declarations. The default behavior is to use the unqualified table name.
-u userid
     Specifies the user ID to be used to connect to the database. It must be specified if a password is specified. The default behavior is to provide no user ID when establishing a connection.
-v   Specifies whether the status (for example, the connection status) of the utility is to be displayed. The default behavior is to display only error messages.
-w DBCS-var-type
     Specifies whether sqldbchar or wchar_t is to be used for a GRAPHIC/VARGRAPHIC/DBCLOB column in C.
-y DBCS-symbol
     Specifies whether G or N is to be used as the DBCS symbol in COBOL.
-z encoding
     Specifies the encoding the coding convention in accordance to the
Option syntax summary:
-g | -filter fieldPatternList
-gi | -gv | -giv | -gvi fieldPatternList
-pid processIDList
-tid threadIDList
-n | -node nodeList
-e | -error errorList
-l | -level levelList
-c | -count
-V | -verbose
-cbe
-v | -invert
-exist
-strict
-rc rcList | switch
-fmt formatString
-o | -output pathName
-f | -follow
-H | -history
-t | -time
-A | -archive dirName
-readfile
-ecfid
Command parameters:
filename
     Specifies one or more space-separated path names of DB2 diagnostic logs to be processed. If the file name is omitted, the db2diag.log file from the current directory is processed. If the file is not found, a directory set by the DIAGPATH variable is searched.
-h/-help/?
     Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed. If a list of options, optionList, containing one or more comma-separated command parameters is omitted, a list of all available options with short descriptions is displayed. For each option specified in the optionList, more detailed information and usage examples are displayed. Help output can be modified by using one of the following switches in place of the optionList argument to display more information about the tool and its usage:
     brief
          Displays help information for all options without examples.
     examples
          Displays a few typical examples to assist in using the tool.
     tutorial
          Displays examples that describe advanced features.
     notes
          Displays usage notes and restrictions.
     all  Displays complete information about all options, including usage examples for each option.
-fmt formatString
     Formats the db2diag output using a format string, formatString, containing record fields in the form %field, %{field}, @field, or @{field}. The %{field} and
%process
     Name associated with the process ID, in double quotation marks. For example, "db2sysc.exe".
%product
     Product name. For example, DB2 COMMON.
%component
     Component name.
%funcname
     Function name.
%startevent
     Start event description.
%stopevent
     Stop event description.
%changeevent
     Change event description.
To always display the text preceding a field name (for example, for the required fields), the %field prefix should be used. To display the text preceding a field name when this field contains some data, the @field prefix should be used. Any combination of required and optional fields with the corresponding text descriptions is allowed.
The following special characters are recognized within a format string: \n, \r, \f, \v, and \t.
In contrast to other fields, the data and argument fields can contain several sections. To output a specific section, add the [n] after the field name, where n is a section number (1 <= n <= 64). For example, to output the first data object and the second data description sections, use %{dataobj}[1] and %{datadesc}[2]. When [n] is not used, all sections logged are output using pre-formatted logged data exactly as appears in a log message, so there is no need to add the applicable text description and separating newline before each data field, argument field, or section.
-g fieldPatternList
     fieldPatternList is a comma-separated list of field-pattern pairs in the following format: fieldName operator searchPattern. The operator can be one of the following:
     =    Selects only those records that contain matches that form whole words. (Word search.)
     :=   Selects those records that contain matches in which a search pattern can be part of a larger expression.
     !=   Selects only non-matching lines. (Invert word match.)
     !:=  Selects only non-matching lines in which the search pattern can be part of a larger expression.
     ^=   Selects records for which the field value starts with the search pattern specified.
     !^=  Selects records for which the field value does not start with the search pattern specified.
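As a rough analogy only (this is grep, not db2diag), the difference between a whole-word match and a substring match can be seen with grep's -w flag; the sample log lines below are made up:

```shell
# Analogy sketch: '=' behaves like a whole-word match (grep -w),
# ':=' like a plain substring match.
printf 'level: Severe\nlevel: Error\n' > sample.log
word=$(grep -cw 'Sev' sample.log || true)  # whole word 'Sev': no match in 'Severe'
sub=$(grep -c 'Sev' sample.log)            # substring 'Sev': matches 'Severe'
echo "word=$word substring=$sub"
rm -f sample.log
```

With db2diag, level=Sev would similarly fail to match records whose level is Severe, while level:=Sev would match them.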
The same fields are available as described for the -fmt option, except that the % and @ prefixes are not used for this option.
-gi fieldPatternList
     Same as -g, but case-insensitive.
-o/-output pathName
     Saves the output to a file specified by a fully qualified pathName.
-f/-follow
     If the input file is a regular file, specifies that the tool will not terminate after the last record of the input file has been processed. Instead, it sleeps for a specified interval of time (sleepInterval), and then attempts to read and process further records from the input file as they become available. This option can be used when monitoring records being written to a file by another process. The startTime option can be specified to show all the records logged after this time. The startTime option is specified using the following format: YYYY-MM-DD-hh.mm.ss.nnnnnn, where
     YYYY Specifies a year.
     MM   Specifies a month of a year (01 through 12).
     DD   Specifies a day of a month (01 through 31).
     hh   Specifies an hour of a day (00 through 23).
     mm   Specifies a minute of an hour (00 through 59).
     ss   Specifies a second of a minute (00 through 59).
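A start time in this layout can be built with standard date(1) format specifiers. In this sketch the optional .nnnnnn microseconds field is omitted and the db2diag invocation is only echoed, not run; whether a given db2diag level accepts the shortened form is an assumption here:

```shell
# Build a timestamp in the YYYY-MM-DD-hh.mm.ss layout described above.
start=$(date +%Y-%m-%d-%H.%M.%S)
echo "db2diag -time $start"
```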
When this option is specified, all other options are ignored and output is directed to a display.
-ecfid
     Displays function information extracted from the numeric ecfId. When this option is specified, all other options are ignored.
Examples:
To display all severe error messages produced by the process with the process ID (PID) 952356 and on node 1, 2 or 3, enter:
db2diag -g level=Severe,pid=952356 -n 1,2,3
To display all messages containing database SAMPLE and instance aabrashk, enter:
db2diag -g db=SAMPLE,instance=aabrashk
To display all severe error messages containing the database field, enter:
db2diag -g db:= -gi level=severe
To display all error messages containing the DB2 ZRC return code 0x87040055, and the application ID G916625D.NA8C.068149162729, enter:
db2diag -g msg:=0x87040055 -l Error | db2diag -gi appid^=G916625D.NA
To display only logged records not containing the LOCAL pattern in the application ID field, enter:
db2diag -gi appid!:=local
or
db2diag -g appid!:=LOCAL
All records that don't match will be displayed. To output only messages that have the application ID field, enter:
db2diag -gvi appid:=local -exist
To display all messages logged after the one with timestamp 2003-03-03-12.16.26.230520 inclusively, enter:
db2diag -time 2003-03-03-12.16.26.230520
To display severe errors logged for the last three days, enter:
db2diag -gi "level=severe" -H 3d
To display all log messages not matching the pdLog pattern for the funcname field, enter:
db2diag -g funcname!=pdLog
or
db2diag -gv funcn=pdLog
To display all severe error messages containing component name starting from the base sys, enter:
db2diag -l severe | db2diag -g "comp^=base sys"
This will force db2diag to process db2diag.log from a directory specified by the DIAGPATH environment variable. To read the db2diag.log1 file from a specified directory ignoring any terminal input, enter:
system(db2diag -readfile /u/usr/sqllib/db2dump/db2diag.log1);
This will display function name, component and product name.
Usage notes:
1. Each option can appear only once. They can be specified in any order and can have optional parameters. Short options can not be included together. For example, use -l -e and not -le.
2. By default, db2diag looks for the db2diag.log file in the current directory. If the file is not found, the directory set by the DIAGPATH registry variable is searched next. If the db2diag.log file is not found, db2diag returns an error and exits.
3. Filtering and formatting options can be combined on a single command line to perform complex searches using pipes. The formatting options -fmt, -strict, -cbe, and -verbose should be used only after all filtering is done to ensure that only original logged messages with standard fields will be filtered, not those fields either defined or omitted by the user. It is not necessary to use - when using pipes.
4. When pipes are used and one or more file names are specified on the command line, the db2diag input is processed differently depending on whether the - has been specified or not. If the - is omitted, input is taken from the specified files. In contrast, when the - option is specified, file names (even if present on the command line) are ignored and input from a terminal is used. When a pipe is used and a file name is not specified, the db2diag input is processed exactly the same way with or without the - specified on the command line.
5. The -exist option overwrites the default db2diag behavior for invert match searches when all records that do not match a pattern are output independent of whether they contain the proper fields or not. When the -exist option is specified, only the records containing fields requested are processed and output.
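The lookup order in usage note 2 can be sketched in plain shell; this is illustrative logic only, not db2diag's actual implementation:

```shell
# Emulate the search order: current directory first, then $DIAGPATH.
tmp=$(mktemp -d) && cd "$tmp"
lookup() {
  if [ -f db2diag.log ]; then echo "./db2diag.log"
  elif [ -n "${DIAGPATH:-}" ] && [ -f "$DIAGPATH/db2diag.log" ]; then
    echo "$DIAGPATH/db2diag.log"
  else echo "not found"; fi
}
first=$(lookup)    # empty directory, DIAGPATH unset: nothing to process
touch db2diag.log
second=$(lookup)   # the current directory now wins
echo "$first / $second"
```

In the "not found" case, db2diag itself returns an error and exits, as note 2 states.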
Command parameters:
on   Turns on AS trace events (all if none specified).
off  Turns off AS trace events.
-r   Traces DRDA requests received from the DRDA AR.
-s   Traces DRDA replies sent to the DRDA AR.
-c   Traces the SQLCA received from the DRDA server on the host system. This is a formatted, easy-to-read version of not null SQLCAs.
-i   Includes time stamps in the trace information.
-l   Specifies the size of the buffer used to store the trace information.
-p   Traces events only for this process. If -p is not specified, all agents with incoming DRDA connections on the server are traced. The pid to be traced can be found in the agent field returned by the LIST APPLICATIONS command.
-t   Specifies the destination for the trace. If a file name is specified without a complete path, missing information is taken from the current path. If tracefile is not specified, messages are directed to db2drdat.dmp in the current directory.
-f   Formats communications buffers.
Usage notes: Do not issue db2trc commands while db2drdat is active.
db2drdat writes the following information to tracefile:
1. -r
   v Type of DRDA request
   v Receive buffer
Command parameters:
add  Assigns a new database drive map.
drop Removes an existing database drive map.
query
     Queries a database map.
reconcile
     Reapplies the database drive mapping to the registry when the registry contents are damaged or dropped accidentally.
dbpartition_number
     The database partition number. This parameter is required for add and drop operations. If this parameter is not specified for a reconcile operation, db2drvmp reconciles the mapping for all database partitions.
from_drive
     The drive letter from which to map. This parameter is required for add and drop operations. If this parameter is not specified for a reconcile operation, db2drvmp reconciles the mapping for all drives.
to_drive
     The drive letter to which to map. This parameter is required for add operations. It is not applicable to other operations.
Examples:
To set up database drive mapping from F: to E: for NODE0, issue the following command:
db2drvmp add 0 F E
To set up database drive mapping from E: to F: for NODE1, issue the following command:
db2drvmp add 1 E F
Usage notes:
Command parameters:
database-alias
     Specifies the alias of the database for which multipage file allocation is to be enabled.
Usage notes:
This utility:
v Connects to the database partition (where applicable) in exclusive mode
v In all SMS table spaces, allocates empty pages to fill up the last extent in all data and index files which are larger than one extent
v Changes the value of the database configuration parameter multipage_alloc to YES
v Disconnects.
Since db2empfa connects to the database partition in exclusive mode, it cannot be run concurrently on the catalog database partition, or on any other database partition.
Multipage file allocation can be enabled using db2empfa for databases that are created after the registry variable DB2_NO_MPFA_FOR_NEW_DB has been set.
Related concepts: v SMS table spaces in Administration Guide: Planning
Command parameters: The db2eva parameters are optional. If you do not specify parameters, the Open Event Analyzer dialog box appears to prompt you for the database and event monitor name.
-db database-alias
     Specifies the name of the database defined for the event monitor.
-evm evmon-name
     Specifies the name of the event monitor whose traces are to be analyzed.
Usage notes: Without the required access, the user cannot retrieve any event monitor data.
There are two methods for retrieving event monitor traces:
1. The user can enter db2eva from the command line, and the Open Event Analyzer dialog box opens to let the user choose the database and event monitor names from the drop-down lists before clicking OK to open the Event Analyzer dialog box.
2. The user can specify the -db and -evm parameters from the command line, and the Event Analyzer dialog opens on the specified database.
The Event Analyzer connects to the database, and issues a select target from SYSIBM.SYSEVENTTABLES to get the event monitor tables. The connection is then released after the required data has been retrieved.
The event analyzer can be used to analyze the data produced by an active event monitor. However, event monitor data captured after the event analyzer has been invoked might not be shown. Turn off the event monitor before invoking the Event Analyzer to ensure the data are properly displayed.
Command parameters:
-db database-alias
     Specifies the database whose data is to be displayed. This parameter is case sensitive.
-evm event-monitor-name
     The one-part name of the event monitor. An ordinary or delimited SQL identifier. This parameter is case sensitive.
-path event-monitor-target
     Specifies the directory containing the event monitor trace files.
Usage notes:
If the instance is not already started when db2evmon is issued with the -db and -evm options, the command will start the instance.
If the instance is not already started when db2evmon is issued with the -path option, the command will not start the instance.
If the data is being written to files, the tool formats the files for display using standard output. In this case, the monitor is turned on first, and any event data in the files is displayed by the tool. To view any data written to files after the tool has been run, reissue db2evmon.
If the data is being written to a pipe, the tool formats the output for display using standard output as events occur. In this case, the tool is started before the monitor is turned on.
Related concepts:
Command parameters:
-schema
     Schema name. If not specified, the table names are unqualified.
-partitioned
     If specified, elements that are only applicable for a partitioned database environment are also generated.
-evm The name of the event monitor.
event type
     Any of the event types available on the CREATE EVENT MONITOR statement, for example, DATABASE, TABLES, TRANSACTIONS.
Examples:
db2evtbl -schema smith -evm foo database, tables, tablespaces, bufferpools
Usage notes: Output is written to standard output.
Defining WRITE TO TABLE event monitors is more straightforward when using the db2evtbl tool. For example, the following steps can be followed to define and activate an event monitor.
1. Use db2evtbl to generate the CREATE EVENT MONITOR statement.
2. Edit the SQL statement, removing any unwanted columns.
3. Use the CLP to process the SQL statement. (When the CREATE EVENT MONITOR statement is executing, target tables are created.)
4. Issue SET EVENT MONITOR STATE to activate the new event monitor.
Option syntax summary:
-l
-n name
-s schema
-o outfile
-t
-u userID password
-w timestamp
-# sectnbr
-h
Command parameters:
-d dbname
     Name of the database containing packages.
-e schema
     Explain table SQL schema.
-f   Formatting flags. In this release, the only supported value is O (operator summary).
-g   Graph plan. If only -g is specified, a graph, followed by formatted information for all of the tables, is generated. Otherwise, any combination of the following valid values can be specified:
     O    Generate a graph only. Do not format the table contents.
     T    Include total cost under each operator in the graph.
     I    Include I/O cost under each operator in the graph.
     C    Include the expected output cardinality (number of tuples) of each operator in the graph.
-l   Respect case when processing package names.
-n name
     Name of the source of the explain request (SOURCE_NAME).
-s schema
     SQL schema or qualifier of the source of the explain request (SOURCE_SCHEMA).
-o outfile
     Output file name.
-t   Direct the output to the terminal.
Usage notes: You will be prompted for any parameter values that are not supplied, or that are incompletely specified, except in the case of the -h and the -l options. If an explain table SQL schema is not provided, the value of the environment variable USER is used as the default. If this variable is not found, the user is prompted for an explain table SQL schema. Source name, source SQL schema, and explain time stamp can be supplied in LIKE predicate form, which allows the percent sign (%) and the underscore (_) to be used as pattern matching characters to select multiple sources with one invocation. For the latest explained statement, the explain time can be specified as -1. If -o is specified without a file name, and -t is not specified, the user is prompted for a file name (the default name is db2exfmt.out). If neither -o nor -t is specified, the user is prompted for a file name (the default option is terminal output). If -o and -t are both specified, the output is directed to the terminal. Related concepts: v Explain tools in Performance Guide v Guidelines for capturing explain information in Performance Guide v Guidelines for using explain information in Performance Guide
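The LIKE-style wildcards mentioned above (% for any character sequence, _ for a single character) can be emulated outside DB2 by translating them to a regex. This sketch uses sed and grep with a made-up package name pattern; it is not part of db2exfmt:

```shell
# Translate a LIKE pattern to an anchored regex: % -> .*  and  _ -> .
like='MYPKG%'
regex="^$(printf '%s' "$like" | sed -e 's/%/.*/g' -e 's/_/./g')\$"
matches=$(printf 'MYPKG01\nOTHERPKG\n' | grep -Ec "$regex")
echo "$matches"   # only one of the two sample names matches MYPKG%
```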
Command parameters: -d dbname Specifies the database name. -e explain_schema Specifies the schema name of the explain tables to be migrated. -u userID password Specifies the current user's ID and password. Related concepts: v The explain tables and organization of explain information in Performance Guide Related tasks: v Migrating explain tables in Migration Guide
db2expln connection-options output-options package-options dynamic-options explain-options -help
connection-options:
-database database-name -user user-id password
output-options:
-output output-file
-terminal
package-options:
-schema schema-name -package package-name
-version version-identifier
-escape escape-character
-noupper
-section section-number
dynamic-options:
-statement query-statement
-stmtfile query-statement-file
-terminator termination-character
-noenv
explain-options:
-graph
-opids
-setup setup-file
These statements make it possible to alter the plan chosen for subsequent dynamic query statements processed by db2expln. If you specify -noenv, these statements are explained but not executed. It is necessary to specify either -statement or -stmtfile to explain dynamic query statements. Both options can be specified in a single invocation of db2expln. explain-options: These options determine what additional information is provided in the explained plans. -graph Show optimizer plan graphs. Each section is examined, and the original optimizer plan graph is constructed as presented by Visual Explain. The generated graph may not match the Visual Explain graph exactly. It is possible for the optimizer graph to show some gaps, based on the information contained within the section plan. For backward compatibility, you can specify -g instead of -graph. -opids Display operator ID numbers in the explained plan. The operator ID numbers allow the output from db2expln to be matched to the output from the explain facility. Note that not all operators have an ID number, and that some ID numbers that appear in the explain facility output do not appear in the db2expln output. For backward compatibility, you can specify -i instead of -opids. -help Shows the help text for db2expln. If this option is specified, no packages are explained. Most of the command line is processed in the db2exsrv stored procedure. To get help on all the available options, it is necessary to provide connection-options along with -help. For example, use:
db2expln -help -database SAMPLE
As another example, suppose a user has a CLP script file called statements.db2 and wants to explain the statements in the file. The file contains the following statements:
SET PATH=SYSIBM, SYSFUN, DEPT01, DEPT93@ SELECT EMPNO, TITLE(JOBID) FROM EMPLOYEE@
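Because the statements in this file are terminated with the @ character, they could be explained with a command along these lines (the database name SAMPLE is a placeholder):

```
db2expln -database SAMPLE -stmtfile statements.db2 -terminator @ -terminal
```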
Related concepts: v Description of db2expln and dynexpln output in Performance Guide v Examples of db2expln and dynexpln output in Performance Guide v SQL and XQuery Explain tools in Performance Guide
Command parameters: /u usergroup Specifies the name of the user group to be added. If this option is not specified, the default DB2 user group (DB2USERS) is used. /a admingroup Specifies the name of the administration group to be added. If this option is not specified, the default DB2 administration group (DB2ADMNS) is used. /r Specifies that the changes made by previously running db2extsec should be reversed. If you specify this option, all other options are ignored. This option will only work if no other DB2 commands have been issued since the db2extsec command was issued.
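As an illustration, extended security could be enabled with custom group names, and the change later reversed, as follows (the group names myusergrp and myadmgrp are placeholders):

```
db2extsec /u myusergrp /a myadmgrp
db2extsec /r
```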
Related concepts: v Extended Windows security using DB2ADMNS and DB2USERS groups in Administration Guide: Implementation Related tasks: v Adding your user ID to the DB2ADMNS and DB2USERS user groups (Windows) in Quick Beginnings for DB2 Servers
Command parameters:
-q   Specifies that only the log file name be printed. No error or warning messages will be printed, and status can only be determined through the return code. Valid error codes are:
     v -100 Invalid input
     v -101 Cannot open LFH file
     v -102 Failed to read LFH file
     v -103 Invalid LFH
     v -104 Database is not recoverable
     v -105 LSN too big
     v -106 Invalid database
     v -500 Logical error
     Other valid return codes are:
     v 0 Successful execution
     v 99 Warning: the result is based on the last known log file size.
-db dbname
     Specifies the name of the database to investigate.
-file LFH-name
     Specifies the full path of the LFH file, including the file name.
input_LSN
     A 12 or 16 character string that represents the internal (6 or 8 byte) hexadecimal value with leading zeros.
Examples:
db2flsn 000000BF0030
  Given LSN is contained in log page 2 in log file S0000002.LOG

db2flsn -q 000000BF0030
  S0000002.LOG

db2flsn 000000BE0030
  Given LSN is contained in log page 2 in log file S0000001.LOG

db2flsn -q 000000BE0030
  S0000001.LOG

db2flsn -db flsntest 0000000000FA0000
Usage notes: v If neither -db nor -file are specified, the tool assumes the LFH file is SQLOGCTL.LFH in the current directory. v The tool uses the logfilsiz database configuration parameter. DB2 records the three most recent values for this parameter, and the first log file that is created with each logfilsiz value; this enables the tool to work correctly when logfilsiz changes. If the specified LSN predates the earliest recorded value of logfilsiz, the tool uses this value, and returns a warning. The tool can be used with database managers prior to UDB Version 5.2; in this case, the warning is returned even with a correct result (obtained if the value of logfilsiz remains unchanged). v This tool can only be used with recoverable databases. A database is recoverable if it is configured with the logarchmeth1 or logarchmeth2 configuration parameters set to a value other than OFF. Related reference: v DB2 log records in Administrative API Reference v SQLU_LSN data structure in Administrative API Reference
Command parameters:
-m module-path
    Defines the full path of the fault monitor shared library for the product being monitored. The default is $INSTANCEHOME/sqllib/lib/libdb2gcf.
-t service
    Gives the unique text descriptor for a service.
-i instance
    Defines the instance of the service.
-u   Brings the service up.
-U   Brings the fault monitor daemon up.
-d   Brings the instance down.
-D   Brings the fault monitor daemon down.
-k   Kills the service.
-K   Kills the fault monitor daemon.
-s   Returns the status of the service.
Related concepts: v Fault monitor facility for Linux and UNIX in Data Recovery and High Availability Guide and Reference
-b browser
    Specifies the browser to be used. If it is not specified, db2fs searches for a browser in the directories specified in PATH.
For Windows operating systems
    None
Related concepts: v First Steps interface in Quick Beginnings for DB2 Servers Related tasks: v Verifying the installation of DB2 servers using First Steps (Linux and Windows) in Quick Beginnings for DB2 Servers
-i instance_name -p
, partition_number
-t timeout
-L
-h -?
Command parameters:
-u   Starts specified database partition for specified instance on current database partition server (node).
-d   Stops specified database partition for specified instance.
-k   Removes all processes associated with the specified instance.
-s   Returns status of the specified database partition and the specified instance. The possible states are:
     v Available: The specified database partition for the specified instance is available for processing.
     v Operable: The instance is installed but not currently available.
     v Not operable: The instance will be unable to be brought to available state.
-o   Returns the default timeouts for each of the possible actions; you can override all these defaults by specifying a value for the -t parameter.
-i instance_name Instance name to perform action against. If no instance name is specified, the value of DB2INSTANCE is used. If no instance name is specified and DB2INSTANCE is not set, the following error is returned:
db2gcf Error: Neither DB2INSTANCE is set nor instance passed.
-h/-?
2. The following example returns the status of the instance stevera on partition 0:
db2gcf -s -p 0 -i stevera
Usage notes: When used together, the -k and -p parameters do not allow all processes to be removed from the specified partition. Rather, all processes on the instance (all partitions) will be removed. Related concepts: v High availability in Data Recovery and High Availability Guide and Reference
Command parameters: START database Starts the governor daemon to monitor the specified database. Either the database name or the database alias can be specified. The name specified must be the same as the one specified in the governor configuration file. One daemon runs for each database that is being monitored. In a partitioned database environment, one daemon runs for each database partition. If the governor is running for more than one database, there will be more than one daemon running at that database server. DBPARTITIONNUM db-partition-number Specifies the database partition on which to start or stop the governor daemon. The number specified must be the same as the one specified in the database partition configuration file. config-file Specifies the configuration file to use when monitoring the database. The default location for the configuration file is the sqllib directory. If the specified file is not there, the front-end assumes that the specified name is the full name of the file. log-file Specifies the base name of the file to which the governor writes log records. The log file is stored in the log subdirectory of the sqllib directory. The number of database partitions on which the governor is running is automatically appended to the log file name. For example, mylog.0, mylog.1, mylog.2. STOP database Stops the governor daemon that is monitoring the specified database. In a
rectype record-type
Command parameters:
log-file
    The base name of one or more log files that are to be queried.
dbpartitionnum db-partition-number
    Number of the database partition on which the governor is running.
rectype record-type
    The type of record that is to be queried. Valid record types are:
    v START
    v FORCE
    v NICE
    v ERROR
    v WARNING
    v READCFG
    v STOP
    v ACCOUNT
Compatibilities: For compatibility with versions earlier than Version 8: v The keyword nodenum can be substituted for dbpartitionnum. Related reference: v db2gov - DB2 governor on page 113
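For example, to list all FORCE records that the governor has logged on database partition 0, a query along these lines could be issued (the log file base name mylog is a placeholder):

```
db2govlg mylog dbpartitionnum 0 rectype FORCE
```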
-g database-partition-group-name
-t table-name
-h
Command parameters:
-d database-name
    Specifies the name of the database for which to generate a distribution map. If no database name is specified, the value of the DB2DBDFT environment variable is used. If DB2DBDFT is not set, the default is the SAMPLE database.
-m map-file-name
    Specifies the fully qualified file name where the distribution map will be saved. The default is db2split.map.
-g database-partition-group-name
    Specifies the name of the database partition group for which to generate a distribution map. The default is IBMDEFAULTGROUP.
-t table-name
    Specifies the table name.
-h   Displays usage information.
Examples: The following example extracts the distribution map for a table ZURBIE.SALES in database SAMPLE into a file called C:\pmaps\zurbie_sales.map:
db2gpmap -d SAMPLE -m C:\pmaps\zurbie_sales.map -t ZURBIE.SALES
Command parameters: -t Turns on NavTrace for initialization code. You should use this option only when instructed to do so by DB2 Support.
-tcomms Limits tracing to communication events. You should use this option only when instructed to do so by DB2 Support. -tfilter filter Limits tracing to entries containing the specified filter or filters. You should use this option only when instructed to do so by DB2 Support. Related concepts: v Graphical tools for the health monitor in System Monitor Guide and Reference Related tasks: v Configuring health indicators using the Health Center in System Monitor Guide and Reference
Command parameters: -on -off Enables auto-start for the specified instance. Disables auto-start for the specified instance.
instance-name The login name of the instance. Related tasks: v Auto-starting instances in Administration Guide: Implementation
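For example, auto-start could be enabled for a hypothetical instance named db2inst1 with:

```
db2iauto -on db2inst1
```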
DROP Removes an MSCS node from a DB2 MSCS instance. MIGRATE Migrates a non-MSCS instance to an MSCS instance. UNMIGRATE Undoes the MSCS migration. /DAS:DAS name Specifies the DAS name. This option is required when performing the cluster operation against the DB2 administration server. /c:cluster name Specifies the MSCS cluster name if different from the default/current cluster. /p:instance profile path Specifies the instance profile path. This path must reside on a cluster disk so it is accessible when DB2 is active on any machine in the MSCS cluster. This option is required when migrating a non-MSCS instance to an MSCS instance. /u:username,password Specifies the account name and password for the DB2 service. This option is required when adding another MSCS node to the DB2 MSCS partitioned database instance.
3. From the Windows Services dialog box, ensure that the instance is configured to start manually. 4. If the DB2 instance is running, stop it with the DB2STOP command. 5. Install the DB2 resource type from WA26:
c:\>db2wolfi i
ok
If the db2wolfi command returns Error : 183, then it is already installed. To confirm, the resource type can be dropped and added again. Also, the resource type will not show up in Cluster Administrator if it does not exist.
c:\>db2wolfi u
ok
c:\>db2wolfi i
ok
6. From WA26, use the db2iclus command to transform the DB2 instance into a clustered instance.
c:\>db2iclus migrate /i:db2 /c:mycluster /m:wa26 /p:p:\db2profs
DBI1912I The DB2 Cluster command was successful.
Explanation: The user request was successfully processed.
User Response: No action required.
The directory p:\db2profs should be on a clustered drive and must already exist. This drive should also be currently owned by machine WA26. 7. From WA26, use the db2iclus command to add other machines to the DB2 cluster list:
c:\>db2iclus add /i:db2 /c:mycluster /m:wa27
DBI1912I The DB2 Cluster command was successful.
Explanation: The user request was successfully processed.
User Response: No action required.
This command should be executed for each subsequent machine in the cluster. 8. From Cluster Administrator, create a new group called DB2 Group. 9. From Cluster Administrator, move the Physical Disk resources Disk O and Disk P into DB2 Group. 10. From Cluster Administrator, create a new resource of type IP Address called mscs5 that resides on the Public Network. This resource should also belong to DB2 Group. This will be a highly available IP address, and this
Usage notes: To migrate an instance to run in an MSCS failover environment, you must first migrate the instance on the current machine, then add other MSCS nodes to the instance using the db2iclus command with the ADD option. To revert an MSCS instance back to a regular instance, first remove all other MSCS nodes from the instance by using the db2iclus command with the DROP option; then undo the migration for the instance on the current machine. Related reference: v db2icrt - Create instance on page 122 v db2idrop - Remove instance on page 125 v db2imigr - Migrate instance on page 128 v db2stop - Stop DB2 on page 272
-p InstProfPath
-h HostName
-r PortRange
-?
Command parameters: For Linux and UNIX-based systems -h or -? Displays the usage information. -d Turns debug mode on. Use this option only when instructed by DB2 Support.
-a AuthType Specifies the authentication type (SERVER, CLIENT or SERVER_ENCRYPT) for the instance. The default is SERVER. -p PortName Specifies the port name or number used by the instance. This option does not apply to client instances.
-u FencedID
    Specifies the name of the user ID under which fenced user-defined functions and fenced stored procedures will run. The -u option is required if you are not creating a client instance.
InstName
    Specifies the name of the instance, which is also the name of an existing user in the operating system.
For Windows operating systems
InstName
    Specifies the name of the instance.
-s InstType
    Specifies the type of instance to create. Valid values are:
    client
        Used to create an instance for a client. Use this value if you are using DB2 Connect Personal Edition.
    standalone
        Used to create an instance for a database server with local clients.
    ese
        Used to create an instance for a database server with local and remote clients.
    wse
        Used to create an instance for DB2 Workgroup Server Edition, DB2 Express Edition and DB2 Connect Enterprise Edition.
-u Username, Password Specifies the account name and password for the DB2 service. This option is required when creating a partitioned database instance. -p InstProfPath Specifies the instance profile path. -h HostName Overrides the default TCP/IP host name if there is more than one for the current machine. The TCP/IP host name is used when creating the default database partition (database partition 0). This option is only valid for partitioned database instances. -r PortRange Specifies a range of TCP/IP ports to be used by the partitioned database instance when running in MPP mode. For example, -r 50000,50007. The services file of the local machine will be updated with the following entries if this option is specified:
DB2_InstName          baseport/tcp
DB2_InstName_END      endport/tcp
Examples: v On an AIX machine, to create an instance for the user ID db2inst1, issue the following command: On a client machine:
DB2DIR/instance/db2icrt db2inst1
On a server machine:
DB2DIR/instance/db2icrt -u db2fenc1 db2inst1
where db2fenc1 is the user ID under which fenced user-defined functions and fenced stored procedures will run. Usage notes: v The -s option is intended for situations in which you want to create an instance that does not use the full functionality of the system. For example, if you are using Enterprise Server Edition (ESE) but do not want partition capabilities, you could create a Workgroup Server Edition (WSE) instance using the option -s WSE. v To create a DB2 instance that supports Microsoft Cluster Server, first create an instance, then use the db2iclus command to migrate it to an MSCS instance. v Only one instance can be created under a user name. If you want to create an instance under a user name that already has a related instance, you must drop the existing instance before creating the new one. Related concepts: v User, user ID and group naming rules in Administration Guide: Implementation Related reference: v db2iclus - Microsoft cluster server on page 119
Command parameters:
For Linux and UNIX-based systems
InstName
    Specifies the name of the instance.
-d   Enters debug mode, for use by DB2 Service.
-f   Specifies the force applications flag. If this flag is specified, all the applications using the instance will be forced to terminate.
-h or -?
    Displays the usage information.
For Windows operating systems
InstName
    Specifies the name of the instance.
-f   Specifies the force applications flag. If this flag is specified, all the applications using the instance will be forced to terminate.
-h   Displays usage information.
Examples: v If you created db2inst1 on a Linux and UNIX-based system by issuing the following command:
Chapter 1. System Commands
Usage notes: v In a partitioned database environment, if more than one database partition belongs to the instance that is being dropped, the db2idrop command has to be run on each database partition so that the DB2 registry on each database partition is updated. v Before an instance is dropped, ensure that the DB2 database manager has been stopped and that DB2 database applications accessing the instance are disconnected and terminated. DB2 databases associated with the instance can be backed up, and configuration data saved, for future reference if needed. v The db2idrop command does not remove any databases. Remove the databases first if they are no longer required. If the databases are not removed, they can always be catalogued under another DB2 copy of the same release and continue to be used. Related reference: v db2icrt - Create instance on page 122
Related tasks: v Removing instances in Administration Guide: Implementation v Setting up the DB2 administration server (DAS) to use the Configuration Assistant and the Control Center in Administration Guide: Implementation v Updating instance configuration on UNIX in Administration Guide: Implementation Related reference: v db2iupdt - Update instances on page 135
/q
/a: authType
/?
Command parameters: For Linux and UNIX-based systems -d Turns debug mode on. Use this option only when instructed by DB2 Support.
-a AuthType Specifies the authentication type (SERVER, CLIENT or SERVER_ENCRYPT) for the instance. The default is SERVER. -u FencedID Specifies the name of the user ID under which fenced user-defined functions and fenced stored procedures will run. This option is optional if a DB2 client only is installed.
/a:authType Specifies the authentication type (SERVER, CLIENT, or SERVER_ENCRYPT) for the instance. /? Displays usage information for the db2imigr command.
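By analogy with the db2icrt examples earlier in this chapter, a server instance might be migrated with a command along these lines (db2inst1 and db2fenc1 are placeholder user IDs, and DB2DIR is the installation path):

```
DB2DIR/instance/db2imigr -u db2fenc1 db2inst1
```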
Usage notes: For Linux and UNIX-based systems v The db2imigr command removes any symbolic links that exist in /usr/lib and /usr/include in the version you are migrating from. If you have applications that load libdb2 directly from /usr/lib rather than using the operating system's library environment variable to find it, your applications might fail to execute properly after you have run db2imigr. v If you use the db2imigr command to migrate a DB2 instance from a previous version to the current version of a DB2 database system, the DB2 Global Profile Variables defined in the old DB2 database installation path will not be migrated to the new installation location. The DB2 Instance Profile Variables specific to the instance being migrated will be carried over after the instance is migrated. Related concepts: v Migration overview for DB2 servers in Migration Guide Related tasks: v Migrating instances in Migration Guide Related reference: v dasmigr - Migrate the DB2 administration server on page 6 v db2ckmig - Database pre-migration tool on page 61
RELOCATE USING
configFile
Command parameters: database_alias Specifies the alias of the database to be initialized. SNAPSHOT Specifies that the mirrored database will be initialized as a clone of the primary database. STANDBY Specifies that the database will be placed in roll forward pending state. New logs from the primary database can be fetched and applied to the standby database. The standby database can then be used in place of the primary database if it goes down. MIRROR Specifies that the mirrored database is to be used as a backup image which can be used to restore the primary database. RELOCATE USING configFile Specifies that the database files are to be relocated based on the information listed in the specified configFile prior to initializing the database as a snapshot, standby, or mirror. The format of configFile is described in the db2relocatedb - Relocate database command. Usage notes: Do not issue the db2 connect to <database> command before issuing the db2inidb <database> as mirror command. Attempting to connect to a split mirror database before initializing it erases the log files needed during roll forward recovery. The connect sets your database back to the state it was in when you suspended the database. If the database is marked as consistent when it was suspended, the DB2
Command parameters:
data-file
    The unformatted inspection results file to format.
out-file
    The output file for the formatted output.
-tsi n
    Table space ID. Format out only for tables in this table space.
-ti n
    Table ID. Format out only for the table with this ID; the table space ID must also be provided.
-e   Format out errors only.
-s   Summary only.
-w   Warnings only.
Related reference: v INSPECT on page 512 v db2Inspect API - Inspect database for architectural integrity in Administrative API Reference
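For example, to format only the error entries for a given table space from an inspection results file, an invocation along these lines could be used (the file names and table space ID are placeholders):

```
db2inspf tbschk.out tbschk.fmt -tsi 3 -e
```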
-r response_file
-? -h
Command parameters: -i language-code Two-letter code for the preferred language in which to run the install. If unspecified, this parameter defaults to the locale of the current user. -l logfile Full path and name of the log file. If no name is specified, the path and file name default to /tmp/db2isetup.log. -t tracefile Full path and name of the trace file. -r response_file Full path and file name of the response file to use. -?, -h Output usage information.
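A typical response-file-driven invocation might look like this (the file paths are placeholders):

```
db2isetup -l /tmp/db2isetup.log -r /tmp/instance.rsp
```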
Usage notes: 1. This instance setup wizard provides a subset of the functionality provided by the DB2 Setup wizard. The DB2 Setup wizard (which runs from the installation media) allows you to install DB2 components, do system setup tasks such as DAS creation/configuration, and set up instances. The DB2 Instance Setup wizard only provides the functionality pertaining to instance setup. 2. The executable file for this command is located in the DB2DIR/instance directory, along with other instance scripts such as db2icrt and db2iupdt. DB2DIR represents the installation location where the current version of the DB2 database system is installed. Like these other instance scripts, it requires root authority. It is available in a typical install, but not in a compact install. 3. db2isetup runs on all supported Linux and UNIX-based systems. Related concepts: v DB2 installation methods in Quick Beginnings for DB2 Servers Related reference:
-u FencedID
InstName -e
/r: baseport,endport
/h: hostname
/s
/q
/a: authType
/?
Command parameters:
For UNIX operating systems
-h or -?
    Displays the usage information.
-d   Turns debug mode on.
-k   Keeps the current instance type during the update.
-D   Moves an instance from a higher code level on one path to a lower code level installed on another path.
-s   Ignores the existing SPM log directory.
-a AuthType Specifies the authentication type (SERVER, SERVER_ENCRYPT or CLIENT) for the instance. The default is SERVER. -u Fenced ID Specifies the name of the user ID under which fenced user defined functions and fenced stored procedures will run. This option is only needed when converting an instance from a client instance to a server instance type. If an instance is already a server instance, or if an instance is a client instance and is staying as a client instance (by using the -k option), the -u option is not needed. The -u option cannot change the fenced user for an existing instance. InstName Specifies the name of the instance. -e Updates every instance.
For Windows operating systems InstName Specifies the name of the instance. /u:username,password Specifies the account name and password for the DB2 service. /p:instance-profile-path Specifies the new instance profile path for the updated instance. /r:baseport,endport Specifies the range of TCP/IP ports to be used by the partitioned database instance when running in MPP mode. When this option is specified, the services file on the local machine will be updated with the following entries:
DB2_InstName          baseport/tcp
DB2_InstName_END      endport/tcp
/h:hostname Overrides the default TCP/IP host name if there are more than one TCP/IP host names for the current machine. /s Updates the instance to a partitioned instance.
/a:authType Specifies the authentication type (SERVER, CLIENT, or SERVER_ENCRYPT) for the instance. /? Displays usage information for the db2iupdt command.
Examples (UNIX): 1. If you have an instance db2inst1 related to the installation path DB2DIR and you applied a Fix Pack on top of the installation path, you may need to update the instance by running the following command:
<DB2DIR>/instance/db2iupdt db2inst1
This will bring up the instance to the highest type of instance. To keep the original instance type, you might need to use the -k option. 2. An instance, db2inst2, is related to the installation path DB2DIR1. You have another installation of the DB2 database product on the same system at DB2DIR2 for the same version of the DB2 database product as that installed on DB2DIR1. To update the instance to use the installed DB2 database product from DB2DIR1 to DB2DIR2, issue the following command:
<DB2DIR2>/instance/db2iupdt db2inst2
If the DB2 database product installed at DB2DIR2 is at level lower than that at DB2DIR1, issue:
<DB2DIR2>/instance/db2iupdt -D db2inst2
Usage notes: For UNIX operating systems v If you use the db2iupdt command to update a DB2 instance from another installation location to the current installation location, the DB2 Global Profile Variables defined in the old DB2 database installation path will not be updated to the new installation location. The DB2 Instance Profile Variables specific to the instance being updated will be carried over after the instance is updated. Related tasks: v Updating instance configuration on Windows in Administration Guide: Implementation v Updating instance configuration on UNIX in Administration Guide: Implementation Related reference: v db2ilist - List instances on page 127
-tracelevel
    TRACE_ALL
    TRACE_CONNECTION_CALLS
    TRACE_CONNECTS
    TRACE_DIAGNOSTICS
    TRACE_DRDA_FLOWS
    TRACE_DRIVER_CONFIGURATION
    TRACE_NONE
    TRACE_PARAMETER_META_DATA
    TRACE_RESULT_SET_CALLS
    TRACE_RESULT_SET_META_DATA
    TRACE_STATEMENT_CALLS
Command parameters:
-url jdbc:db2://server:port/dbname Specifies a JDBC URL for establishing the database connection. The DB2 JDBC type 4 driver is used to establish the connection. -user username Specifies the name used when connecting to a database. -password password Specifies the password for the user name. -collection collection-ID The collection identifier (CURRENT PACKAGESET) to use for the packages. The default is NULLID. Use this to create multiple instances of the package set. This option can only be used in conjunction with the Connection or DataSource property currentPackageSet. -size number-of-packages The number of internal packages to bind for each DB2 transaction isolation level and holdability setting. The default is 3. Since there are four DB2 isolation levels and two cursor holdability settings, there will be 4x2=8 times as many dynamic packages bound as are specified by this option. In addition, a single static package is always bound for internal use. -tracelevel Identifies the level of tracing; only required for troubleshooting.
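Drawing on the parameters above, a bind run might look like this (the host name, port, database name, and credentials are placeholders):

```
db2jdbcbind -url jdbc:db2://myhost:50000/sample -user db2admin -password mypasswd -size 5
```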
Command parameters: -u user's-Distinguished-Name Specifies the LDAP user's Distinguished Name to be used when accessing the LDAP directory. As shown in the example below, the Distinguished Name has several parts: the user ID, such as jdoe, the domain and organization names, and the suffix, such as com or org. -w password Specifies the password. -r Removes the user's DN and password from the machine environment.
Example:
db2ldcfg -u "uid=jdoe,dc=mydomain,dc=myorg,dc=com" -w password
Usage notes: In an LDAP environment using an IBM LDAP client, the default LDAP user's DN and password can be configured for the current logon user. Once configured, the LDAP user's DN and password are saved in the user's environment and used whenever DB2 accesses the LDAP directory. This eliminates the need to specify the LDAP user's DN and password when issuing the LDAP command or API. However, if the LDAP user's DN and password are specified when the command or API is issued, the default settings will be overridden. This command can only be run when using an IBM LDAP client. On a Microsoft LDAP client, the current logon user's credentials are used. Related tasks: v Configuring the LDAP user for DB2 applications in Administration Guide: Implementation Related reference: v CATALOG LDAP DATABASE on page 377
Examples: On Windows operating systems, the db2level command shows the DB2 copy name. For example:
DB21085I Instance "DB2" uses "32" bits and DB2 code release "SQL09010" with level identifier "01010107". Informational tokens are "DB2 v9.1.0.189", "n060119", "", and Fix Pack "0". Product is installed at "c:\SQLLIB" with DB2 Copy Name "db2build".
On Linux and UNIX based operating systems, the db2level command does not show the DB2 copy name. For example:
DB21085I Instance "wqzhuang" uses "64" bits and DB2 code release "SQL09010" with level identifier "01010107". Informational tokens are "DB2 v9.1.0.0", "n060124", "", and Fix Pack "0". Product is installed at "/home/wqzhuang/sqllib".
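In install or maintenance scripts it is sometimes useful to extract the version token from this output. A sketch using the sample line above; the sed pattern assumes the token always has the form "DB2 vN.N.N.N", which matches the examples shown but is an assumption about other releases.

```shell
# Sample informational-tokens line as printed by db2level (from the example above).
output='Informational tokens are "DB2 v9.1.0.189", "n060119", "", and Fix Pack "0".'
# Capture the digits-and-dots version string that follows "DB2 v".
version=$(printf '%s\n' "$output" | sed -n 's/.*"DB2 v\([0-9.]*\)".*/\1/p')
echo "$version"   # 9.1.0.189
```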
The information output by the command includes Release, Level, and various informational tokens. Related concepts: v Identifying the version and service level of your product in Troubleshooting Guide Related reference: v ENV_INST_INFO administrative view Retrieve information about the current instance in Administrative SQL Routines and Views
Command syntax:

db2licm [-a filename] [-e product-identifier] [-p product-identifier keyword] [-r product-identifier] [-u product-identifier num-users] [-n product-identifier num-processors] [-c] [-g] [-x] [-l [SHOW DETAIL]] [-v] [-h | -?]
Command parameters:
-a filename
    Adds a license for a product. Specify a file name containing valid license information. This can be obtained from your licensed product CD or by contacting your IBM representative or authorized dealer.
-e product-identifier
    Updates the enforcement policy on the system. Valid values are HARD and SOFT. HARD specifies that unlicensed requests will not be allowed. SOFT specifies that unlicensed requests will be logged but not restricted.
-p product-identifier keyword
    Updates the license policy type to use on the system. Specify OFF to turn off all policies.
-r product-identifier
    Removes the license for a product. After the license is removed, the product functions in Try & Buy mode. To get the product identifier for a specific product, invoke the command with the -l option.
-u product-identifier num-users
    Updates the number of user licenses that the customer has purchased. Specify the product identifier and the number of users.
-v
    Displays version information.
-h/-?
    Displays help information. When this option is specified, all other options are ignored.
Examples:
db2licm -a db2ese.lic
db2licm -p db2wse registered concurrent
db2licm -r db2ese
db2licm -u db2wse 10
db2licm -n db2ese 8
Related tasks: v Registering a DB2 product or feature license key using the db2licm command in Installation and Configuration Supplement
Command parameters: None. Related tasks: v Attaching a direct disk access device in Administration Guide: Implementation
Command parameters:
path
    Full path and name of the DB2TSCHG.HIS file.
-all
    Displays more detailed information.
Examples:
db2logsForRfwd /home/ofer/ofer/NODE0000/S0000001/DB2TSCHG.HIS db2logsForRfwd DB2TSCHG.HIS -all
Related concepts: v Rolling forward changes in a table space in Data Recovery and High Availability Guide and Reference
Command syntax:

db2look -d DBname [-e] [-u Creator] [-z schema] [-t Tname1 ... TnameN] [-tw Tname] [-v Vname1 ... VnameN] [-ct] [-dp] [-h] [-o Fname] [-a] [-m] [-c] [-r] [-l] [-x] [-xd] [-f] [-td delimiter] [-noview] [-i userid] [-w password] [-xs] [-xdir dirname] [-nofed]
Command parameters:
-d DBname
    Alias name of the production database that is to be queried. DBname can be the name of a DB2 Database for Linux, UNIX, and Windows or DB2 Version 9.1 for z/OS (DB2 for z/OS) database. If the DBname is a DB2 for z/OS database, the db2look utility will extract the DDL and UPDATE statistics statements for OS/390 and z/OS objects. These DDL and UPDATE statistics statements are statements applicable to a DB2 Database for Linux, UNIX, and Windows database and not to a DB2 for z/OS database. This is useful for users who want to extract OS/390 and z/OS objects and recreate them in a DB2 Database for Linux, UNIX, and Windows database. If DBname is a DB2 for z/OS database, the output of the db2look command is limited to the following:
    v Generate DDL for tables, indexes, views, and user-defined distinct types
    v Generate UPDATE statistics statements for tables, columns, column distributions and indexes
-e
    Extract DDL statements for database objects. DDL for the following database objects is extracted when using the -e option:
    v Tables
    v Views
    v Materialized query tables (MQT)
    v Aliases
    v Indexes
    v Triggers
    v Sequences
    v User-defined distinct types
    v Primary key, referential integrity, and check constraints
    v User-defined structured types
    v User-defined functions
    v User-defined methods
    v User-defined transforms
    v Wrappers
    v Servers
    v User mappings
    v Nicknames
Chapter 1. System Commands
The DDL generated by the db2look command can be used to recreate user-defined functions successfully. However, the user source code that a particular user-defined function references (the EXTERNAL NAME clause, for example) must be available in order for the user-defined function to be usable.
-u Creator
    Creator ID. Limits output to objects with this creator ID. If option -a is specified, this parameter is ignored. The output will not include any inoperative objects. To display inoperative objects, use the -a option.
-z schema
    Schema name. Limits output to objects with this schema name. The output will not include any inoperative objects. To display inoperative objects, use the -a option. If this parameter is not specified, objects with all schema names are extracted. If the -a option is specified, this parameter is ignored. This option is ignored for the federated DDL.
-t Tname1 Tname2 ... TnameN
    Table name list. Limits the output to particular tables in the table list. The maximum number of tables is 30. Table names are separated by a blank space. Case-sensitive names must be enclosed by a backslash and double quotation marks, for example, \"MyTabLe\". For multiple-word table names, the same delimiters must be used (for example, \"My Table\") to prevent the name from being evaluated word by word by the command line processor. If a multiple-word table name is not enclosed by the backslash and double quotation marks (for example, My Table), all words will be converted into uppercase and the db2look command will look for an uppercase table name (for example, MY TABLE). When -t is used with -l, the combination does not support partitioned tables in DB2 Version 9.1.
-tw Tname
    Generates DDL for table names that match the pattern criteria specified by Tname. Also generates the DDL for all dependent objects of all returned tables. Tname can be a single value only. The underscore character (_) in Tname represents any single character.
The percent sign (%) represents a string of zero or more characters. Any other character in Tname represents only itself. When -tw is specified, the -t option is ignored.
-ct
    Generate DDL by object creation time. Generating DDL by object creation time does not guarantee that all the object DDL will be displayed in correct dependency order. The db2look command only supports the following options if the -ct option is also specified: -e, -a, -u, -z, -t, -tw, -v, -l, -noview.
-dp
    Generate a DROP statement before each CREATE statement. The DROP statement might not work if there is an object that depends on the dropped object. For example, dropping a schema will fail if there is a table that depends on the dropped schema, and dropping a user-defined type or function will fail if there is any other type, function, trigger, or table that depends on it. For typed tables, the DROP TABLE HIERARCHY statement will be generated.
-o Fname
    If using LaTeX format, write the output to filename.tex. If using plain text format, write the output to filename.txt. Otherwise, write the output to filename.sql. If this option is not specified, output is written to standard output. If a file name is specified with an extension, the output is written into that file.
-a
    When this option is specified, the output is not limited to the objects created under a particular creator ID. All objects, including inoperative objects, created by all users are considered. For example, if this option is specified with the -e option, DDL statements are extracted for all objects in the database. If this option is specified with the -m option, UPDATE statistics statements are extracted for all user-created tables and indexes in the database. If neither -u nor -a is specified, the environment variable USER is used. On UNIX operating systems, this variable does not have to be explicitly set; on Windows systems, however, there is no default value for the USER environment variable: a USER variable must be set in the SYSTEM variables, or a set USER=<username> command must be issued for the session.
-m
    Generates the required UPDATE statements to replicate the statistics on tables, statistical views, columns and indexes.
-c
    When this option is specified in conjunction with the -m option, the db2look command does not generate COMMIT, CONNECT and CONNECT RESET statements. The default action is to generate these statements.
-r
    When this option is specified in conjunction with the -m option, the db2look command does not generate the RUNSTATS command. The default action is to generate the RUNSTATS command.
-l
    If this option is specified, the db2look command will generate DDL for user-defined table spaces, database partition groups and buffer pools. DDL for the following database objects is extracted when using the -l option:
    v User-defined table spaces
    v User-defined database partition groups
    v User-defined buffer pools
-x
    If this option is specified, the db2look command will generate authorization DDL (GRANT statements, for example). The supported authorizations include:
    v Table: ALTER, SELECT, INSERT, DELETE, UPDATE, INDEX, REFERENCE, CONTROL
    v View: SELECT, INSERT, DELETE, UPDATE, CONTROL
-xd
    If this option is specified, the db2look command will generate all authorization DDL, including authorization DDL for objects whose authorizations were granted by SYSIBM at object creation time.
-f
    Use this option to extract the configuration parameters and registry variables that affect the query optimizer. The db2look command generates an update command for the following configuration parameters:
    v Database manager configuration parameters: cpuspeed, intra_parallel, comm_bandwidth, nodetype, federated, fed_noauth
    v Database configuration parameters: locklist, dft_degree, maxlocks, avg_appls, stmtheap, dft_queryopt
    The db2look command generates the db2set command for the following DB2 registry variables:
    v DB2_PRED_FACTORIZE
    v DB2_CORRELATED_PREDICATES
    v DB2_LIKE_VARCHAR
    v DB2_SORT_AFTER_TQ
    v DB2_ORDERED_NLJN
    v DB2_NEW_CORR_SQ_FF
    v DB2_PART_INNER_JOIN
    v DB2_INTERESTING_KEYS
-td delimiter
    Specifies the statement delimiter for SQL statements generated by the db2look command. If this option is not specified, the default is the semicolon (;).
-xdir dirname
    Places exported XML-related files into the given path. If this option is not specified, all XML-related files are exported into the current directory.

Examples:
v Generate the DDL statements for objects created by user walid in database DEPARTMENT. The db2look output is sent to file db2look.sql:
db2look -d department -u walid -e -o db2look.sql
v Generate the UPDATE statements to replicate the statistics for the database objects created by user walid in database DEPARTMENT. The output is sent to file db2look.sql:
db2look -d department -u walid -m -o db2look.sql
v Generate both the DDL statements for the objects created by user walid and the UPDATE statements to replicate the statistics on the database objects created by the same user. The db2look output is sent to file db2look.sql:
db2look -d department -u walid -e -m -o db2look.sql
v Generate the DDL statements for objects created by all users in the database DEPARTMENT. The db2look output is sent to file db2look.sql:
db2look -d department -a -e -o db2look.sql
v Generate the DDL statements for all user-defined database partition groups, buffer pools and table spaces. The db2look output is sent to file db2look.sql:
db2look -d department -l -o db2look.sql
v Generate the UPDATE statements for optimizer-related database and database manager configuration parameters, as well as the db2set statements for optimizer-related registry variables in database DEPARTMENT. The db2look output is sent to file db2look.sql:
db2look -d department -f -o db2look.sql
v Generate the DDL for all objects in database DEPARTMENT, the UPDATE statements to replicate the statistics on all tables and indexes in database DEPARTMENT, the GRANT authorization statements, the UPDATE statements for optimizer-related database and database manager configuration parameters, the db2set statements for optimizer-related registry variables, and the DDL for all user-defined database partition groups, buffer pools and table spaces in database DEPARTMENT. The output is sent to file db2look.sql.
db2look -d department -a -e -m -l -x -f -o db2look.sql
v Generate all authorization DDL statements for all objects in database DEPARTMENT, including the objects created by the original creator. (In this case, the authorizations were granted by SYSIBM at object creation time.) The db2look output is sent to file db2look.sql:
db2look -d department -xd -o db2look.sql
v Generate the DDL statements for objects created by all users in the database DEPARTMENT. The db2look output is sent to file db2look.sql:
db2look -d department -a -e -td % -o db2look.sql
v Generate the DDL statements for objects in database DEPARTMENT, excluding the CREATE VIEW statements. The db2look output is sent to file db2look.sql:
db2look -d department -e -noview -o db2look.sql
v Generate the DDL statements for objects in database DEPARTMENT related to specified tables. The db2look output is sent to file db2look.sql:
db2look -d department -e -t tab1 \"My TaBlE2\" -o db2look.sql
v Generate a script file that includes only non-federated DDL statements. The following system command can be run against a federated database (FEDDEPART) and yet only produce output like that found when run against a database which is not federated. The db2look output is sent to a file out.sql:
db2look -d feddepart -e -nofed -o out
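The -o naming rule this example relies on (a name without an extension gets .sql appended, producing out.sql here) can be sketched as a small shell function. This mimics the documented default behaviour only; it is not the utility itself, and it ignores the LaTeX and plain-text cases, which depend on format options not modeled here.

```shell
# Mimic of the documented -o file-naming rule for the default SQL format.
outname() {
  case "$1" in
    *.*) printf '%s\n' "$1" ;;      # an explicit extension is kept as-is
    *)   printf '%s\n' "$1.sql" ;;  # no extension: .sql is appended
  esac
}
outname out        # out.sql
outname results.txt  # results.txt
```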
v Generate the DDL statements for objects that have schema name walid in the database DEPARTMENT. The files required to register any included XML schemas and DTDs are exported to the current directory. The db2look output is sent to file db2look.sql:
db2look -d department -z walid -e -xs -o db2look.sql
v Generate the DDL statements for objects created by all users in the database DEPARTMENT. The files required to register any included XML schemas and DTDs are exported to directory /home/ofer/ofer/. The db2look output is sent to standard output:
db2look -d department -a -e -xs -xdir /home/ofer/ofer/
Usage notes: On Windows operating systems, the db2look command must be run from a DB2 command window. Several of the existing options support a federated environment. The following db2look command line options are used in a federated environment:
v -e  When used, federated DDL statements are generated.
v -x  When used, GRANT statements are generated to grant privileges to the federated objects.
v -xd When used, federated DDL statements are generated to add system-granted privileges to the federated objects.
v -f  When used, federated-related information is extracted from the database manager configuration.
v -m  When used, statistics for nicknames are extracted.
The ability to use federated systems needs to be enabled in the database manager configuration in order to create federated DDL statements. After the db2look command generates the script file, you must set the federated configuration parameter to YES before running the script. You also need to modify the output script to add the remote passwords for the CREATE USER MAPPING statements.
v To generate the DDL statements for objects in the DEPARTMENT database associated with tables that have a d as the second character of the name and to send the output to the db2look.sql file:
db2look -d department -e -tw _d% -o db2look.sql
v The db2look command uses the LIKE predicate when evaluating which table names match the pattern specified by the Tname argument. Because the LIKE predicate is used, if either the _ character or the % character is part of the table name, the backslash (\) escape character must be used immediately before the _ or the %. In this situation, neither the _ nor the % can be used as a wildcard character in Tname. For example, to generate the DDL statements for objects in the DEPARTMENT database associated with tables that have a percent sign in neither the first nor the last position of the name:
db2look -d department -e -tw string\%string
v Case-sensitive and multi-word table names must be enclosed by both a backslash and double quotation marks. For example:
\"My TabLe\"
v The -tw option can be used with the -x option (to generate GRANT privileges), the -m option (to return table and column statistics), and the -l option (to generate the DDL for user-defined table spaces, database partition groups, and buffer pools). If the -t option is specified with the -tw option, the -t option (and its associated Tname argument) is ignored.
v The -tw option cannot be used to generate the DDL for tables (and their associated objects) that reside on federated data sources, or on DB2 Universal Database for z/OS and OS/390, DB2 Universal Database for iSeries, or DB2 Server for VSE & VM.
v The -tw option is only supported via the CLP.
When requesting DDL on systems using the database partitioning feature, a warning message will be displayed in place of the DDL for table spaces that exist on inactive database partitions. To ensure proper DDL is produced for all table spaces, all database partitions must be activated.
Related concepts: v Statistics for modeling production databases in Performance Guide
Related reference: v LIKE predicate in SQL Reference, Volume 1
Command syntax:

db2ls [-q [-f feature-rsp-file-ID] [-a] [-p] [-b base-install-path] [-c]] [-l log-file]

Command parameters:
-q
    Signifies that the query is to list installed DB2 products and features. By default, only the visible components (features) are displayed unless the -a parameter is also specified.
-f feature-rsp-file-ID
    Queries for the specific feature, if it is installed. If it is not installed, the return code from the program is non-zero; otherwise, the return code is zero.
-a
    Lists all hidden components as well as visible features. The db2ls command only lists visible features by default.
-p
    Lists products only. This gives a brief list of which products the customer has installed rather than listing the features.
-b base-install-path
    When using the global db2ls command in /usr/local/bin, you need to specify which directory you are querying. The global db2ls command will simply call the db2ls from that install path and pass in the rest of the parameters.
-c
    Prints the output as a colon-separated list of entries rather than column-based. This allows you to work with this information programmatically. The first line of output will be a colon-separated list of tokens to describe each entry. This first line will start with a hash character (#) to make it easy to ignore programmatically.
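A hypothetical consumer of the -c output might look like the sketch below. The header tokens and field layout shown are illustrative assumptions for this example, not the actual db2ls output format, which varies by release.

```shell
# Hypothetical db2ls -c output: a #-prefixed header line naming the fields,
# then one colon-separated entry per installed product.
db2ls_output='#PATH:LEVEL:FP:INSTALLTIME
/opt/ibm/db2/V9.1:9.1.0.0:0:Tue Jan 24 10:00:00 2006'
# Skip the header (as the documentation suggests) and take the install path field.
install_path=$(printf '%s\n' "$db2ls_output" | grep -v '^#' | cut -d: -f1)
echo "$install_path"   # /opt/ibm/db2/V9.1
```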
Usage notes:
v At least one DB2 Version 9 product must already be installed for a symbolic link to the db2ls command to be available in the /usr/local/bin directory.
v The db2ls command is the only method to query a DB2 product. You cannot query DB2 products using Linux or UNIX operating system native utilities such as pkgadd, rpm, SMIT, or swinstall. Any existing scripts containing a native installation utility that you use to interface with and query DB2 installations will need to change.
v You cannot use the db2ls command on Windows operating systems.
Related reference: v ENV_PROD_INFO administrative view - Retrieve information about installed DB2 products in Administrative SQL Routines and Views v db2_deinstall - Uninstall DB2 products or features on page 10
db2move dbname action [-tc table-definers] [-tn table-names] [-sn schema-names] [-ts tablespace-names] [-tf filename] [-io import-option] [-lo load-option] [-co copy-option] [-l lobpaths] [-u userid] [-p password] [-aw]
Command parameters:
dbname
    Name of the database.
action
    Must be one of:
    EXPORT Exports all tables that meet the filtering criteria in options. If no options are specified, exports all the tables. Internal staging information is stored in the db2move.lst file.
    IMPORT Imports all tables listed in the internal staging file db2move.lst. Use the -io option for IMPORT specific actions.
    LOAD Loads all tables listed in the internal staging file db2move.lst. Use the -lo option for LOAD specific actions.
    COPY Duplicates one or more schemas into a target database. Use the -sn option to
-io import-option
    The default is REPLACE_CREATE. Valid options are: INSERT, INSERT_UPDATE, REPLACE, CREATE, and REPLACE_CREATE.
-lo load-option
    The default is INSERT. Valid options are: INSERT and REPLACE.
-co copy-option
    When the db2move action is COPY, the following -co follow-on options are available:
    TARGET_DB <db name> [USER <userid> USING <password>]
        Allows the user to specify the name of the target database and the user/password. (The source database user/password can be specified using the existing -p and -u options.) The USER/USING clause is optional. If USER specifies a userid, then the password must either be supplied following the USING clause or, if it is not specified, db2move will prompt for the password information. The reason for prompting is the security considerations discussed below. TARGET_DB is a mandatory option for the COPY action. The TARGET_DB cannot be the same as the source database. The ADMIN_COPY_SCHEMA procedure can be used for copying schemas within the same database. The COPY action requires inputting at least one schema (-sn) or one table (-tn or -tf). Running multiple db2move commands to copy schemas from one database to another will result in deadlocks; only one db2move command should be issued at a time. Changes to tables in the source schema during copy processing may mean that the data in the target schema is not identical following a copy.
    MODE
        DDL_AND_LOAD Creates all supported objects from the source schema, and populates the tables with the source table data. This is the default option.
        DDL_ONLY Creates all supported objects from the source schema, but does not repopulate the tables.
        LOAD_ONLY Loads all specified tables from the source database to the target database. The tables must already exist on the target.
-aw
v To export all tables created by userid1 or user IDs LIKE us%rid2, and with the name tbname1 or table names LIKE %tbname2, issue:
db2move sample export -tc userid1,us*rid2 -tn tbname1,*tbname2
v To import all tables in the SAMPLE database (LOB paths D:\LOBPATH1 and C:\LOBPATH2 are to be searched for LOB files; this example is applicable to Windows operating systems only), issue:
db2move sample import -l D:\LOBPATH1,C:\LOBPATH2
v To load all tables in the SAMPLE database (/home/userid/lobpath subdirectory and the tmp subdirectory are to be searched for LOB files; this example is applicable to Linux and UNIX-based systems only), issue:
db2move sample load -l /home/userid/lobpath,/tmp
v To import all tables in the SAMPLE database in REPLACE mode using the specified user ID and password, issue:
db2move sample import -io replace -u userid -p password
v To duplicate schema schema1 from source database dbsrc to target database dbtgt, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt USER myuser1 USING mypass1
v To duplicate schema schema1 from source database dbsrc to target database dbtgt, rename the schema to newschema1 on the target, and map source tablespace ts1 to ts2 on the target, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt USER myuser1 USING mypass1
  SCHEMA_MAP ((schema1,newschema1)) TABLESPACE_MAP ((ts1,ts2), SYS_ANY)
Usage notes:
v Loading data into tables containing XML columns is not supported. The workaround is to manually issue the IMPORT or EXPORT commands, or use the db2move Export and db2move Import behaviour. If these tables also contain generated always identity columns, data cannot be imported into the tables.
v This tool exports, imports, or loads user-created tables. If a database is to be duplicated from one operating system to another operating system, db2move facilitates the movement of the tables. It is also necessary to move all other objects associated with the tables, such as aliases, views, triggers, user-defined functions, and so on. If the import utility with the REPLACE_CREATE option is used to create the tables on the target database, then the limitations outlined in Using import to recreate an exported table are imposed. If unexpected errors are encountered during the db2move import phase when the REPLACE_CREATE option is used, examine the appropriate tabnnn.msg message file and consider that the errors might be the result of the limitations on table creation.
v When the export, import, or load APIs are called by db2move, the FileTypeMod parameter is set to lobsinfile. That is, LOB data is kept in files separate from the PC/IXF file, for every table.
v The LOAD command must be run locally on the machine where the database and the data file reside. When the load API is called by db2move, the
Files Required/Generated When Using IMPORT:
v Input:
  db2move.lst  An output file from the EXPORT action.
  tabnnn.ixf   An output file from the EXPORT action.
  tabnnnc.yyy  An output file from the EXPORT action.
v Output:
  IMPORT.out   The summarized result of the IMPORT action.
  tabnnn.msg   The import message file of the corresponding table.
Files Required/Generated When Using LOAD:
v Input:
  db2move.lst  An output file from the EXPORT action.
  tabnnn.ixf   An output file from the EXPORT action.
  tabnnnc.yyy  An output file from the EXPORT action.
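As a hedged illustration, a pre-flight script could verify that each table named in the staging list has its exported .ixf file present before running IMPORT or LOAD. The one-table-name-per-line db2move.lst format used here is a simplification for the example; the real file carries additional staging information.

```shell
# Work in a scratch directory so the demo files do not collide with real ones.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
printf 'tab1\ntab2\n' > db2move.lst   # simplified staging list (assumption)
touch tab1.ixf                        # tab2.ixf is deliberately missing
# Report every listed table whose export file is absent.
missing=$(while read -r t; do
  [ -f "$t.ixf" ] || echo "missing export file: $t.ixf"
done < db2move.lst)
echo "$missing"   # missing export file: tab2.ixf
```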
Files Required/Generated When Using COPY:
v Input: None
v Output:
  COPYSCHEMA.msg  An output file from the COPY action.
  COPYSCHEMA.err  An output file from the COPY action.
  LOADTABLE.msg   An output file from the COPY action.
  LOADTABLE.err   An output file from the COPY action.
These files are timestamped, and all files that are generated from one run will have the same timestamp.
Related reference: v db2look - DB2 statistics and DDL extraction tool on page 146
db2mqlsn - MQ listener
Invokes the asynchronous MQListener to monitor a set of WebSphere MQ message queues, passing messages that arrive on them to configured DB2 stored procedures. It can also perform associated administrative and configuration tasks. MQListener configuration information is stored in a DB2 database and consists of a set of named configurations, including a default. Each configuration is composed of a set of tasks. MQListener tasks are defined by the message queue from which to retrieve messages and the stored procedure to which they will be passed. The message queue description must include the name of the message queue and its queue manager, if it is not the default. Information about the stored procedure must include the database in which it is defined, a user name and password with which to access the database, and the procedure name and schema. On Linux and UNIX operating systems, this utility is located in the DB2DIR/instance directory, where DB2DIR is the location where the current version of the DB2 database product is installed. On Windows operating systems, this utility is located in the DB2PATH\sqllib\bin directory, where DB2PATH is the location where the current version of the DB2 database product is installed. For more information about controlling access to WebSphere MQ objects, refer to the WebSphere MQ System Administration Guide (SC34-6068-00). Authorization: v All options except db2mqlsn admin access the MQListener configuration in the configDB database. The connection is made as configUser or, if no user is specified, an implicit connection is attempted. The user in whose name the connection is made must have EXECUTE privilege on package mqlConfi. v To access MQ objects with the db2mqlsn run and db2mqlsn admin options, the user who executes the program must be able to open the appropriate MQ objects. 
v To execute the db2mqlsn run option successfully, the dbUser specified in the db2mqlsn add option that created the task must have EXECUTE privilege on the specified stored procedure, and must have EXECUTE privilege on the package mqlRun in the dbName database. Command syntax:
db2mqlsn help [command]
db2mqlsn run configuration run-parameters
db2mqlsn add configuration add-parameters
db2mqlsn remove configuration remove-parameters
db2mqlsn show configuration
db2mqlsn admin admin-parameters
configuration:
-configDB configuration-database-name [-configUser user-ID] [-configPwd password]
run parameters:

[-adminQueue admin-queue-name [-adminQMgr admin-queue-manager]]

add parameters:

-inputQueue input-queue-name [-queueManager queue-manager-name] -procSchema stored-procedure-schema -procName stored-procedure-name -dbName database-name [-dbUser user-ID -dbPwd password] [-mqCoordinated] [-numInstances number-of-instances]
remove parameters:
-inputQueue input queue name -queueManager queue manager name
admin parameters:
[-adminQueue admin-queue-name | -adminQueueList namelist-of-admin-queue-names] -adminCommand shutdown | restart
Command parameters:
help command
    Supplies detailed information about a particular command. If you do not give a command name, then a general help message is displayed.
configDB configuration database
    Name of the database that contains the configuration information.
configUser user ID, configPwd password
    Authorization information with which to access the configuration database.
config configuration name
    You can group individual tasks into a configuration. By doing this you can run a group of tasks together. If you do not specify a configuration name, then the utility runs the default configuration.
run
    adminQueue admin queue name, adminQMgr admin queue manager
        This is the queue on which the MQListener listens for administration commands. If you do not specify a queue manager,
then the utility uses the configured default queue manager. If you do not specify an adminQueue, then the application does not receive any administration commands (such as shut down or restart) through the message queue. add inputQueue input queue name queueManager queue manager name This is the queue on which the MQListener listens for messages for this task. If you do not specify a queue manager, the utility uses the default queue manager configured in WebSphere MQ. procSchema stored procedure schema procName stored procedure name The stored procedure to which MQListener passes the message when it arrives. dbName stored procedure database MQListener passes the message to a stored procedure. This is the database in which the stored procedure is defined. dbUser user ID dbPwd password The user on whose behalf the stored procedure is invoked. mqCoordinated This indicates that reading and writing to the WebSphere MQ message queue should be integrated into a transaction together with the DB2 stored procedure call. The entire transaction is coordinated by the WebSphere MQ coordinator. (The queue manager must also be configured to coordinate a transaction in this way. See the WebSphere MQ documentation for more information.) By default, the message queue operations are not part of the transaction in which the stored procedure is invoked. numInstances number of instances to run The number of duplicate instances of this task to run in this configuration. If you do not specify a value, then only one instance is run. remove inputQueue input queue name queueManager queue manager name This is the queue and queue manager that define the task that will be removed from the configuration. The combination of input queue and queue manager is unique within a configuration. admin adminQueue admin queue name adminQueueList namelist of admin queue names adminQMgr admin queue manager The queue or namelist of queue names on which to send the admin command. 
      If you do not specify a queue manager, the utility uses the default queue manager that is configured in WebSphere MQ.
   adminCommand admin command
      Submits a command. The command can be either shutdown or restart. Shutdown causes a running MQListener to exit when the listener finishes processing the current message. Restart performs a shutdown, and then reads the configuration again and restarts.
Examples:
db2mqlsn show -configDB sampleDB -config nightlies
db2mqlsn add -configDB sampleDB -config nightlies -inputQueue app3
   -procSchema imauser -procName proc3 -dbName aDB -dbUser imauser -dbPwd aSecret

db2mqlsn run -configDB sampleDB -config nightlies
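The admin command can be used to stop or restart a running listener. The following invocation is a hedged sketch: the admin queue name admin1 is hypothetical, and the other names reuse the sampleDB/nightlies configuration from the examples above:

db2mqlsn admin -configDB sampleDB -config nightlies -adminQueue admin1
   -adminCommand shutdown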
db2mscs - Set up Windows failover utility

Command parameters:
-f:input_file
   Specifies the DB2MSCS.CFG input file to be used by the MSCS utility. If this parameter is not specified, the DB2MSCS utility reads the DB2MSCS.CFG file that is in the current directory.
-u:instance_name
   Allows you to undo the db2mscs operation and revert the instance back to the non-MSCS instance specified by instance_name.
Usage notes:
The DB2MSCS utility is a standalone command-line utility used to transform a non-MSCS instance into an MSCS instance. The utility creates all MSCS groups, resources, and resource dependencies. It also copies all DB2 information stored in the Windows registry to the cluster portion of the registry, and moves the instance directory to a shared cluster disk. The DB2MSCS utility takes as input a configuration file, provided by the user, that specifies how the cluster should be set up. The DB2MSCS.CFG file is an ASCII text file that contains parameters that are read by the DB2MSCS utility. You specify each input parameter on a separate line using the format PARAMETER_KEYWORD=parameter_value. For example:
CLUSTER_NAME=FINANCE
GROUP_NAME=DB2 Group
IP_ADDRESS=9.21.22.89
Two example configuration files can be found in the CFG subdirectory under the DB2 install directory. The first, DB2MSCS.EE, is an example for single-partition database environments. The second, DB2MSCS.EEE, is an example for partitioned database environments. The parameters for the DB2MSCS.CFG file are as follows:
DB2_INSTANCE
   The name of the DB2 instance. This parameter has a global scope and should be specified only once in the DB2MSCS.CFG file.
DAS_INSTANCE
   The name of the DB2 Admin Server instance. Specify this parameter to
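The parameter list above is cut off at a page break. As an illustrative sketch only (the values are reused from the example above and the keyword set is not complete), a minimal DB2MSCS.CFG and its invocation might look like:

CLUSTER_NAME=FINANCE
GROUP_NAME=DB2 Group
DB2_INSTANCE=DB2
IP_ADDRESS=9.21.22.89

db2mscs -f:db2mscs.cfg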
Command parameters:
-i Show instance level memory.
-d Show database level memory.
-p Show private memory.
-m Show maximum values for each pool.
-w Show high watermark values for each pool.
-v Verbose output.
-r interval count
   Repeat mode.
   interval Number of seconds to wait between subsequent calls to the memory tracker (in repeat mode).
   count Number of times to repeat.
Examples: The following call returns database and instance normal values and repeats every 10 seconds:
db2mtrk -i -d -v -r 10
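A hedged variant using the flags documented above: to display the high watermark value for each pool instead of the current values, and to stop after three repetitions (the trailing value is the repeat count):

db2mtrk -i -d -w -r 10 3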
Consider the following output samples: The command db2mtrk -i -d -p displays the following output:
Tracking Memory on: 2006/01/17 at 15:24:38

Memory for instance
   monh         other
   576.0K       8.0M

Memory for database: AJSTORM
   utilh        pckcacheh    catcacheh    bph (1)      bph (S32K)   bph (S16K)
   64.0K        640.0K       128.0K       34.2M        576.0K       320.0K
   bph (S8K)    lockh        dbh          other
   192.0K       9.6M         4.8M         192.0K

Memory for database: CMGARCIA
   utilh        pckcacheh    catcacheh    bph (1)      bph (S32K)   bph (S16K)
   64.0K        640.0K       128.0K       34.2M        576.0K       320.0K
   bph (S8K)    bph (S4K)    shsorth      lockh        dbh          other
   192.0K       128.0K       64.0K        9.6M         4.8M         192.0K

Memory for agent 970830
   other        appctlh      apph
   64.0K        64.0K        64.0K

Memory for agent 4460644
   other        appctlh      apph
   64.0K        64.0K        64.0K
Usage notes:
1. When no flags are specified, usage is returned.
2. One of the -d, -h, -i, or -p flags must be specified.
3. When the -p flag is specified, detailed private memory usage information is returned, ordered by agent ID.
4. The "Other Memory" reported is the memory associated with the overhead of operating the database management system.
5. In some cases (such as the package cache), the maximum size displayed will be larger than the value assigned to the configuration parameter. In such cases, the value assigned to the configuration parameter is used as a soft limit, and the pool's actual memory usage might grow beyond the configured size.
6. For the buffer pool heaps, the value in parentheses is either the buffer pool ID, or an indication that this buffer pool is one of the system buffer pools.
7. The maximum size that the memory tracker reports for some heaps is the amount of physical memory on the machine. These heaps are called unbounded heaps and are declared with an unlimited maximum size, because when the heaps are declared it is not clear how much memory they will require at peak times. Although these heaps are not strictly bounded by the physical memory on the machine, it is reported as the maximum size because it is a reasonable approximation.
Related concepts:
v Memory allocation in DB2 in Performance Guide
Command syntax:
db2nchg /n:dbpartitionnum [/i:instance_name] [/u:user,password] [/p:logical_port] [/h:hostname] [/m:machine_name] [/g:network_name]
Command parameters:
/n:dbpartitionnum
   Specifies the database partition number of the database partition server whose configuration is to be changed.
/i:instance_name
   Specifies the instance in which this database partition server participates. If this parameter is not specified, the default is the current instance.
/u:username,password
   Specifies the user name and password. If this parameter is not specified, the existing user name and password remain unchanged.
/p:logical_port
   Specifies the logical port for the database partition server. This parameter must be specified to move the database partition server to a different machine. If this parameter is not specified, the logical port number remains unchanged.
/h:host_name
   Specifies the TCP/IP host name used by FCM for internal communications. If this parameter is not specified, the host name remains the same.
/m:machine_name
   Specifies the machine where the database partition server will reside. The database partition server can only be moved if there are no existing databases in the instance.
/g:network_name
   Changes the network name for the database partition server. This parameter can be used to apply a specific IP address to the database partition server when there are multiple IP addresses on a machine. The network name or the IP address can be entered.
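As a hedged example built only from the parameters documented above (the instance name MYINST is hypothetical): assign logical port 2 to database partition server 1 of instance MYINST:

db2nchg /n:1 /i:MYINST /p:2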
Related reference: v db2ncrt - Add database partition server to an instance on page 178 v db2ndrop - Drop database partition server from an instance on page 180
Command syntax:
db2ncrt /n:dbpartitionnum /u:username,password [/i:instance_name] [/m:machine_name] [/p:logical_port] [/h:host_name] [/g:network_name] [/o:instance_owning_machine]
Command parameters:
/n:dbpartitionnum
   A unique database partition number that identifies the database partition server. The number entered can range from 1 to 999.
/u:domain_name\username,password
   Specifies the domain, logon account name, and password for DB2.
/i:instance_name
   Specifies the instance name. If this parameter is not specified, the default is the current instance.
/m:machine_name
   Specifies the computer name of the Windows workstation on which the database partition server resides. This parameter is required if a database partition server is added on a remote computer.
/p:logical_port
   Specifies the logical port number used for the database partition server. If this parameter is not specified, the logical port number assigned will be 0.
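A hedged example using the parameters documented above (the domain, account, instance, and machine names are hypothetical): add database partition server 2 on remote computer SERVER2 at logical port 1:

db2ncrt /n:2 /u:MYDOMAIN\db2admin,password /i:MYINST /m:SERVER2 /p:1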
Related reference: v db2nchg - Change database partition server configuration on page 176 v db2ndrop - Drop database partition server from an instance on page 180
Command parameters:
/n:dbpartitionnum
   A unique database partition number that identifies the database partition server.
/i:instance_name
   Specifies the instance name. If this parameter is not specified, the default is the current instance.
Examples:
db2ndrop /n:2 /i:KMASCI
Usage notes: If the instance-owning database partition server (dbpartitionnum 0) is dropped from the instance, the instance becomes unusable. To drop the instance, use the db2idrop command. This command should not be used if there are databases in this instance. Instead, the db2stop drop nodenum command should be used. This ensures that the database partition server is correctly removed from the partitioned database system. It is also possible to drop a database partition server in an instance where a database exists. The db2nodes.cfg file should not be edited, since changing the file might cause inconsistencies in the partitioned database system. To drop a database partition server that is assigned to logical port 0 from a machine that is running multiple logical database partition servers, all other database partition servers assigned to the other logical ports must be dropped first. Each machine must have a database partition server assigned to logical port 0. Related reference: v db2nchg - Change database partition server configuration on page 176 v db2ncrt - Add database partition server to an instance on page 178
# -c    Client only
# -f    Compare to current
# -h    Help screen
# -l    List current
# -m    Specify memory in GB
# -n    Specify number of CPUs
# -p    Msg Q performance level (0-3)
# -s    Scale factor (1-3)
# -t    Number of threads
Command parameters:
-c The -c switch is for client-only installations. This option is available only on DB2 for the Solaris operating system.
-f The -f switch can be used to compare the current kernel parameters with the values that would be recommended by the db2osconf utility. The -f option is the default if no other options are entered with the db2osconf command. On the Solaris operating system, only the kernel parameters that differ will be displayed. Since the current kernel parameters are taken directly from the live kernel, they might not match those in /etc/system, the Solaris system specification file. If the kernel parameters from the live kernel are different from those listed in /etc/system, the /etc/system file might have been changed without a reboot, or there might be a syntax error in the file. On HP-UX, the -f option returns a list of recommended parameters and a list of recommended changes to parameter values:
   ****** Please Change the Following in the Given Order ******
   WARNING [<parameter name>] should be set to <value>
-l The -l switch lists the current kernel parameters.
-m The -m switch overrides the amount of physical memory in GB. Normally, the db2osconf utility determines the amount of physical memory automatically. This option is available only on DB2 for the Solaris operating system.
-n The -n switch overrides the number of CPUs on the system. Normally, the db2osconf utility determines the number of CPUs automatically. This option is available only on DB2 for the Solaris operating system.
-s The -s switch sets the scale factor (1-3).
-t The -t switch sets the number of threads.
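As a hedged example of combining these switches (the values are illustrative, and -m and -n apply only on the Solaris operating system): produce recommendations for a system treated as having 8 GB of memory, 4 CPUs, and 500 threads:

db2osconf -m 8 -n 4 -t 500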
Total kernel space for IPC: 0.35MB (shm) + 1.77MB (sem) + 1.34MB (msg) == 3.46MB (total)
The recommended values for set semsys:seminfo_semume and set shmsys:shminfo_shmseg were the additional values provided by running db2osconf -t 500.
Usage notes:
Even though it is possible to recommend kernel parameters based on a particular DB2 workload, this level of accuracy is not beneficial. If the kernel parameter values are too close to what is actually needed and the workload changes in the future, DB2 might encounter a problem due to a lack of interprocess communication (IPC) resources. A lack of IPC resources can lead to an unplanned outage for DB2, and a reboot would be necessary in order to increase the kernel parameters. Setting the kernel parameters reasonably high should reduce or eliminate the need to change them in the future. The amount of memory consumed by the kernel parameter recommendations is almost trivial compared to the size of the system. For example, for a system with 4GB of RAM and 4 CPUs, the amount of memory for the recommended kernel parameters is 4.67MB, or 0.11%. This small fraction of memory used for the kernel parameters should be acceptable given the benefits. On the Solaris operating system, there are two versions of the db2osconf utility: one for 64-bit kernels and one for 32-bit kernels. The utility needs to be run as root or with the group sys, since it accesses the following special devices (accesses are read-only):
crw-r-----   1 root   sys   13,  1 Jul 19 18:06 /dev/kmem
crw-rw-rw-   1 root   sys   72,  0 Feb 19  1999 /dev/ksyms
crw-r-----   1 root   sys   13,  0 Feb 19  1999 /dev/mem
Related tasks: v Modifying kernel parameters (Solaris Operating Environment) in Quick Beginnings for DB2 Servers Related reference: v Recommended kernel configuration parameters (HP-UX) in Quick Beginnings for DB2 Servers
Options summary (from the command syntax diagram): -database database, -alldatabases, -file filename, -everything, -command filename, -interactive, -full, -hadr, -utilities.
Command parameters:
-inst Returns all instance-scope information.
-help Displays the online help information.
-version Displays the current version and service level of the installed DB2 product. -dbpartitionnum num Specifies that the command is to run on the specified database partition server. -alldbpartitionnums Specifies that this command is to run on all active database partition servers in the instance. db2pd will only report information from database partition servers on the same physical machine that db2pd is being run on. -database database Specifies that the command attaches to the database memory sets of the specified database.
-utilities Reports utility information. Descriptions of each reported element can be found in the utilities section of the System Monitor Guide and Reference.
-repeat num sec count Specifies that the command is to be repeated after the specified number of seconds. If a value is not specified for the number of seconds, the command repeats every five seconds. You can also specify the number of times the output will be repeated. If you do not specify a value for count, the command is repeated until it is interrupted.
-applications Returns information about applications. If an application ID is specified, information is returned about that application. If an agent ID is specified, information is returned about the agent that is working on behalf of the application.
-fmp Returns information about the process in which the fenced routines are executed.
-agents Returns information about agents. If an agent ID is specified, information is returned about the agent. If an application ID is specified, information is returned about all the agents that are performing work for the application. Specify this option with the -inst option, if you have chosen a database for which you want scope output.
-transactions Returns information about active transactions. If a transaction handle is specified, information is returned about that transaction handle.
-locks Returns information about the locks. Specify a transaction handle to obtain information about the locks that are held by a specific transaction. Specify this option with the showlocks option to return detailed information about lock names. For row and block locks on partitioned tables and individual data partitions, showlocks displays the data partition identifier as part of the row with the lock information. Specify the wait option to return locks in a wait state and the owners of those locks. -tablespaces Returns information about the table spaces. Specify this option with the group option to display the information about the containers of a table space grouped with the table space. Specify this option with the tablespace option to display the information about a specific table space and its containers. -dynamic Returns information about the execution of dynamic SQL. -static Returns information about the execution of static SQL and packages. -fcm Returns information about the fast communication manager. v Specify this option with the -inst option, if you have chosen a database for which you want scope output. v Specify this option with the hwm option, to retrieve high-watermark consumptions of FCM buffers and channels by applications since the start of the DB2 instance. The high-watermark consumption values of applications are retained even if they have disconnected from the database already. v Specify this option with the numApps option, to limit the maximum number of applications that the db2pd command reports in the current and HWM consumption statistics.
-memsets Returns information about the memory sets. Specify this option with the -inst option to include all the instance-scope information in the returned information.
-mempools Returns information about the memory pools. Specify this option with the -inst option to include all the instance-scope information in the returned information.
-memblocks Returns information about the memory blocks within the memory pools.
Use the db2pd command from the command line in the following way to obtain information about agents that are servicing client requests. In this case, the DB2PDOPT environment variable is set with the -agents parameter before invoking the db2pd command. The command uses the information set in the environment variable when it executes.
export DB2PDOPT="-agents" db2pd
Use the db2pd command from the command line in the following way to obtain information about agents that are servicing client requests. In this case, the -agents parameter is set in the file file.out before invoking the db2pd command. The -command parameter causes the command to use the information in the file.out file when it executes.
echo "-agents" > file.out db2pd -command file.out
Use the db2pd command from the command line in the following way to obtain all database and instance-scope information:
db2pd -inst -alldbs
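A further hedged example (the database name SAMPLE is hypothetical): report locks in a wait state for one database, repeating the report every five seconds, three times:

db2pd -database SAMPLE -locks wait -repeat 5 3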
Usage notes: The following sections describe the output produced by the different db2pd parameters. v -applications on page 191 v -fmp on page 191 v -agents on page 192
-applications parameter: For the -applications parameter, the following information is returned:
ApplHandl The application handle, including the node and the index.
NumAgents The number of agents that are working on behalf of the application.
CoorPid The process ID of the coordinator agent for the application.
Status The status of the application.
Appid The application ID.
-fmp parameter: For the -fmp parameter, the following information is returned:
v Pool Size - Current number of FMP processes in the FMP pool.
v Max Pool Size - Maximum number of FMP processes in the FMP pool.
v Keep FMP - Value of the KEEPFENCED database manager configuration parameter.
v Initialized - FMP is initialized. Possible values are Yes and No.
v Trusted Path - Path of trusted procedures.
v Fenced User - Fenced user ID.
FMP Process:
v FmpPid - Process ID of the FMP process.
v Bit - Bit mode. Values are 32 bit or 64 bit.
v Flags - State flags for the FMP process. Possible values are:
  0x00000000 - JVM initialized
  0x00000002 - Is threaded
  0x00000400 - JVM initialized for debugging
  0x00000800 - Termination flag
v ActiveTh - Number of active threads running in the FMP process.
v PooledTh - Number of pooled threads held by the FMP process.
v Active - Active state of the FMP process. Values are Yes or No.
Active Threads:
v FmpPid - FMP process ID that owns the active thread.
v EduPid - EDU process ID that this thread is working for.
v ThreadId - Active thread ID.
Pooled Threads:
v FmpPid - FMP process ID that owns the pooled thread.
v ThreadId - Pooled thread ID.
-agents parameter: For the -agents parameter, the following information is returned:
AppHandl The application handle, including the node and the index.
AgentPid The process ID of the agent process.
Priority The priority of the agent.
Type The type of agent.
State The state of the agent.
ClientPid The process ID of the client process.
Userid The user ID running the agent.
ClientNm The name of the client process.
Rowsread The number of rows that were read by the agent.
Rowswrtn The number of rows that were written by the agent.
LkTmOt The lock timeout setting for the agent.
-transactions parameter:
Tflag2 Transaction flag 2. The possible values are:
v 0x00000004. The transaction has exceeded the limit specified by the num_log_span database configuration parameter.
v 0x00000008. The transaction resulted from the running of a DB2 utility.
v 0x00000020. The transaction will cede its locks to an application with a higher priority (this value ordinarily occurs for jobs that the DB2 database system automatically starts for self-tuning and self-management).
v 0x00000040. The transaction will not cede its row-level locks to an application with a higher priority (this value ordinarily occurs for jobs that the DB2 database system automatically starts for self-tuning and self-management).
Firstlsn First LSN of the transaction.
Lastlsn Last LSN of the transaction.
LogSpace The amount of log space that is reserved for the transaction.
AxRegCnt The number of applications that are registered for a global transaction. For local transactions, the value is 1.
GXID Global transaction ID. For local transactions, the value is 0.
-bufferpools parameter: For the -bufferpools parameter, the following information is returned:
First Active Pool ID The ID of the first active buffer pool.
Max Bufferpool ID The maximum ID of all active buffer pools.
Max Bufferpool ID on Disk The maximum ID of all buffer pools defined on disk.
Num Bufferpools The number of available buffer pools.
ID The ID of the buffer pool.
Name The name of the buffer pool.
PageSz The size of the buffer pool pages.
PA-NumPgs The number of pages in the page area of the buffer pool.
BA-NumPgs The number of pages in the block area of the buffer pool. This value is 0 if the buffer pool is not enabled for block-based I/O.
BlkSize The block size of a block in the block area of the buffer pool. This value is 0 if the buffer pool is not enabled for block-based I/O.
NumTbsp The number of table spaces that are using the buffer pool.
PgsLeft The number of pages left to remove in the buffer pool if its size is being decreased.
CurrentSz The current size of the buffer pool in pages.
PostAlter The size of the buffer pool in pages when the buffer pool is restarted.
SuspndTSCt The number of table spaces mapped to the buffer pool that are currently I/O suspended. If 0 is returned for all buffer pools, the database I/O is not suspended.
Pages The number of pages in the log.
Filename The file name of the log.
-locks parameter: For the -locks parameter, the following information is returned:
TranHdl The transaction handle that is requesting the lock.
Lockname The name of the lock.
Type The type of lock. The possible values are: v Row v Pool v Partition v Table v AlterTab v ObjectTab v OnlBackup v DMS Seq v Internal P v Internal V v Key Value v No Lock v Block Lock v LOG Release v LF Release v LFM File v LOB/LF 4K v APM Seq v Tbsp Load v Table Part v DJ UserMap v DF NickNm v CatCache
Owner The transaction handle that owns the lock.
HoldCount The number of holds placed on the lock. Locks with holds are not released when transactions are committed. Att The attributes of the lock. Possible values are: v 0x01 Wait for availability. v 0x02 Acquired by escalation. v 0x04 RR lock "in" block. v 0x08 Insert Lock. v 0x10 Lock by RR scan. v 0x20 Update/delete row lock. v 0x40 Allow new lock requests. v 0x80 A new lock requestor.
ReleaseFlg The lock release flags. Possible values are: v 0x80000000 Locks by SQL compiler. v 0x40000000 Non-unique, untracked locks. -tablespaces parameter: For the -tablespaces parameter, the output is organized into four segments: Table space Configuration: Id Type The table space ID. The type of table space. The possible values are: v SMS v DMS
Prefetch The number of pages read from the table space for each range prefetch request. BufID The ID of the buffer pool that this table space is mapped to. BufIDDisk The ID of the buffer pool that this table space will be mapped to at next startup. FSC File system caching, indicates whether buffered I/O was specified by the user at CREATE/ALTER TABLESPACE time. The possible values are: v Yes v No
NumCntrs The number of containers owned by a table space. MaxStripe The maximum stripe set currently defined in the table space (applicable to DMS table spaces only). LastConsecPg The last consecutive object table extent. Name The name of the table space. Table space Statistics: Id The table space ID.
TotalPages For DMS table spaces, the sum of the gross size of each of the table space's containers (reported in the total pages field of the container). For SMS table spaces, this value reflects the number of pages in the file system owned by the table space. UsablePgs For DMS table spaces, the sum of the net size of each of the table space's containers (reported in the usable pages field of the container). For SMS table spaces, this value reflects the number of pages in the file system owned by the table space.
InitSize For automatic storage table spaces, the value of this parameter is the initial size of the table space in bytes. IncSize For automatically resized table spaces, if the value of the IIP field is No, the value of this parameter is the size, in bytes, that the table space will automatically be increased by (per database partition) when the table space is full and a request for space is made. If the value of the IIP field is Yes, the value of this parameter is a percentage. IIP For automatically resized table spaces, the value of this parameter indicates whether the increment value in the IncSize field is a percent or not. The possible values are: v Yes v No
MaxSize For automatically resized table spaces, the value of this parameter specifies the maximum size, in bytes, to which the table space can automatically be increased (per database partition). A value of NONE indicates that there is no maximum size. LastResize The timestamp of the last successful automatic resize operation. LRF Last resize failed indicates whether the last automatic resizing operation was successful or not. The possible values are: v Yes v No
Table space Containers: TspId The ID of the table space that owns the container. ContainNum The number assigned to the container in the table space. Type The type of container. The possible values are: v Path v Disk v File v Striped Disk v Striped File
TotalPgs The number of pages in the container. UsablePgs The number of usable pages in the container. StripeSet The stripe set where the container resides (applicable to DMS table spaces only). Container The name of the container.
Dynamic SQL Environments: AnchID The hash anchor identifier. StmtID The statement identifier. EnvID The environment identifier. Iso The isolation level of the environment.
Dynamic SQL Variations: AnchID The hash anchor identifier. StmtID The statement identifier for this variation. EnvID The environment identifier for this variation. VarID The variation identifier. NumRef The number of times this variation has been referenced. Typ The internal statement type value for the variation section.
Lockname The variation lockname. -static parameter: For the -static parameter, the following information is returned: Static Cache: Current Memory Used The number of bytes used by the package cache. Total Heap Size The number of bytes internally configured for the package cache. Cache Overflow flag state A flag to indicate whether the package cache is in an overflow state. Number of References The number of references to packages in the package cache. Number of Package Inserts The number of package inserts into the package cache. Number of Section Inserts The number of static section inserts into the package cache. Packages: Schema The qualifier of the package. PkgName The name of the package. Version The version identifier of the package. UniqueID The consistency token associated with the package. NumSec The number of sections that have been loaded. UseCount The usage count of the cached package.
QOpt The query optimization of the package. Blk The blocking factor of the package.
Lockname The lockname of the package. Sections: Schema The qualifier of the package that the section belongs to. PkgName The package name that the section belongs to. UniqueID The consistency token associated with the package that the section belongs to. SecNo The section number. NumRef The number of times the cached section has been referenced. UseCount The usage count of the cached section. StmtType The internal statement type value for the cached section. Cursor The cursor name (if applicable). W-Hld Indicates whether the cursor is a WITH HOLD cursor. -fcm parameter: For the -fcm parameter, the following information is returned: FCM Usage Statistics: Total Buffers Total number of buffers, including all free and in-use ones. Free Buffers Number of free buffers. Buffers LWM Lowest number of free buffers. Total Channels Total number of channels, including all free and in-use ones. Free Channels Number of free channels. Channels LWM Lowest number of free channels. Total Sessions Total number of sessions, including all free and in-use ones.
Size(Kb) The size of the memory set in kilobytes.
Key The memory set key (for UNIX-based systems only).
DBP The database partition server that owns the memory set.
Type The type of memory set.
Unrsv(Kb) Memory not reserved for any particular pool. Any pool in the set can use this memory if needed.
Used(Kb) Memory currently allocated to memory pools.
Cmt(Kb) All memory that has been committed by the DB2 database, and occupies physical RAM, paging space, or both.
Uncmt(Kb) Memory not currently being used, and marked by the DB2 database as uncommitted. Depending on the operating system, this memory could occupy physical RAM, paging space, or both.
-mempools parameter: For the -mempools parameter, the following information is returned (all sizes are specified in bytes):
MemSet The memory set that owns the memory pool.
PoolName The name of the memory pool.
Id The memory pool identifier.
Sorted totals reported for each memory pool: PoolID The memory pool id that owns the memory block. PoolName The memory pool name that owns the memory block.
Sorted totals reported for each memory set:
PoolID The memory pool ID that owns the memory block.
PoolName The memory pool name that owns the memory block.
TotalSize The total size of blocks (in bytes) allocated from the same line of code and file.
%Bytes The percentage of bytes allocated from the same line of code and file.
TotalCount The number of blocks allocated from the same line of code and file.
%Count The percentage count allocated from the same line of code and file.
LOC The line of code that allocated the memory block.
File The file name hash value for the file from which the block was allocated.
-dbmcfg parameter: For the -dbmcfg parameter, current values of the database manager configuration parameters are returned. -dbcfg parameter: For the -dbcfg parameter, the current values of the database configuration parameters are returned. -catalogcache parameter: For the -catalogcache parameter, the following information is returned: Catalog Cache: Configured Size The number of bytes as specified by the catalogcache_sz database configuration parameter. Current Size The current number of bytes used in the catalog cache. Maximum Size The maximum amount of memory that is available to the cache (up to the maximum database global memory).
TableID The table identifier. TbspaceID The identifier of the table space where the table resides. LastRefID The last process identifier that referenced the table. CatalogCache LoadingLock The name of the catalog cache loading lock for the cache entry. CatalogCache UsageLock The name of the usage lock for the cache entry. Sts The status of the entry. The possible values are: v V (valid). v I (invalid).
SYSRTNS: RoutineID The routine identifier. Schema The schema qualifier of the routine. Name The name of the routine. LastRefID The last process identifier that referenced the routine. CatalogCache LoadingLock The name of the catalog cache loading lock for the cache entry. CatalogCache UsageLock The name of the usage lock for the cache entry. Sts The status of the entry. The possible values are: v V (valid). v I (invalid).
SYSRTNS_PROCSCHEMAS: RtnName The name of the routine. ParmCount The number of parameters in the routine. LastRefID The last process identifier that referenced the PROCSCHEMAS entry.
SYSDATATYPES: TypID The type identifier. LastRefID The last process identifier that referenced the type. CatalogCache LoadingLock The name of the catalog cache loading lock for the cache entry. CatalogCache UsageLock The name of the usage lock for the cache entry. Sts The status of the entry. The possible values are: v V (valid). v I (invalid).
SYSCODEPROPERTIES: LastRefID The last process identifier to reference the SYSCODEPROPERTIES entry. CatalogCache LoadingLock The name of the catalog cache loading lock for the cache entry. CatalogCache UsageLock The name of the usage lock for the cache entry. Sts The status of the entry. The possible values are: v V (valid). v I (invalid).
SYSNODEGROUPS: PMapID The distribution map identifier. RBalID The identifier of the distribution map that was used for the data redistribution. CatalogCache LoadingLock The name of the catalog cache loading lock for the cache entry. CatalogCache UsageLock The name of the usage lock for the cache entry. Sts The status of the entry. The possible values are: v V (valid). v I (invalid).
Location Name The unique name of the database server. Count The number of entries found in the list of servers. IP Address The IP address of the server. Port The IP port being used by the server. Priority The normalized Workload Manager (WLM) weight. Connections The number of active connections to this server. Status The status of the connection. The possible values are: v 0. Healthy. v 1. Unhealthy. The server is in the list but a connection cannot be established. This entry currently is not considered when establishing connections. v 2. Unhealthy. The server was previously unavailable, but currently it will be considered when establishing connections. PRDID The product identifier of the server as of the last connection. -tcbstats parameter: For the -tcbstats parameter, the following information is returned:
PgReorgs The number of page reorganizations performed. NoChgUpdts The number of updates that did not change any columns in the table.
TCB Index Information: InxTbspace The table space where the index resides. ObjectID The object identifier of the index. TbspaceID The table space identifier. TableID The table identifier. MasterTbs For partitioned tables, this is the logical table space identifier to which the partitioned table belongs. For non-partitioned tables, this value corresponds to the TbspaceID. MasterTab For partitioned tables, this is the logical table identifier of the partitioned table. For non-partitioned tables, this value corresponds to the TableID. TableName The name of the table. SchemaNm The schema that qualifies the table name. IID The index identifier.
IndexObjSize The number of pages in the index object. TCB Index Stats: TableName The name of the table. IID The index identifier.
IndexID The identifier of the index that is being used to reorganize the table. TempSpaceID The table space in which the table is being reorganized. Table Reorg Stats: TableName The name of the table. Start The time that the table reorganization started. End The time that the table reorganization ended.
PhaseStart The start time for a phase of table reorganization. MaxPhase The maximum number of reorganization phases that will occur during the reorganization. This value only applies to offline table reorganization. Phase The phase of the table reorganization. This value only applies to offline table reorganization. The possible values are: v Sort v Build v Replace v InxRecreat CurCount A unit of progress that indicates the amount of table reorganization that has been completed. The amount of progress represented by this value is relative to the value of MaxCount, which indicates the total amount of work required to reorganize the table. MaxCount A value that indicates the total amount of work required to reorganize the table. This value can be used in conjunction with CurCount to determine the progress of the table reorganization. Status The status of an online table reorganization. This value does not apply to offline table reorganizations. The possible values are: v Started v Paused v Stopped v Done v Truncat Completion The success indicator for the table reorganization. The possible values are: v 0. The table reorganization completed successfully. v -1. The table reorganization failed.
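As described above, CurCount and MaxCount together indicate how far a table reorganization has progressed. A minimal sketch of that computation (the sample numbers are invented, not real db2pd output):

```python
# Sketch: deriving a percent-complete figure from the CurCount and MaxCount
# fields described above. The sample values are invented for illustration.
def reorg_progress(cur_count, max_count):
    """Return percent complete for the current table-reorg phase."""
    if max_count <= 0:  # guard against a phase that reports no work units
        return 0.0
    return 100.0 * cur_count / max_count

print(reorg_progress(3000, 12000))  # 25.0
```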
TotWkUnits The total number of units of work (UOW) to be done for this phase of the recovery operation.
-osinfo parameter: For the -osinfo parameter, the following information is returned: CPU information: (On Windows, AIX, HP-UX, Solaris and Linux operating systems) TotalCPU Total number of CPUs. OnlineCPU Number of CPUs online. ConfigCPU Number of CPUs configured. Speed(MHz) Speed, in MHz, of CPUs.
Reserved Amount of reserved virtual memory in megabytes. Available Amount of virtual memory available in megabytes. Free Amount of virtual memory free in megabytes.
Operating system information (On Windows, AIX, HP-UX, Solaris and Linux operating systems) OSName Name of the operating system software. NodeName Name of the system. Version Version of the operating system. Machine Machine hardware identification. Message queue information (On AIX, HP-UX, and Linux operating systems) MsgSeg System-wide total of SysV msg segments. MsgMax System-wide maximum size of a message.
DeviceType Device type. FSName File system name. MountPoint Mount point of the file system. -storagepaths parameter: For the -storagepaths parameter, the following information is returned: Number of Storage Paths The number of automatic storage paths defined for the database. PathName The name of an automatic storage path defined for the database. -pages parameter: For the -pages parameter, the following information is returned for each page: BPID Bufferpool ID that contains the page.
TbspaceID Table space ID that contains the page. TbspacePgNum Logical page number within the table space (DMS only). ObjID Object ID that contains the page. ObjPgNum Logical page number within the object. ObjClass Class of object contained in the page. Possible values are Perm, Temp, Reorg, Shadow, and EMP. ObjType Type of object contained in the page. Possible values are Data, Index, LongField, XMLData, SMP, LOB, LOBA, and MDC_BMP.
Chapter 1. System Commands
Prefetched Indicates if the page has been prefetched. Possible values are Y and N. Related tasks: v Identifying the owner of a lock that is being waited on in Troubleshooting Guide Related reference: v GET DATABASE CONFIGURATION on page 457 v GET DATABASE MANAGER CONFIGURATION on page 463 v db2pdcfg - Configure DB2 database for problem determination behavior on page 223 v SYSCAT.ROUTINES catalog view in SQL Reference, Volume 1 v SYSCAT.TABLES catalog view in SQL Reference, Volume 1
db2pdcfg
dbmcfg xml=<0,1>
Command parameters: -catch Instructs the database manager to catch an error or warning. v Specify this option with the clear option to clear any catch flags that are set. v Specify this option with the status option to display the status of any catch flags that are set. v Specify this option with the <errorCode> option to catch a specific error or warning. Possible errorCodes are: <sqlCode>[,<reasonCode>] ZRC (hex or integer) ZRC #define (such as SQLP_LTIMEOUT)
ECF (hex or integer) deadlock or locktimeout v Specify this option with the <action> option to set the desired action when the error or warning is caught by the database manager. Possible actions are: [stack] (default) - Produce stack trace in db2diag.log [db2cos] (default) - Run sqllib/db2cos callout script [stopdb2trc] - Stop db2trc [dumpcomponent] - Dump component flag [component=<componentID>] - Component ID [lockname=<lockname>] - Lockname for catching a specific lock (for example, lockname=000200030000001F0000000052) [locktype=<locktype>] - Locktype for catching a specific lock (for example, locktype=R or locktype=52) v Specify this option with the count=<count> option to specify how many times db2cos is executed during a database manager trap. The default is 255. -cos Instructs the database manager how to invoke the db2cos callout script upon a database manager trap. v Specify this option with the status option to print the status. v Specify this option with the off option to turn off the database manager call to db2cos during a database manager trap. v Specify this option with the on option to turn on the database manager call to db2cos during a database manager trap. v Specify this option with the sleep=<numsec> option to specify how long the database manager sleeps between checks of the size of the output file generated by db2cos. The default is 3 seconds. v Specify this option with the timeout=<numsec> option to specify how long the database manager waits for the output file generated by db2cos to stop growing. The default is 30 seconds. v Specify this option with the count=<count> option to specify how many times db2cos is executed during a database manager trap. The default is 255. v Specify this option with the SQLO_SIG_DUMP option to instruct the database manager to execute db2cos when the SQLO_SIG_DUMP signal is received.
-dbmcfg xml=<0,1> Sets the DBM Config Reserved Bitmap to 0 (default) or 1 (instance has XML data). This option is password protected; the password can be obtained from IBM DB2 Service. -dbcfg xml=<0,1> Sets the Database Config Reserved Bitmap to 0 (default) or 1 (database has XML data). This option is password protected; the password can be obtained from IBM DB2 Service. Related concepts: v db2cos (callout script) output files in Troubleshooting Guide Related reference:
v db2pd - Monitor and troubleshoot DB2 database on page 184
db2perfc -d dbalias
Command parameters: -d Specifies that performance values for DCS databases should be reset.
dbalias Specifies the databases for which the performance values should be reset. If no databases are specified, the performance values for all active databases will be reset. Usage notes: When an application calls the DB2 monitor APIs, the information returned is normally the cumulative values since the DB2 server was started. However, it is often useful to reset performance values, run a test, reset the values again, and then rerun the test. The program resets the values for all programs currently accessing database performance information for the relevant DB2 server instance (that is, the one held in db2instance in the session in which you run db2perfc). Invoking db2perfc also resets the values seen by anyone remotely accessing DB2 performance information when the command is executed. The db2ResetMonitor API allows an application to reset the values it sees locally, not globally, for particular databases. Examples: The following example resets performance values for all active DB2 databases:
db2perfc
The following example resets performance values for specific DB2 databases:
db2perfc dbalias1 dbalias2
The following example resets performance values for all active DB2 DCS databases:
db2perfc -d
The following example resets performance values for specific DB2 DCS databases:
db2perfc -d dbalias1 dbalias2
Related reference: v db2ResetMonitor API - Reset the database system monitor data in Administrative API Reference
Command parameters: -i Registers the DB2 performance counters. -u Deregisters the DB2 performance counters.
Usage notes: The db2perfi -i command will do the following: 1. Add the names and descriptions of the DB2 counter objects to the Windows registry. 2. Create a registry key in the Services key in the Windows registry as follows:
HKEY_LOCAL_MACHINE
  \System
    \CurrentControlSet
      \Services
        \DB2_NT_Performance
          \Performance
            Library=Name of the DB2 performance support DLL
            Open=Open function name, called when the DLL is first loaded
            Collect=Collect function name, called to request performance information
            Close=Close function name, called when the DLL is unloaded
Related tasks: v Registering DB2 with the Windows performance monitor in Administration Guide: Implementation Related reference: v db2perfc - Reset database performance values on page 226 v db2perfr - Performance monitor registration tool on page 229
Command parameters: -r Registers the user name and password. -u Deregisters the user name and password.
Usage notes: v Once a user name and password combination has been registered with DB2, even local instances of the Performance Monitor will explicitly log on using that user name and password. This means that if the user name information registered with DB2 does not match, local sessions of the Performance Monitor will not show DB2 performance information. v The user name and password combination must be maintained to match the user name and password values stored in the Windows security database. If the user name or password is changed in the Windows security database, the user name and password combination used for remote performance monitoring must be reset. v The default Windows Performance Monitor user name, SYSTEM, is a DB2 reserved word and cannot be used. Related tasks: v Enabling remote access to DB2 performance information in Administration Guide: Implementation Related reference: v db2perfc - Reset database performance values on page 226 v db2perfi - Performance counters registration utility on page 228
-r
conservative any
Command parameters: database Specifies an alias name for the database whose packages are to be revalidated. -l Specifies the (optional) path and the (mandatory) file name to be used for recording errors that result from the package revalidation procedure. all Specifies that rebinding of all valid and invalid packages is to be done. If this option is not specified, all packages in the database are examined, but only those packages that are marked as invalid are rebound, so that they are not rebound implicitly during application execution. -u User ID. This parameter must be specified if a password is specified. -p Password. This parameter must be specified if a user ID is specified. -r Resolve. Specifies whether rebinding of the package is to be performed with or without conservative binding semantics. This affects whether new functions and data types are considered during function resolution and type resolution on static DML statements in the package. This option is not supported by DRDA. Valid values are: conservative Only functions and types in the SQL path that were defined before the last explicit bind time stamp are considered for function and type resolution. Conservative binding semantics are used. This is the default. This option is not supported for an inoperative package. any Any of the functions and types in the SQL path are considered for function and type resolution. Conservative binding semantics are not used.
Usage notes:
-db database name -check -reportdir report directory -selective -server dlfm server
Where prefix list is one or more DLFS prefixes delimited by a colon character, for instance prefix1:prefix2:prefix3. Command parameters: -db database name The name of the database containing the tables with DATALINK columns that need to be reconciled. This parameter is required. -check List the tables that might need reconciliation. If you use this parameter, no reconcile operations will be performed. This parameter is required when the -reportdir parameter is not specified. -reportdir Specifies the directory where the utility is to place a report for each of the reconcile operations. For each table on which the reconcile is performed, files of the format <tbschema>.<tbname>.<ext> will be created where v <tbschema> is the schema of the table;
The path in a DATALINK column value is considered to match the prefix list if any of the prefixes in the list are a left-most substring of the path. If this parameter is not used, all prefixes for all Data Links servers that are registered with the specified DB2 database will be reconciled. -server The name of the Data Links server for which the reconcile operation is to be performed. The parameter dlfm server represents an IP hostname. This hostname must exactly match the DLFM server hostname registered with the given DB2 database. Examples:
db2_recon_aid -db STAFF -check db2_recon_aid -db STAFF -reportdir /home/smith db2_recon_aid -db STAFF -check -selective -server dlmserver.services.com -prefixes /dlfsdir1/smith/ db2_recon_aid -db STAFF -reportdir /home/smith -selective -server dlmserver.services.com -prefixes /dlfsdir1/smith/:/dlfsdir2/smith/
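The prefix-matching rule described above (a DATALINK path matches the -prefixes list when any listed prefix is a left-most substring of the path) can be sketched as follows; the paths are invented for illustration:

```python
# Sketch of the -prefixes matching rule: a DATALINK path matches when any
# prefix in the colon-delimited list is a left-most substring of the path.
# The example paths below are invented, not from the original manual.
def matches_prefix_list(path, prefix_list):
    prefixes = [p for p in prefix_list.split(":") if p]
    return any(path.startswith(p) for p in prefixes)

print(matches_prefix_list("/dlfsdir1/smith/a.dat",
                          "/dlfsdir1/smith/:/dlfsdir2/smith/"))  # True
print(matches_prefix_list("/other/a.dat",
                          "/dlfsdir1/smith/:/dlfsdir2/smith/"))  # False
```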
Usage notes: 1. On AIX systems or Solaris Operating Environments, the db2_recon_aid utility is located in the INSTHOME/sqllib/adm directory, where INSTHOME is the home directory of the instance owner. 2. On Windows systems, the utility is located in x:\sqllib\bin directory where x: is the drive where you installed DB2 Data Links Manager. 3. db2_recon_aid can identify all tables in a given database which contain DATALINK columns with the FILE LINK CONTROL column attribute. It is these types of columns which might require file reference validation via the RECONCILE utility. By specifying the -check option, the tables of interest can
Command Reference
Command parameters: -f configFilename Specifies the name of the file containing the configuration information necessary for relocating the database. This can be a relative or absolute file name. The format of the configuration file is:
DB_NAME=oldName,newName
DB_PATH=oldPath,newPath
INSTANCE=oldInst,newInst
NODENUM=nodeNumber
LOG_DIR=oldDirPath,newDirPath
CONT_PATH=oldContPath1,newContPath1
CONT_PATH=oldContPath2,newContPath2
...
STORAGE_PATH=oldStoragePath1,newStoragePath1
STORAGE_PATH=oldStoragePath2,newStoragePath2
...
Where: DB_NAME Specifies the name of the database being relocated. If the database name is being changed, both the old name and the new name must be specified. This is a required field. DB_PATH Specifies the original path of the database being relocated. If the database path is changing, both the old path and new path must be specified. This is a required field. INSTANCE Specifies the instance where the database exists. If the database is being moved to a new instance, both the old instance and new instance must be specified. This is a required field. NODENUM Specifies the node number for the database node being changed. The default is 0. LOG_DIR Specifies a change in the location of the log path. If the log path is being changed, both the old path and new path must be specified. This specification is optional if the log path resides under the database path, in which case the path is updated automatically.
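The configuration-file format above is simple enough to sketch a parser for. This is only an illustration of the key=oldValue,newValue convention described above, not code from db2relocatedb itself:

```python
# Sketch: parsing the db2relocatedb configuration format shown above.
# Repeatable keys (CONT_PATH, STORAGE_PATH) accumulate into lists; a value
# containing a comma becomes an (old, new) pair. Illustrative only.
def parse_relocate_cfg(text):
    cfg = {"CONT_PATH": [], "STORAGE_PATH": []}
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        entry = tuple(value.split(",", 1)) if "," in value else value
        if key in ("CONT_PATH", "STORAGE_PATH"):
            cfg[key].append(entry)
        else:
            cfg[key] = entry
    return cfg

cfg = parse_relocate_cfg(
    "DB_NAME=TESTDB,SERVDB\n"
    "DB_PATH=/home/servinst,/databases\n"
    "INSTANCE=servinst\n"
    "NODENUM=10\n"
)
print(cfg["DB_NAME"])   # ('TESTDB', 'SERVDB')
print(cfg["INSTANCE"])  # servinst
```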
Save the configuration file as relocate.cfg and use the following command to make the changes to the database files:
db2relocatedb -f relocate.cfg
Example 2 To move the database DATAB1 from the instance jsmith on the path /dbpath to the instance prodinst do the following: 1. Move the files in the directory /dbpath/jsmith to /dbpath/prodinst. 2. Use the following configuration file with the db2relocatedb command to make the changes to the database files:
DB_NAME=DATAB1
DB_PATH=/dbpath
INSTANCE=jsmith,prodinst
NODENUM=0
Example 3 The database PRODDB exists in the instance inst1 on the path /databases/PRODDB. The location of two table space containers needs to be changed as follows: v SMS container /data/SMS1 needs to be moved to /DATA/NewSMS1. v DMS container /data/DMS1 needs to be moved to /DATA/DMS1. After the physical directories and files have been moved to the new locations, the following configuration file can be used with the db2relocatedb command to make changes to the database files so that they recognize the new locations:
DB_NAME=PRODDB
DB_PATH=/databases/PRODDB
INSTANCE=inst1
NODENUM=0
CONT_PATH=/data/SMS1,/DATA/NewSMS1
CONT_PATH=/data/DMS1,/DATA/DMS1
Example 4 The database TESTDB exists in the instance db2inst1 and was created on the path /databases/TESTDB. Table spaces were then created with the following containers:
TS1
TS2_Cont0
TS2_Cont1
/databases/TESTDB/TS3_Cont0
/databases/TESTDB/TS4/Cont0
/Data/TS5_Cont0
/dev/rTS5_Cont1
TESTDB is to be moved to a new system. The instance on the new system will be newinst and the location of the database will be /DB2. When moving the database, all of the files that exist in the /databases/TESTDB/ db2inst1 directory must be moved to the /DB2/newinst directory. This means that the first 5 containers will be relocated as part of this move. (The first 3 are relative
Example 5 The database TESTDB has two database partitions on database partition servers 10 and 20. The instance is servinst and the database path is /home/servinst on both database partition servers. The name of the database is being changed to SERVDB and the database path is being changed to /databases on both database partition servers. In addition, the log directory is being changed on database partition server 20 from /testdb_logdir to /servdb_logdir. Since changes are being made to both database partitions, a configuration file must be created for each database partition and db2relocatedb must be run on each database partition server with the corresponding configuration file. On database partition server 10, the following configuration file will be used:
DB_NAME=TESTDB,SERVDB
DB_PATH=/home/servinst,/databases
INSTANCE=servinst
NODENUM=10
On database partition server 20, the following configuration file will be used:
DB_NAME=TESTDB,SERVDB
DB_PATH=/home/servinst,/databases
INSTANCE=servinst
NODENUM=20
LOG_DIR=/testdb_logdir,/servdb_logdir
Example 6 The database MAINDB exists in the instance maininst on the path /home/maininst. The location of four table space containers needs to be changed as follows:
/maininst_files/allconts/C0 needs to be moved to /MAINDB/C0
/maininst_files/allconts/C1 needs to be moved to /MAINDB/C1
/maininst_files/allconts/C2 needs to be moved to /MAINDB/C2
/maininst_files/allconts/C3 needs to be moved to /MAINDB/C3
After the physical directories and files are moved to the new locations, the following configuration file can be used with the db2relocatedb command to make changes to the database files so that they recognize the new locations.
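Following the pattern of Example 3 and the values given above, a configuration file for this example would take a form like the following (reconstructed for illustration, not verbatim from the original):

```
DB_NAME=MAINDB
DB_PATH=/home/maininst
INSTANCE=maininst
NODENUM=0
CONT_PATH=/maininst_files/allconts/C0,/MAINDB/C0
CONT_PATH=/maininst_files/allconts/C1,/MAINDB/C1
CONT_PATH=/maininst_files/allconts/C2,/MAINDB/C2
CONT_PATH=/maininst_files/allconts/C3,/MAINDB/C3
```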
Command parameters: database_alias Specifies the name of the database to be placed in rollforward pending state. If you are using high availability disaster recovery (HADR), the database is reset to a standard database. -log logfile_path Specifies the log file path. Related concepts: v High availability disaster recovery overview in Data Recovery and High Availability Guide and Reference
db2rspgn
Command parameters: -d Specifies the destination directory for a response file and any instance files. This parameter is required. -i Specifies a list of instances for which you want to create a profile. The default is to generate an instance profile file for all instances. This parameter is optional.
-noctlsrv Indicates that an instance profile file will not be generated for the Control Server instance. This parameter is optional. -nodlfm Indicates that an instance profile file will not be generated for the Data Links File Manager instance. This parameter is optional. Related concepts: v The response file generator (Windows) in Installation and Configuration Supplement Related tasks: v Response file installation of DB2 overview (Windows) in Installation and Configuration Supplement
-xml -force
Command parameters: -dbpath path-name Specifies the path on which to create the database. On Windows operating systems, specifies the letter of the drive on which to create the database. The maximum length for path-name is 175 characters. By default, path-name is the default path specified in the database manager configuration file (dftdbpath parameter). -name database-name Specifies a name for the sample database. The database name must adhere to the naming conventions for databases. By default, database-name is SAMPLE. -schema schema-name Specifies the default schema in which to create the database objects. The names of all database objects will be qualified with the schema name. The schema name must adhere to the naming conventions for schemas. By default, schema-name is the value of the CURRENT_SCHEMA special register, which corresponds to the current user's authorization ID. -sql Creates tables, triggers, functions, procedures, and populates the tables with data.
-force Forces the drop and recreation of any existing database in the instance with the same name as specified for the sample database. -verbose Prints status messages to standard output. -quiet Suppresses the printing of status messages to standard output. -? Returns the db2sampl command syntax help.
Default behaviour of db2sampl: When the db2sampl command is issued without any optional arguments, its behaviour depends on whether the database environment is partitioned. In non-partitioned database environments: v Creates a database named SAMPLE with a Unicode (UTF-8) codeset, a UCA400_NO collation, and a C (Canadian) territory in the default database path. v Creates relational database objects including tables, indexes, constraints, triggers, functions, procedures, multi-dimensional clustered tables and materialized query tables. v Populates relational tables with data. v Creates tables with XML data type columns. v Creates indexes over XML data. v Creates an XML schema repository that contains XML schema documents. v All database object names are qualified with the value of the CURRENT_SCHEMA special register. In partitioned database environments: v Creates a database named SAMPLE with the default codeset and collation derived from the operating system environment. v Creates relational database objects including tables, indexes, constraints, triggers, functions, procedures, multi-dimensional clustered tables and materialized query tables. v Populates tables with data. v All database object names are qualified with the value of the CURRENT_SCHEMA special register. Usage notes:
v On Windows operating systems, to create a sample database named mysample on the E: drive containing only SQL database objects in schema myschema and to view status messages, issue:
db2sampl -dbpath E -name mysample -schema myschema -sql -force -verbose
Related tasks: v Creating the sample database in Samples Topics Related reference: v The SAMPLE database in Samples Topics v GET DATABASE MANAGER CONFIGURATION on page 463 v Appendix B, Naming conventions, on page 797 v CREATE DATABASE on page 395
-all
-null
-r instance db-partition-number
-l -lr
-v
-ul -ur
-h -?
Command parameters: variable=value Sets a specified variable to a specified value. To delete a variable, do not specify a value for the specified variable. Changes to settings take effect after the instance has been restarted. -g Accesses the global profile registry variables for all instances pertaining to a particular DB2 copy. -i Specifies the instance profile to use instead of the current, or default.
db-partition-number Specifies a number listed in the db2nodes.cfg file. -gl Accesses the global profile variables stored in LDAP. This option is only effective if the registry variable DB2_ENABLE_LDAP has been set to YES. -all Displays all occurrences of the local environment variables as defined in: v The environment, denoted by [e] v The node level registry, denoted by [n] v The instance level registry, denoted by [i] v The global level registry, denoted by [g].
-r instance Resets the profile registry for the given instance. If no instance is specified, and an instance attachment exists, resets the profile for the current instance. If no instance is specified, and no attachment exists, resets the profile for the instance specified by the DB2INSTANCE environment variable. -n DAS node Specifies the remote DB2 administration server node name. -u user Specifies the user ID to use for the administration server attachment. -p password Specifies the password to use for the administration server attachment. -l Lists all instance profiles for the current DB2 product installation. -lr Lists all supported registry variables. -v Specifies verbose mode. -ul Accesses the user profile variables. This parameter is supported on Windows operating systems only. -ur Refreshes the user profile variables. This parameter is supported on Windows operating systems only. -h/-? Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.
Examples: v Display all defined profiles (DB2 instances) pertaining to a particular installation:
db2set -l
v Display all defined global variables which are visible by all instances pertaining to a particular installation:
db2set -g
v Display all defined values for DB2COMM for the current instance:
db2set -all DB2COMM
v Unset the variable DB2CHKPTR on the remote instance RMTINST through the DAS node RMTDAS using user ID MYID and password MYPASSWD:
db2set -i RMTINST -n RMTDAS -u MYID -p MYPASSWD DB2CHKPTR=
v Set the variable DB2COMM to be TCPIP globally for all instances pertaining to a particular installation:
db2set -g DB2COMM=TCPIP
Usage notes: If no variable name is specified, the values of all defined variables are displayed. If a variable name is specified, only the value of that variable is displayed. To display all the defined values of a variable, specify variable -all. To display all the defined variables in all registries, specify -all. To modify the value of a variable, specify variable=, followed by its new value. To set the value of a variable to NULL, specify variable -null. Changes to settings take effect after the instance has been restarted. To delete a variable, specify variable=, followed by no value. Related reference: v REG_VARIABLES administrative view Retrieve DB2 registry settings in use in Administrative SQL Routines and Views
-r response_file
-? -h
Command parameters: -i language Two-letter language code of the language in which to perform the installation. -l log_file Full path and file name of the log file to use. -t trace_file Generates a file with install trace information. -r response_file Full path and file name of the response file to use. -?, -h Generates usage information.
Usage notes: You must log on as root or use su with the - flag to set the process environment as if you had logged in as root. If the process environment is not set as root, the installation process finishes without errors but you will encounter errors when you run the DB2 copy. Related reference: v setup - Install DB2 on page 305 v Language identifiers for running the DB2 Setup wizard in another language in Quick Beginnings for DB2 Servers
-r outfile ,outfile2 -c
on off
-i
-o options -v
off on
-s
off on
-h
Command parameters: -d dbname An alias name for the database against which SQL statements are to be applied. The default is the value of the DB2DBDFT environment variable. -f file_name Name of an input file containing SQL statements. The default is standard input. Identify comment text with two hyphens at the start of each line, that is, --<comment>. If it is to be included in the output, mark the comment as follows: --#COMMENT <comment>. A block is a number of SQL statements that are treated as one, that is, information is collected for all of those statements at once, instead of one at a time. Identify the beginning of a block of queries as follows: --#BGBLK. Identify the end of a block of queries as follows: --#EOBLK. Specify one or more control options as follows: --#SET <control option> <value>. Valid control options are: ROWS_FETCH Number of rows to be fetched from the answer set. Valid values are -1 to n. The default value is -1 (all rows are to be fetched). ROWS_OUT Number of fetched rows to be sent to output. Valid values are -1 to n. The default value is -1 (all fetched rows are to be sent to output).
Chapter 1. System Commands
complete The time to prepare, execute, and fetch, expressed separately. -o options Control options. Valid options are: f rows_fetch Number of rows to be fetched from the answer set. Valid values are -1 to n. The default value is -1 (all rows are to be fetched). r rows_out Number of fetched rows to be sent to output. Valid values are -1 to n. The default value is -1 (all fetched rows are to be sent to output). -v -s Verbose. Send information to standard error during query processing. The default value is off. Summary Table. Provide a summary of elapsed times and CPU times, containing both the arithmetic and the geometric means of all collected values. Display help information. When this option is specified, all other options are ignored, and only the help information is displayed.
-h
Usage notes: The following can be executed from the db2sql92 command prompt: v All control options v SQL statements v CONNECT statements v commit work v help v quit
SQL statements can be up to 65 535 characters in length. Statements must be terminated by a semicolon. SQL statements are executed with the repeatable read (RR) isolation level. When running queries, result sets that include LOBs are not supported.
Related reference:
v db2batch - Benchmark tool on page 34
[Syntax diagram fragment: -tracefile file-name  -tracelevel TRACE_NONE | TRACE_CONNECTION_CALLS | TRACE_STATEMENT_CALLS | TRACE_RESULT_SET_CALLS | TRACE_DRIVER_CONFIGURATION | TRACE_CONNECTS | TRACE_DRDA_FLOWS | TRACE_RESULT_SET_META_DATA | TRACE_PARAMETER_META_DATA | TRACE_DIAGNOSTICS | TRACE_SQLJ | TRACE_XA_CALLS | TRACE_ALL (default TRACE_SQLJ)  serialized-profile-name]
options-string: DB2-for-z/OS-options | DB2-Database-for-Linux-UNIX-and-Windows-options
DB2-for-z/OS-options:
   OWNER(authorization-ID)
   PATH(schema-name,... | USER)
   QUALIFIER(qualifier-name)
   RELEASE(COMMIT) | RELEASE(DEALLOCATE)
   SQLERROR(NOPACKAGE) | SQLERROR(CONTINUE)
   VALIDATE(RUN) | VALIDATE(BIND)
DB2-Database-for-Linux-UNIX-and-Windows-options:
   OWNER authorization-ID
   QUALIFIER qualifier-name
   QUERYOPT optimization-level
Notes:
1 These options can be specified in any order.
Command parameters:
-help
   Specifies that db2sqljbind describes each of the options that it supports. If any other options are specified with -help, they are ignored.
-url
   Specifies the URL for the data source for which the profile is to be customized. This URL is used if the -automaticbind or -onlinecheck option is YES. The variable parts of the -url value are:
   server
      The domain name or IP address of the MVS system on which the DB2 subsystem resides.
   port
      The TCP/IP server port number that is assigned to the DB2 subsystem. The default is 446.
   database
      A name for the database server for which the profile is to be customized. If the connection is to a DB2 for z/OS server, database is the DB2 location name that is defined during installation. All characters in this value must be uppercase characters. You can determine the location name by executing the following SQL statement on the server:
SELECT CURRENT SERVER FROM SYSIBM.SYSDUMMY1;
If the connection is to a DB2 Database for Linux, UNIX, and Windows server, database is the database name that is defined during installation. If the connection is to an IBM Cloudscape server, the database is the fully-qualified name of the file that contains the database. This name must be enclosed in double quotation marks ("). For example:
"c:/databases/testdb"
program-name is the name of the SQLJ source program, without the extension .sqlj. n is an integer between 0 and m-1, where m is the number of serialized profiles that the SQLJ translator generated from the SQLJ source program. If you specify more than one serialized profile name to bind a single DB2 package from several serialized profiles, you must have specified the same serialized profile names, in the same order, when you ran db2sqljcustomize. Examples:
db2sqljbind -user richler -password mordecai -url jdbc:db2://server:50000/sample -bindoptions "EXPLAIN YES" pgmname_SJProfile0.ser
Usage notes:
Package names produced by db2sqljbind: The names of the packages that are created by db2sqljbind are the names that you specified using the -rootpkgname or -singlepkgname parameter when you ran db2sqljcustomize. If you did not specify -rootpkgname or -singlepkgname, the package names are the first seven bytes of the profile name, appended with the isolation level character.
DYNAMICRULES value for db2sqljbind: The DYNAMICRULES bind option determines a number of run-time attributes for a DB2 package. Two of those attributes are the authorization ID that is used to check authorization, and the qualifier that is used for unqualified objects. To ensure the correct authorization for dynamically executed positioned UPDATE and DELETE statements in SQLJ programs, db2sqljbind always binds the DB2 packages with the DYNAMICRULES(BIND) option. You cannot modify this option. The DYNAMICRULES(BIND) option causes the SET CURRENT SQLID statement and the SET CURRENT SCHEMA statement to have no impact on an SQLJ program, because those statements affect only dynamic statements that are bound with DYNAMICRULES values other than BIND. With DYNAMICRULES(BIND), unqualified table, view, index, and alias names in dynamic SQL statements are implicitly qualified with the value of the bind option QUALIFIER. If you do not specify QUALIFIER, DB2 uses the authorization ID of the package owner as the implicit qualifier. If this behavior is not suitable for your program, you can use one of the following techniques to set the correct qualifier:
v Force positioned UPDATE and DELETE statements to execute statically. You can use the -staticpositioned YES option of db2sqljcustomize or db2sqljbind to do this if the cursor (iterator) for a positioned UPDATE or DELETE statement is in the same package as the positioned UPDATE or DELETE statement.
v Fully qualify DB2 table names in positioned UPDATE and positioned DELETE statements.
Related reference: v BIND on page 355 v db2sqljcustomize - SQLJ profile customizer on page 259 v db2sqljprint - SQLJ profile printer on page 270
[Syntax diagram fragment: -automaticbind YES|NO  -user user-ID  -password password  -pkgversion AUTO|version-id  -bindoptions "options-string"  -storebindoptions  -collection collection-name  -onlinecheck YES|NO  -qualifier qualifier-name  -rootpkgname package-name-stem  -singlepkgname package-name  -longpkgname  serialized-profile-name | file-name.grp]
options-string: DB2-for-z/OS-options | DB2-Database-for-Linux-UNIX-and-Windows-options
DB2-for-z/OS-options:
   OWNER(authorization-ID)
   PATH(schema-name,... | USER)
   QUALIFIER(qualifier-name)
   RELEASE(COMMIT) | RELEASE(DEALLOCATE)
   SQLERROR(NOPACKAGE) | SQLERROR(CONTINUE)
   VALIDATE(RUN) | VALIDATE(BIND)
DB2-Database-for-Linux-UNIX-and-Windows-options:
   OWNER authorization-ID
   QUALIFIER qualifier-name
   QUERYOPT optimization-level
Notes:
1 These options can be specified in any order.
Command parameters:
-help
   Specifies that the SQLJ customizer describes each of the options that the customizer supports. If any other options are specified with -help, they are ignored.
-url
   Specifies the URL for the data source for which the profile is to be customized. A connection is established to the data source that this URL represents if the -automaticbind or -onlinecheck option is specified as YES or defaults to YES. The variable parts of the -url value are:
   server
      The domain name or IP address of the MVS system on which the DB2 subsystem resides.
   port
      The TCP/IP server port number that is assigned to the DB2 subsystem. The default is 446.
   database
      A name for the database server for which the profile is to be customized. If the connection is to a DB2 for z/OS server, database is the DB2 location name that is defined during installation. All characters in this value must be uppercase characters. You can determine the location name by executing the following SQL statement on the server:
SELECT CURRENT SERVER FROM SYSIBM.SYSDUMMY1;
If the connection is to a DB2 Database for Linux, UNIX, and Windows server, database is the database name that is defined during installation. If the connection is to an IBM Cloudscape server, the database is the fully-qualified name of the file that contains the database. This name must be enclosed in double quotation marks ("). For example:
"c:/databases/testdb"
property=value;
   A property for the JDBC connection. For the definitions of these properties, see Properties for the IBM DB2 Driver for JDBC and SQLJ.
-datasource JNDI-name
   Specifies the logical name of a DataSource object that was registered with JNDI. The DataSource object represents the data source for which the profile is to be customized. A connection is established to the data source if the -automaticbind or -onlinecheck option is specified as YES or defaults to YES. Specifying -datasource is an alternative to specifying -url. The DataSource object must represent a connection that uses IBM DB2 Driver for JDBC and SQLJ type 4 connectivity.
-user user-ID
   Specifies the user ID to be used to connect to the data source for online checking or binding a package. You must specify -user if you specify -url. You must specify -user if you specify -datasource, and the DataSource object that JNDI-name represents does not contain a user ID.
-password password
   Specifies the password to be used to connect to the data source for online checking or binding a package. You must specify -password if you specify -url. You must specify -password if you specify -datasource, and the DataSource object that JNDI-name represents does not contain a password.
-automaticbind YES|NO
   Specifies whether the customizer binds DB2 packages at the data source that is specified by the -url parameter. The default is YES. The number of packages and the isolation levels of those packages are controlled by the -rootpkgname and -singlepkgname options. Before the bind operation can work, the following conditions need to be met:
   v TCP/IP and DRDA must be installed at the target data source.
   v Valid -url, -user, and -password values must be specified.
   v The -user value must have authorization to bind a package at the target data source.
-pkgversion AUTO|version-id
   Specifies the package version that is to be used when packages are bound at the server for the serialized profile that is being customized. db2sqljcustomize stores the version ID in the serialized profile and in the DB2 package. Run-time version verification is based on the consistency token, not the version name. To automatically generate a version name that is based on the consistency token, specify -pkgversion AUTO. The default is that there is no version.
-bindoptions options-string
   Specifies a list of options, separated by spaces. These options have the same function as DB2 precompile and bind options with the same names. If you are preparing your program to run on a DB2 for z/OS system, specify DB2 for z/OS options. If you are preparing your program to run on a DB2 Database for Linux, UNIX, and Windows system, specify DB2 Database for Linux, UNIX, and Windows options. Notes on bind options:
If -longpkgname is not specified, package-name-stem must be an alphanumeric string of seven or fewer bytes.
Table 1 shows the parts of a generated package name and the number of bytes for each part. The maximum length of a package name is maxlen. maxlen is 8 if -longpkgname is not specified. maxlen is 128 if -longpkgname is specified.
Table 1. Parts of a package name that is generated by db2sqljcustomize

Package name part         Number of bytes                              Value
Bytes-from-program-name   m = min(Length(program-name),                First m bytes of program-name, in uppercase
                          maxlen - 1 - Length(IDNumber))
IDNumber                  Length(IDNumber)                             IDNumber
PkgIsolation              1                                            1, 2, 3, or 4. This value represents the
                                                                       transaction isolation level for the package.
                                                                       See Table 2.
Table 2 shows the values of the PkgIsolation portion of a package name that is generated by db2sqljcustomize.
Table 2. PkgIsolation values and associated isolation levels

PkgIsolation value   Isolation level for package
1                    Uncommitted read (UR)
2                    Cursor stability (CS)
3                    Read stability (RS)
4                    Repeatable read (RR)
Example: Suppose that a profile name is ThisIsMyProg_SJProfile111.ser. The db2sqljcustomize option -longpkgname is not specified. Therefore, Bytes-from-program-name is the first four bytes of ThisIsMyProg, translated to uppercase, or THIS. IDNumber is 111. The four package names are:
THIS1111 THIS1112 THIS1113 THIS1114
Example: Suppose that a profile name is ThisIsMyProg_SJProfile111.ser. The db2sqljcustomize option -longpkgname is specified. Therefore, Bytes-from-program-name is ThisIsMyProg, translated to uppercase, or THISISMYPROG. IDNumber is 111. The four package names are:
THISISMYPROG1111 THISISMYPROG1112 THISISMYPROG1113 THISISMYPROG1114
Example: Suppose that a profile name is A_SJProfile0.ser. Bytes-from-program-name is A. IDNumber is 0. Therefore, the four package names are:
A01 A02 A03 A04
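The rules in Table 1 and Table 2 can be expressed as a short computation. The following Python sketch reproduces the three examples above; the helper function is an illustration only, not a DB2 API:

```python
# Illustrative reconstruction of the db2sqljcustomize package-naming
# rules from Tables 1 and 2; this function is not part of DB2.

ISOLATION = {1: "UR", 2: "CS", 3: "RS", 4: "RR"}  # PkgIsolation digits

def package_names(profile_name, longpkgname=False):
    """Derive the four package names generated for one serialized profile."""
    # Profile names have the form program-name_SJProfileN[.ser].
    stem = profile_name[:-4] if profile_name.endswith(".ser") else profile_name
    program_name, id_number = stem.rsplit("_SJProfile", 1)
    maxlen = 128 if longpkgname else 8
    # First m bytes of the program name, uppercased (Table 1).
    m = min(len(program_name), maxlen - 1 - len(id_number))
    root = program_name[:m].upper() + id_number
    # One package per isolation level digit (Table 2).
    return [root + str(iso) for iso in sorted(ISOLATION)]

package_names("ThisIsMyProg_SJProfile111.ser")        # THIS1111 .. THIS1114
package_names("ThisIsMyProg_SJProfile111.ser", True)  # THISISMYPROG1111 ..
package_names("A_SJProfile0.ser")                     # A01 .. A04
```

This also makes the collision risk noted below concrete: distinct long program names that share their first few bytes map to the same root unless -rootpkgname is used.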
Letting db2sqljcustomize generate package names is not recommended. If any generated package names are the same as the names of existing packages, db2sqljcustomize overwrites the existing packages. To ensure uniqueness of package names, specify -rootpkgname.
-longpkgname
   Specifies that the names of the DB2 packages that db2sqljcustomize generates can be up to 128 bytes. Use this option only if you are binding packages at a server that supports long package names. If you specify -singlepkgname or -rootpkgname, you must also specify -longpkgname under the following conditions:
   v The argument of -singlepkgname is longer than eight bytes.
   v The argument of -rootpkgname is longer than seven bytes.
-staticpositioned NO|YES
   For iterators that are declared in the same source file as positioned UPDATE statements that use the iterators, specifies whether the positioned UPDATEs are executed as statically bound statements. The default is NO. NO means that the positioned UPDATEs are executed as dynamically prepared statements.
-tracefile file-name
   Enables tracing and identifies the output file for trace information. This option should be specified only under the direction of IBM Software Support.
-tracelevel
   If -tracefile is specified, indicates what to trace while db2sqljcustomize runs. The default is TRACE_SQLJ. This option should be specified only under the direction of IBM Software Support.
serialized-profile-name|file-name.grp
   Specifies the names of one or more serialized profiles that are to be customized. A serialized profile name is of the following form:
program-name_SJProfileIDNumber.ser
You can specify the serialized profile name with or without the .ser extension. program-name is the name of the SQLJ source program, without the extension .sqlj. n is an integer between 0 and m-1, where m is the number of serialized profiles that the SQLJ translator generated from the SQLJ source program.
Usage notes: Online checking is always recommended: It is highly recommended that you use online checking when you customize your serialized profiles. Online checking determines information about the data types and lengths of DB2 host variables, and is especially important for the following items: v Predicates with java.lang.String host variables and CHAR columns Unlike character variables in other host languages, Java String host variables are not declared with a length attribute. To optimize a query properly that contains character host variables, DB2 needs the length of the host variables. For example, suppose that a query has a predicate in which a String host variable is compared to a CHAR column, and an index is defined on the CHAR column. If DB2 cannot determine the length of the host variable, it might do a table space scan instead of an index scan. Online checking avoids this problem by providing the lengths of the corresponding character columns. v Predicates with java.lang.String host variables and GRAPHIC columns Without online checking, DB2 might issue a bind error (SQLCODE -134) when it encounters a predicate in which a String host variable is compared to a GRAPHIC column. v Column names in the result table of an SQLJ SELECT statement at a remote server: Without online checking, the driver cannot determine the column names for the result table of a remote SELECT. Customizing multiple serialized profiles together: Multiple serialized profiles can be customized together to create a single DB2 package. If you do this, and if you
Command parameters: profilename Specifies the relative or absolute name of an SQLJ profile file. When an SQLJ file is translated into a Java source file, information about the SQL operations it contains is stored in SQLJ-generated resource files called profiles. Profiles are identified by the suffix _SJProfileN (where N is an integer) following the name of the original input file. They have a .ser extension. Profile names can be specified with or without the .ser extension. Examples:
db2sqljprint pgmname_SJProfile0.ser
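The profile-name convention described above (program-name_SJProfileN, with an optional .ser extension) can be checked with a small pattern. The following is a Python sketch; the helper is an illustration, not part of db2sqljprint:

```python
# Illustrative validator for SQLJ serialized-profile names of the form
# program-name_SJProfileN[.ser]; this helper is not part of DB2.
import re

PROFILE_RE = re.compile(r"^(?P<program>.+)_SJProfile(?P<n>\d+)(?:\.ser)?$")

def profile_parts(name):
    """Split a profile name into (program-name, N); reject other names."""
    m = PROFILE_RE.match(name)
    if not m:
        raise ValueError("not an SQLJ serialized profile name: " + name)
    return m.group("program"), int(m.group("n"))

profile_parts("pgmname_SJProfile0.ser")  # ('pgmname', 0)
profile_parts("pgmname_SJProfile0")      # extension is optional
```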
Related reference: v db2sqljcustomize - SQLJ profile customizer on page 259 v db2sqljbind - SQLJ profile binder on page 252
[Syntax diagram fragment: -co  -f  -g  -h  -l  -m  -n  -q  -ro  -s  -v  -x]
Notes: 1. There is no separate option for invoking this tool in optimizer mode. 2. The db2support tool collects bad query-related information only if -st, -sf, or -se options are specified. In case there is an error or trap during optimization, -cl 0 (collect level zero) should be used to collect all catalog tables and db2look table definitions without trying to explain a bad query. One of the four options mentioned here must be specified to work with optimizer problems.
-co Collect catalogs for all tables in the database. The default is to collect catalog information only for the tables used in a query that has a problem.
-cs or -curschema
   Specifies the value of the current schema to use to qualify any unqualified table names in the statement. The default value is the authorization ID of the current session user.
-d database_name or -database database_name
   Specifies the name of the database for which data is being collected.
-f or -flow
   Ignores pauses when the user is prompted to press the Enter key to continue. This option is useful when running or calling the db2support tool from a script or some other automated procedure where unattended execution is desired.
-fp or -funcpath
   Specifies the value of the function path special register to be used to resolve unqualified user-defined functions and types. The default value is SYSIBM, SYSFUN, SYSPROC, X, where X is the value of the USER special register, delimited by double quotation marks.
-g or -get_dump
   Specifies that all files in a dump directory, excluding core files, are to be captured.
-h or -help
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.
The db2support tool stores the query in the optimizer directory by copying the query into the file called bad_query.sql. v As an SQL statement stored in a file.
db2support <output_directory> -d <database name> -sf <sql_file>
The file containing the query is copied by the tool into the optimizer directory. v As a file containing an embedded static SQL statement with the query having the problem.
db2support <output_directory> -d <database name> -se <embedded_sql_file>
The file containing the query is copied by the tool into the optimizer directory. The file does not need to be in the current directory, but it must be readable by the invoking user ID.
v While returning different levels of performance information.
db2support <output_directory> -d <database name> -collect 0
The db2support tool collects different levels of performance information based on the level of detail requested. The values 0 to 3 collect increasing amounts of detail. Catalog information and table definitions to enable you to reproduce the database objects for a production database are collected when a level of 0 is used. To collect information to diagnose a slow query using optimizer-related special registers that were set by default, use:
db2support . -d sample -st "SELECT * FROM EMPLOYEE"
This example returns all the data to the db2support.zip file. Diagnostic files are created in the current directory and its subdirectories (since . is specified as the output path). The system information and diagnostic files are collected as well. To collect the same information shown in the previous example but with the user-specified values for the optimizer-related special registers, use:
db2support . -d sample -st "SELECT * FROM EMPLOYEE" -cs db2usr -cd 3 -ol 5 -ra ANY -fp MYSCHEMA -op MYPROFSCHEMA.MYPROFILE -ot ALL -il CS
Command parameters: -l Displays a list of DB2 database product installations on the system.
-d installation-name
   Sets the default DB2 copy.
Related reference:
v db2iupdt - Update instances on page 135
Command parameters: -t Displays a graphical user interface that allows an administrator to change either the application version or synchronization credentials for a satellite.
-s application_version Sets the application version on the satellite. -g Displays the application version currently set on the satellite.
Related reference: v db2SyncSatellite API - Start satellite synchronization in Administrative API Reference v db2SyncSatelliteStop API - Pause satellite synchronization in Administrative API Reference
Command parameters: +auto Start db2systray automatically for the specified instance when the Windows operating system starts. db2systray can also be configured to launch automatically by enabling the Launch Tool at Startup db2systray menu option. -auto Disable db2systray from starting automatically for the specified instance when the Windows operating system starts.
instance-name
   Name of the DB2 instance to be monitored. If no instance name is specified, db2systray will monitor the default local DB2 instance. If no instance exists, or the specified instance is not found, db2systray will exit quietly.
-clean
   Clean up all registry entries for all DB2 instances monitored by db2systray and stop all running db2systray.exe processes.
Examples:
1. C:\SQLLIB\bin> db2systray
   Starts db2systray for the default DB2 instance specified by the DB2INSTANCE environment variable.
2. C:\SQLLIB\bin> db2systray DB2INST1
   Starts db2systray for the instance named DB2INST1.
3. C:\SQLLIB\bin> db2systray +auto
   Starts db2systray for the default DB2 instance, and configures db2systray to start monitoring this instance automatically when the Windows operating system starts.
4. C:\SQLLIB\bin> db2systray +auto DB2INST1
   Starts db2systray for the instance named DB2INST1, and configures db2systray to start monitoring this instance automatically when the Windows operating system starts.
5. C:\SQLLIB\bin> db2systray -auto
   Disables the auto start option for the default instance defined by the DB2INSTANCE environment variable.
6. C:\SQLLIB\bin> db2systray -auto DB2INST1
   Disables the auto start option for instance DB2INST1.
7. C:\SQLLIB\bin> db2systray -clean
   Removes all registry entries created by db2systray and stops all running db2systray.exe processes. If db2systray.exe processes are running for other installed DB2 copies, they will not be cleaned up. You must execute db2systray -clean from the SQLLIB\bin directory of each DB2 copy you wish to clean up.
[Syntax diagram fragment: STORE | DOUBLE STORE store option clause  RETRIEVE retrieve option clause  SHOW TAPE HEADER tape device  EJECT TAPE tape device  DELETE TAPE LABEL tape label  QUERY for rollforward clause  USING blocksize  EJECT  TO directory  history file]
Command parameters:
Command parameters: tablespace-state A hexadecimal table space state value. Examples: The request db2tbst 0x0000 produces the following output:
State = Normal
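Table space states are reported as hexadecimal bit-flag values, and the example shows that 0x0000 maps to Normal. The sketch below decodes a state value against a caller-supplied bit map; the two sample flags are placeholders for illustration only, not the authoritative DB2 state table:

```python
# Illustrative decoder for hexadecimal table space state values.
# Only 0x0000 -> "Normal" comes from the db2tbst example above; the
# flags below are placeholders, NOT the authoritative DB2 state table.

SAMPLE_FLAGS = {
    0x1: "Flag A (placeholder)",
    0x2: "Flag B (placeholder)",
}

def decode_state(value, flags=SAMPLE_FLAGS):
    """Return the state names whose bits are set in value."""
    if value == 0:
        return ["Normal"]
    return [name for bit, name in sorted(flags.items()) if value & bit]

decode_state(0x0000)  # ['Normal'], matching the example output above
```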
db2trc - Trace
Controls the trace facility of a DB2 instance or the DB2 Administration Server. The trace facility records information about operations and formats this information into readable form. Enabling the trace facility might impact your system's performance. As a result, only use the trace facility when directed by a DB2 technical support representative.
Authorization:
To trace a DB2 instance on a UNIX operating system, one of the following:
v sysadm
v sysctrl
v sysmaint
To trace the DB2 Administration Server on a UNIX operating system:
v dasadm
On a Windows operating system, no authorization is required.
Required connection: None
Command syntax:
db2trc [db2 | das] on [-f filename] [-Madd trace-mask | -Mdel trace-mask | -Mset trace-mask | -Mfile mask-filename | -Mtheme theme-name] [-p pid[.tid],...] [-l buffer_size | -i buffer_size]
db2trc [db2 | das] off
db2trc [db2 | das] change [-Madd trace-mask | -Mdel trace-mask | -Mset trace-mask | -Mfile mask-filename | -Mtheme theme-name]
db2trc [db2 | das] dmp filename
db2trc [db2 | das] flw dump_file output_file
db2trc [db2 | das] fmt dump_file output_file
db2trc [db2 | das] clr
Command parameters:
db2
   Specifies that all trace operations will be performed on the DB2 instance. This is the default.
das
   Specifies that all trace operations will be performed on the DB2 Administration Server.
on
   Use this parameter to start the trace facility.
   -f filename
      Specifies that trace information should be continuously written to the specified file, until db2trc is turned off. Using this option can generate an extremely large dump file. Use this option only when instructed by DB2 Support.
   -M<action> trace-mask
      Enables a trace mask, which limits the operations recorded by the trace facility. Trace masks are provided by DB2 technical support representatives as necessary. The possible values for <action> are:
      add
         Adds a trace mask element.
      del
         Deletes a trace mask element.
      set
         Sets the trace mask to a specific value. All previous contents of the mask are lost.
   -Mfile mask-filename
      Loads a list of trace mask actions from the file mask-filename.
-Mtheme theme-name
   Loads a trace mask theme from a theme file. By default, the theme file in ~/sqllib/cfg is used.
-p pid.tid
   Enables the trace facility only for the specified process IDs (pid) and thread IDs (tid). The period (.) must be included if a tid is specified. A maximum of five pid.tid combinations is supported. For example, to enable tracing for processes 10, 20, and 30, the syntax is:
db2trc on -p 10,20,30
To enable tracing only for thread 33 of process 100 and thread 66 of process 200 the syntax is:
db2trc on -p 100.33,200.66
-l [buffer_size] | -i [buffer_size]
   This option specifies the size and behavior of the trace buffer. -l specifies that the last trace records are retained (that is, the first records are overwritten when the buffer is full). -i specifies that the initial trace records are retained (that is, no more records are written to the buffer once it is full). The buffer size can be specified in either bytes or megabytes. To specify the buffer size in megabytes, add the character m to the buffer size. For example, to start db2trc with a 4-megabyte buffer:
db2trc on -l 4m
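The -p list format above and the buffer-size format (optional m suffix for megabytes, power of two, 1 MB minimum, as described in this section) can be validated mechanically. The following is a Python sketch of those rules; the helper functions are illustrative and not part of the db2trc tool:

```python
# Illustrative parsers for two db2trc argument formats described in this
# section; these helpers are not part of the db2trc tool itself.

def parse_pid_tid(arg):
    """Parse '-p' arguments such as '10,20,30' or '100.33,200.66'
    into (pid, tid-or-None) pairs; at most five are allowed."""
    pairs = []
    for item in arg.split(","):
        pid, dot, tid = item.partition(".")
        pairs.append((int(pid), int(tid) if dot else None))
    if len(pairs) > 5:
        raise ValueError("a maximum of five pid.tid combinations is supported")
    return pairs

def parse_buffer_size(arg):
    """Parse a '-l'/'-i' buffer size in bytes, or megabytes with an 'm'
    suffix. Must be a power of two and at least 1 MB."""
    n = int(arg[:-1]) * 1024 * 1024 if arg.lower().endswith("m") else int(arg)
    if n < 1024 * 1024 or n & (n - 1):
        raise ValueError("buffer size must be a power of two and >= 1 MB")
    return n

parse_pid_tid("100.33,200.66")  # [(100, 33), (200, 66)]
parse_buffer_size("4m")         # 4194304 bytes
```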
The default and maximum trace buffer sizes vary by platform. The minimum buffer size is 1 MB. The buffer size must be a power of 2.
off
   Use this parameter to stop the trace facility.
change
   Changes values associated with a trace mask on a trace that is already running. Trace masks are provided by DB2 technical support representatives as necessary.
   -Madd trace-mask
      Adds a trace mask element.
   -Mdel trace-mask
      Deletes a trace mask element.
   -Mset trace-mask
      Sets the trace mask to a specific value. All previous contents of the mask are lost.
   -Mfile mask-filename
      Loads a list of trace mask actions from the file mask-filename.
   -Mtheme theme-name
      Loads a trace mask theme from a theme file.
dmp
   Dumps the trace information to a file. The following command will put the information in the current directory in a file called db2trc.dmp:
db2trc dmp db2trc.dmp
After the trace is dumped to a file, stop the trace facility by typing:
db2trc off
Specify a file name with this parameter. The file is saved in the current directory unless the path is explicitly specified.
flw | fmt
   After the trace is dumped to a binary file, format it into a readable text file. Use either the flw option (to format records sorted by process or thread), or the fmt option (to format records chronologically). For either option, specify the name of the dump file and the name of the output file that will be generated. For example:
db2trc flw db2trc.dmp db2trc.flw
clr
   Clears the contents of the trace buffer. This option can be used to reduce the amount of collected information. This option has no effect when tracing to a file.
Usage notes: The db2trc command must be issued several times to turn tracing on, produce a dump file, format the dump file, and turn tracing off again. The parameter list shows the order in which the parameters should be used. The default and maximum trace buffer sizes vary by platform. The minimum buffer size is 1 MB. When tracing the database server, it is recommended that the trace facility be turned on prior to starting the database manager. Related concepts: v Dumping a DB2 trace file in Troubleshooting Guide v Formatting a DB2 trace file in Troubleshooting Guide v Obtaining a DB2 trace using db2trc in Troubleshooting Guide Related reference: v db2drdat - DRDA trace on page 86 v SkipTrace CLI/ODBC configuration keyword in Call Level Interface Guide and Reference, Volume 1
[Syntax diagram fragment: -o filename  -h]
Command parameters:
-d database-name
   The name of the database to be queried.
-u table-schema
   Specifies the schema (creator user ID) of the tables that are to be processed. The default action is to process tables created by all user IDs.
-t table-name
   The name of a table that is to be processed. The default action is to process all tables.
-o filename
   The name of a file to which output is to be written. The default action is to write output to standard output.
-h
   Display help information. When this option is specified, all other options are ignored, and only the help information is displayed.
Usage notes: It is not necessary to use this tool unless there are indexes in the database that were created on a database running on a version of DB2 earlier than Version 5. This tool was not designed to handle certain types of names. If a specific table name or table schema is a delimited identifier containing lowercase characters, special characters, or blanks, it is preferable to request processing of all tables or schemas. The resulting output can be edited. Related reference: v Conversion of type-1 indexes in migrated databases in Migration Guide
Command parameters:
-d dbname
   Specifies a database name whose maximum length is 8 characters.
-h Displays help for the command.
-o outfile
   Outputs the revoke statements to the specified file. The file name must be no longer than 80 characters.
-r Performs the revoke operation.
Usage notes:
At least one of the -r or -o options must be specified.
Related reference:
v Changes to the EXECUTE privilege on PUBLIC for migrated routines in Migration Guide
Command parameters: Running the db2unins command without any of the -?, -d, -p or -u parameters removes all DB2 database products under the current installation directory.
-d  Displays the products that are installed in the current DB2 copy on the system. This option is only available when executed from an installed copy of a DB2 database product.
-f  Performs a brute force uninstallation of all DB2 database products on the system. The db2unins -f command can be issued from either the installation media or an installed copy on your machine. Your system will reboot when you successfully issue db2unins -f. It can only be issued if there are no DB2 products prior to version 9 installed on the system.
-p products
    Specifies the products that should be uninstalled, where products is a semicolon-separated list of the abbreviations for DB2 database products enclosed in double quotes. For example, -p "ESE;PE;QP". This option is only available when executed from an installed copy of a DB2 database product.
-u response-file
    Performs an uninstallation based on what is specified in response-file. This option is also used to perform a silent uninstallation and is only available when executed from an installed copy of a DB2 database product.
-y  Ensures that no confirmation is done during the uninstallation process.
-l  Specifies the location of the log file.
-t  Turns on the trace functionality. The trace file will be used for debugging problems with the db2unins command.
-?  Displays help for the db2unins command.
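For example, a hedged sketch of uninstalling two products without confirmation, logging to a hypothetical file:

   db2unins -p "ESE;QP" -y -l c:\db2unins.log

A silent uninstallation driven by a hypothetical response file would be:

   db2unins -u c:\db2un.rsp -l c:\db2unins.log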
Related tasks:
Command parameters:
-f filename
    Specifies the fully qualified name of the table space container from which the DB2 tag is to be removed.
Usage notes: An SQLCODE -294 (Container in Use error) is sometimes returned from create database or from create or alter table space operations, usually indicating a specification error on the operating system resource name when the container is already in use by another table space. A container can be used by only one table space at a time. A system or database administrator who finds that the database that last used the container has been deleted can use the db2untag tool if the container's tag was not removed. If the container is to be released, do one of the following: v For SMS containers, remove the directory and its contents using the appropriate delete commands. v For DMS raw containers, either delete the file or device, or let db2untag remove the container tag. The tool will leave such a DMS container otherwise unmodified. Related reference: v CREATE DATABASE on page 395
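Example (supplementing the entry above): assuming a hypothetical DMS raw device container that was left tagged by a deleted database, the tag could be removed with:

   db2untag -f /dev/rhdisk5

After the tag is removed, the device can be reused as a container by another table space.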
Command parameters:
/p path
    A semicolon (;) separated path that points to the location or locations where the binary files and PDB files are located.
/v  Displays version information.
/n  Formats data without regard to line number information.
infile
    Specifies the input file.
outfile
    Specifies the output file.
Examples: If a trap file called DB30882416.TRP had been produced in your DIAGPATH, you could format it as follows:
db2xprt DB30882416.TRP DB30882416.FMT
Related concepts: v Trap files in Troubleshooting Guide Related tasks: v Formatting trap files (Windows) in Troubleshooting Guide
Command parameters:
-a  Removes the Information Center from its current location.
-l log-file
    Specifies the log file. The default log file is /tmp/doce_deinstall.log$$, where $$ is the process ID.
-t trace-file
    Turns on the debug mode. The debug information is written to the file name specified as trace-file.
-h/-?
    Displays usage information.
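For example, a hedged sketch that removes the Information Center and writes the log to a hypothetical location (run from the directory where doce_deinstall resides, which is not shown in this excerpt):

   ./doce_deinstall -a -l /tmp/my_doce_deinstall.log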
Related concepts: v DB2 Information Center installation options in Quick Beginnings for DB2 Servers Related tasks: v Removing DB2 products using the db2_deinstall or doce_deinstall command (Linux and UNIX) in Quick Beginnings for DB2 Servers v Uninstalling your DB2 product (Linux and UNIX) in Quick Beginnings for DB2 Servers Related reference: v db2_deinstall - Uninstall DB2 products or features on page 10 v doce_install - Install DB2 Information Center on page 297
Command syntax:
doce_install [-b install-path] [-p productID] [-c image-location] [-n] [-L language] [-l log-file] [-t trace-file] [-h | -?]
Command parameters: -b install-path Specifies the path where the DB2 Information Center is to be installed. install-path must be a full path name and its maximum length is limited to 128 characters. The default installation path is /opt/ibm/db2ic/V9. This parameter is mandatory when the -n parameter is specified. -p productID Specifies the productID of the DB2 Information Center. productID does not require DB2 as a prefix. This parameter is mandatory when the -n parameter is specified. -c image-location Specifies the product image location. To indicate multiple image locations, specify this parameter multiple times. For example, -c CD1 -c CD2. This parameter is only mandatory if the -n parameter is specified, your install requires more than one CD, and your images are not set up for automatic discovery. Otherwise, you are prompted for the location of the next CD at the time it is needed. For details on automatic discovery associated with multiple installation images, see Multiple CD installation (Linux and UNIX). -n Specifies non-interactive mode.
-L language Specifies national language support. The default is English. To install multiple languages at the same time, this parameter can be specified multiple times. For example, to install both English and German, specify -L EN -L DE.
Examples: v To install from an image in /mnt/cdrom, and to be prompted for all needed input, issue:
cd /mnt/cdrom
./doce_install
v To install DB2 Information Center to /db2/v9.1, from an image in /mnt/cdrom, non-interactively in English, issue:
cd /mnt/cdrom
./doce_install -p doce -b /db2/v9.1 -n
Related concepts: v Multiple CD installation (Linux and UNIX) in Quick Beginnings for DB2 Servers Related tasks: v Installing the DB2 Information Center using the DB2 Setup wizard (Linux) in Quick Beginnings for DB2 Servers Related reference: v db2_install - Install DB2 product on page 11 v doce_deinstall - Uninstall DB2 Information Center on page 296
disable_MQFunctions
Purpose:
Disables the use of DB2 WebSphere MQ functions for the specified database.
Authorization:
One of the following:
v sysadm
v dbadm
v IMPLICIT_SCHEMA on the database, if the implicit or explicit schema name of the function does not exist
v CREATEIN privilege on the schema, if the schema name DB2MQ or DB2MQ1C exists
Format:
disable_MQFunctions -n database -u userid -p password
Parameters:
-n database
    Specifies the name of the database.
-u userid
    Specifies the user ID used to connect to the database.
-p password
    Specifies the password for the user ID.
-v  Optional. This is used for transactional and non-transactional user-defined function support. The values can be all, 0pc, or 1pc. When you specify 0pc, the disablement deletes from schema db2mq. If you specify 1pc, the disablement deletes from schema db2mq1c. If you specify all, the disablement deletes from both schemas (db2mq and db2mq1c). If you do not specify this option, the disablement defaults to all.
Example: In the following example, DB2MQ and DB2MQ1C functions are disabled for the database SAMPLE.
disable_MQFunctions -n sample -u user1 -p password1
Related concepts: v How to use WebSphere MQ functions within DB2 in Application Development Guide for Federated Systems
Related reference: v db2mqlsn - MQ listener on page 165 v enable_MQFunctions on page 301 v MQPUBLISH scalar function in Administrative SQL Routines and Views v MQREAD scalar function in Administrative SQL Routines and Views v MQREADALL table function in Administrative SQL Routines and Views v MQREADALLCLOB table function in Administrative SQL Routines and Views v v v v v v v v MQREADCLOB scalar function in Administrative SQL Routines and Views MQRECEIVE scalar function in Administrative SQL Routines and Views MQRECEIVEALL table function in Administrative SQL Routines and Views MQRECEIVEALLCLOB table function in Administrative SQL Routines and Views MQRECEIVECLOB scalar function in Administrative SQL Routines and Views MQSEND scalar function in Administrative SQL Routines and Views MQSUBSCRIBE scalar function in Administrative SQL Routines and Views MQUNSUBSCRIBE scalar function in Administrative SQL Routines and Views
enable_MQFunctions
Enables DB2 WebSphere MQ functions for the specified database and validates that the DB2 WebSphere MQ functions can be executed properly. The command fails if WebSphere MQ and WebSphere MQ AMI have not been installed and configured.
Authorization:
One of the following:
v sysadm
v dbadm
v IMPLICIT_SCHEMA on the database, if the implicit or explicit schema name of the function does not exist
v CREATEIN privilege on the schema, if the schema name DB2MQ or DB2MQ1C exists
Command syntax:
enable_MQFunctions -n database -u userid -p password [-q queuemanager] [-force] [-novalidate]
Command parameters:
-n database
    Specifies the name of the database that you want to enable.
-u userid
    Specifies the user ID to connect to the database.
-p password
    Specifies the password for the user ID.
-q queuemanager
    Optional. The queue manager name that supports the transactional MQ user-defined functions. If you do not specify a name, it is the default queue manager, DB2MQ_DEFAULT_MQM. If you use this option, the function assumes the use of a -novalidate parameter.
-force
    Optional. The use of this option allows the utility program to ignore the existing MQ UDFs. In other words, the program drops any existing functions, before recreating any MQ UDFs. Without this option, the command will not proceed after it finds that the MQ UDFs already exist.
-novalidate
    Optional. This specifies that there will not be any validation of the DB2 MQSeries functions.
-v  Optional. This is used for transactional and non-transactional user-defined function support. The values can be all, 0pc, or 1pc. When you specify 0pc, the enablement creates schema db2mq. If you specify 1pc, the enablement creates schema db2mq1c. If you specify all, the enablement creates all schemas under user-defined functions (db2mq and db2mq1c). If you do not specify this option, the enablement defaults to all.
Examples:
The following example enables the transactional and non-transactional user-defined functions. The user connects to the database SAMPLE.
enable_MQFunctions -n sample -u user1 -p password1
In the next example, the user connects to the database SAMPLE. The example creates DB2MQ1C functions with schema DB2MQ1C.
enable_MQFunctions -n sample -u user1 -p password1 -v 1pc
Usage notes:
The DB2 MQ user-defined functions run under the schemas DB2MQ or DB2MQ1C, which are automatically created by this command. Before executing this command:
v Ensure that WebSphere MQ and WebSphere Application Messaging Interface (AMI) are installed, and that the version of WebSphere MQ is 5.1 or higher.
v Ensure that the environment variable $AMT_DATA_PATH is defined.
v If you want to use transactional MQ UDFs, make sure that the database is configured for federated operations. Do this with the following command:
update dbm cfg using federated yes
v On Windows, change the directory to the cfg subdirectory of the DB2PATH.
On UNIX:
v Use db2set to add AMT_DATA_PATH to the DB2ENVLIST.
v Ensure that the user account associated with UDF execution is a member of the mqm group.
v Ensure that the user who will be calling this command is a member of the mqm group.
Note: AIX 4.2 is not supported by MQSeries 5.2.
Related concepts: v How to use WebSphere MQ functions within DB2 in Application Development Guide for Federated Systems
Related reference: v MQREADALL table function in Administrative SQL Routines and Views v db2mqlsn - MQ listener on page 165 v disable_MQFunctions on page 299 v MQPUBLISH scalar function in Administrative SQL Routines and Views v MQREAD scalar function in Administrative SQL Routines and Views v MQREADALLCLOB table function in Administrative SQL Routines and Views v MQREADCLOB scalar function in Administrative SQL Routines and Views v MQRECEIVE scalar function in Administrative SQL Routines and Views v MQRECEIVEALL table function in Administrative SQL Routines and Views v MQRECEIVEALLCLOB table function in Administrative SQL Routines and Views v MQRECEIVECLOB scalar function in Administrative SQL Routines and Views v MQSEND scalar function in Administrative SQL Routines and Views v MQSUBSCRIBE scalar function in Administrative SQL Routines and Views v MQUNSUBSCRIBE scalar function in Administrative SQL Routines and Views
Command syntax:
installFixPack [-b base-install-path] [-c image-location] [-f] [-l log-file] [-t trace-file] [-h | -?]
-b base-install-path
    Specifies the path where the DB2 product that needs to be updated is installed. The path must be a full path name; its length is limited to 128 characters.
-c image-location
    Specifies the image location. Mandatory when a single Fix Pack image spans more than one CD. To indicate multiple image locations, this parameter can be specified multiple times. For example, to indicate that the images are located on multiple CDs, specify -c CD1 -c CD2.
-f  Force option. This option forces a lower level fix pack image on top of a higher level fix pack image of the installed DB2 product, or refreshes installed DB2 products to the same level. If the fix pack image is at a higher level than the installed DB2 product, this option is ignored.
-l log-file
    Specifies the log file. The default log file is /tmp/installFixPack.log$$, where $$ is the process ID.
-t trace-file
    Turns on the debug mode. The debug information is written to the file name specified.
-h/-?
    Displays usage information.
Examples: v To perform an interactive update from GA to FP5 when DB2 ESE German is installed on /opt/ibm/db2/V9.1, from the FP5 image, issue:
./installFixPack
Chapter 1. System Commands
v If for any reason the installed DB2 product files get corrupted, instead of uninstalling and installing again to refresh the installation, issue:
./installFixPack -f -b full_path_where_DB2_product_installed
Related tasks: v Applying fix packs in Quick Beginnings for DB2 Servers
Command syntax:
setup [-c] [-f] [-i language] [-l log-file] [-m] [-p install-directory] [-t trace-file] [-u response-file] [-n DB2-copy-name] [-? | -h]
Command parameters:
-c  Ensures that setup.exe exits immediately after starting the installation. By selecting this option, the return code of the installation is not available when monitoring the exit code of setup.exe.
-f  Forces any DB2 processes to stop before installing.
-i language
    Specifies the two-letter language code of the language in which to perform the installation.
-l log-file
    Full path and file name of the log file to use.
-m  Used with the -u option to show the progress dialog during the installation. However, it will not prompt for any input.
-p install-directory
    Changes the installation path of the product. Specifying this option overrides the installation path that is specified in the response file.
-t trace-file
    Generates a file with install trace information.
-u response-file
    Specifies the full path and file name of the response file to use.
-n DB2-copy-name
    Specifies the DB2 copy name that you want the install to use. Specifying this option overrides the installation path that is specified in the response file.
-?, -h
    Generates usage information.
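For example, a hedged sketch of a silent installation driven by a hypothetical response file, showing the progress dialog and writing a log (all paths are illustrative):

   setup -u c:\resp\db2ese.rsp -m -l c:\log\db2setup.log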
Command syntax:
sqlj [-help] [-dir=directory] [-compile=true|false] [-linemap=NO|YES] [-ser2class] [-status] [-version] [-Ccompiler-option] [-JJVM-option] SQLJ-source-file-name
Notes: 1 The -C-classpath and -C-sourcepath options are used by the SQLJ translator as well as by the Java compiler. Command parameters: -help Specifies that the SQLJ translator describes each of the options that the translator supports. If any other options are specified with -help, they are ignored. -dir=directory Specifies the name of the directory into which SQLJ puts .java files that are generated by the translator. The default directory is the directory that contains the SQLJ source files. The translator uses the directory structure of the SQLJ source files when it puts the generated files in directories. For example, suppose that you want the translator to process two files: v file1.sqlj, which is not in a Java package v file2.sqlj, which is in Java package sqlj.test Also suppose that you specify the parameter -dir=/src when you invoke the translator. The translator puts the Java source file for file1.sqlj in directory /src and puts the Java source file for file2.sqlj in directory /src/sqlj/test.
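The scenario above could be invoked as follows (the file locations are the hypothetical ones from the text):

   sqlj -dir=/src file1.sqlj sqlj/test/file2.sqlj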
no Do not generate SMAP files. This is the default. yes Generate SMAP files. An SMAP file name is SQLJ-source-file-name.java.smap. The SQLJ translator places the SMAP file in the same directory as the generated Java source file.
-C-help Specifies that the SQLJ translator displays help information for the Java compiler. -Ccompiler-option Specifies a valid Java compiler option that begins with a dash (-). Do not include spaces between -C and the compiler option. If you need to specify multiple compiler options, precede each compiler option with -C. For example:
-C-g -C-verbose
All options are passed to the Java compiler and are not used by the SQLJ translator, except for the following options: -classpath Specifies the user class path that is to be used by the SQLJ translator and the Java compiler. This value overrides the CLASSPATH environment variable. -sourcepath Specifies the source code path that the SQLJ translator and the Java compiler search for class or interface definitions. The SQLJ translator searches for .sqlj and .java files only in directories, not in JAR or zip files. -JJVM-option Specifies an option that is to be passed to the Java virtual machine (JVM) in which the sqlj command runs. The option must be a valid JVM option that begins with a dash (-). Do not include spaces between -J and the JVM option. If you need to specify multiple JVM options, precede each JVM option with -J. For example:
-J-Xmx128m -J-Xmine2M
SQLJ-source-file-name Specifies a list of SQLJ source files to be translated. This is a required parameter. All SQLJ source file names must have the extension .sqlj. Output:
v If the SQLJ translator invokes the Java compiler, the class files that the compiler generates. Examples:
sqlj -encoding=UTF8 -C-O MyApp.sqlj
Related reference: v db2sqljbind - SQLJ profile binder on page 252 v db2sqljcustomize - SQLJ profile customizer on page 259 v db2sqljprint - SQLJ profile printer on page 270
Command parameters:
option-flag
    Specifies a CLP option flag.
db2-command
    Specifies a DB2 command.
sql-statement
    Specifies an SQL statement.
?   Requests CLP general help.
-- comment
    Input lines beginning with the comment characters -- are treated as comments.
Copyright IBM Corp. 1993, 2006
-ffilename
    This option tells the command line processor to read command input from a file instead of from standard input. Default: OFF
-i  This option tells the command line processor to pretty print the XML data with proper indentation. This option will only affect the result set of XQuery statements. Default: OFF
-lfilename
    This option tells the command line processor to log commands in a history file. Default: OFF
-m  This option tells the command line processor to print the number of rows affected for INSERT/DELETE/UPDATE/MERGE. Default: OFF
-n  Removes the new line character within a single delimited token. If this option is not specified, the new line character is replaced with a space. This option must be used with the -t option. Default: OFF
-o  This option tells the command line processor to display output data and messages to standard output. Default: ON
-p  This option tells the command line processor to display a command line processor prompt when in interactive input mode. Default: ON
-q  This option tells the command line processor to preserve whitespaces and linefeeds in strings delimited with single or double quotes. When option q is ON, option n is ignored. Default: OFF
-rfilename
    This option tells the command line processor to write the report generated by a command to a file. Default: OFF
-s  This option tells the command line processor to stop execution if errors occur while executing commands in a batch file or in interactive mode. Default: OFF
-t  This option tells the command line processor to use a semicolon (;) as the statement termination character. Default: OFF
-tdx or -tdxx
    This option tells the command line processor to define and to use x or xx as the statement termination character or characters (1 or 2 characters in length). Default: OFF
-v  This option tells the command line processor to echo command text to standard output. Default: OFF
-w  This option tells the command line processor to display FETCH/SELECT warning messages. Default: ON
-zfilename
    This option tells the command line processor to redirect all output to a file. It is similar to the -r option, but includes any messages or error codes with the output. Default: OFF
The following is a detailed description of these options:

Show SQLCA Data Option (-a):
Displays SQLCA data to standard output after executing a DB2 command or an SQL statement. The SQLCA data is displayed instead of an error or success message. The default setting for this command option is OFF (+a or -a-). The -o and the -r options affect the -a option; see the option descriptions for details.

Auto-commit Option (-c):
This option specifies whether each command or statement is to be treated independently. If set ON (-c), each command or statement is automatically committed or rolled back. If the command or statement is successful, it and all successful commands and statements that were issued before it with autocommit OFF (+c or -c-) are committed. If, however, the command or statement fails, it and all successful commands and statements that were issued before it with autocommit OFF are rolled back. If set OFF (+c or -c-), COMMIT or ROLLBACK must be issued explicitly, or one of these actions will occur when the next command with autocommit ON (-c) is issued. The default setting for this command option is ON. The auto-commit option does not affect any other command line processor option.

Example: Consider the following scenario:
1. db2 create database test
2. db2 connect to test
3. db2 +c "create table a (c1 int)"
4. db2 select c2 from a
then returns an empty list.

XML Declaration Option (-d):
The -d option tells the command line processor whether to retrieve and display XML declarations of XML data. If set ON (-d), the XML declarations will be retrieved and displayed. If set OFF (+d or -d-), the XML declarations will not be retrieved and displayed. The default setting for this command option is OFF. The XML declaration option does not affect any other command line processor options.

Display SQLCODE/SQLSTATE Option (-e):
The -e{c|s} option tells the command line processor to display the SQLCODE (-ec) or the SQLSTATE (-es) to standard output. Options -ec and -es are not valid in CLP interactive mode. The default setting for this command option is OFF (+e or -e-). The -o and the -r options affect the -e option; see the option descriptions for details. The display SQLCODE/SQLSTATE option does not affect any other command line processor option. Example: To retrieve SQLCODE from the command line processor running on AIX, enter:
sqlcode=`db2 -ec +o db2-command`
Read from Input File Option (-f): The -ffilename option tells the command line processor to read input from a specified file, instead of from standard input. Filename is an absolute or relative file name which can include the directory path to the file. If the directory path is not specified, the current directory is used. When other options are combined with option -f, option -f must be specified last. For example:
db2 -tvf filename
This option cannot be changed from within the interactive mode. The default setting for this command option is OFF (+f or -f-). Commands are processed until the QUIT command or TERMINATE command is issued, or an end-of-file is encountered. If both this option and a database command are specified, the command line processor does not process any commands, and an error message is returned. Input file lines which begin with the comment characters -- are treated as comments by the command line processor. Comment characters must be the first non-blank characters on a line.
Chapter 2. Command Line Processor (CLP)
The default setting for this command option is OFF (+l or -l-). The log commands in history file option does not affect any other command line processor option.

Display Number of Rows Affected Option (-m):
The -m option tells the command line processor whether or not to print the number of rows affected for INSERT, DELETE, UPDATE, or MERGE. If set ON (-m), the number of rows affected will be displayed for INSERT/DELETE/UPDATE/MERGE statements. If set OFF (+m or -m-), the number of rows affected will not be displayed. For other statements, this option will be ignored. The default setting for this command option is OFF. The -o and the -r options affect the -m option; see the option descriptions for details.

Remove New Line Character Option (-n):
Removes the new line character within a single delimited token. If this option is not specified, the new line character is replaced with a space. This option cannot be changed from within the interactive mode. The default setting for this command option is OFF (+n or -n-).
The -p option is ignored if the -ffilename option is specified. The display DB2 interactive prompt option does not affect any other command line processor option.

Preserve Whitespaces and Linefeeds Option (-q):
The -q option tells the command line processor to preserve whitespaces and linefeeds in strings delimited with single or double quotes. The default setting for this command option is OFF (+q or -q-). If option -q is ON, option -n is ignored.

Save to Report File Option (-r):
The -rfilename option causes any output data generated by a command to be written to a specified file, and is useful for capturing a report that would otherwise scroll off the screen. Messages or error codes are not written to the file. Filename is an absolute or relative file name which can include the directory path to the file. If the directory path is not specified, the current directory is used. New report entries are appended to the file.
Statement Termination Character Option (-t):
The -t option tells the command line processor to use a semicolon (;) as the statement termination character, and disables the backslash (\) line continuation character. This option cannot be changed from within the interactive mode. The default setting for this command option is OFF (+t or -t-). To define termination characters 1 or 2 characters in length, use -td followed by the chosen character or characters. For example, -td%% sets %% as the statement termination characters. Alternatively, use the #SET TERMINATOR directive to set the statement termination characters. For example, #SET TERMINATOR %% sets %% as the statement termination characters. The termination character cannot be used to concatenate multiple statements from the command line, since only the last non-blank character on each input line is checked for a termination symbol. The statement termination character option does not affect any other command line processor option.

Verbose Output Option (-v):
The -v option causes the command line processor to echo (to standard
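For example, a sketch that sets @ as the termination character, so that semicolons can appear inside the statements of a script (the script name is hypothetical):

   db2 -td@ -f myscript.db2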
The return code can be one of the following:

Code  Description
0     DB2 command or SQL statement executed successfully
1     SELECT or FETCH statement returned no rows
2     DB2 command or SQL statement warning
4     DB2 command or SQL statement error
8     Command line processor system error
The command line processor does not provide a return code while a user is executing statements from interactive mode, or while input is being read from a file (using the -f option). A return code is available only after the user quits interactive mode, or when processing of an input file ends. In these cases, the return code is the logical OR of the distinct codes returned from the individual commands or statements executed to that point. For example, if a user in interactive mode issues commands resulting in return codes of 0, 1, and 2, a return code of 3 will be returned after the user quits interactive mode. The individual codes 0, 1, and 2 are not returned. Return code 3 tells the user that during interactive mode processing, one or more commands returned a 1, and one or more commands returned a 2. A return code of 4 results from a negative SQLCODE returned by a DB2 command or an SQL statement. A return code of 8 results only if the command line processor encounters a system error. If commands are issued from an input file or in interactive mode, and the command line processor experiences a system error (return code 8), command execution is halted immediately. If one or more DB2 commands or SQL statements end in error (return code 4), command execution stops if the -s (Stop Execution on Command Error) option is set; otherwise, execution continues. Related reference: v db2 - Command line processor invocation on page 311 v Command line processor options on page 312
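A hedged sketch of testing the return code from a shell script, assuming a query that returns no rows:

   db2 "select * from staff where 0 = 1"
   if [ $? -eq 1 ]; then
       echo "no rows returned"
   fi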
sets the tm_database field to NULL. This operation is case sensitive. A lowercase null is not interpreted as a null string, but rather as a string containing the letters null.

Customizing the Command Line Processor:
It is possible to customize the interactive input prompt by using the DB2_CLPPROMPT registry variable. This registry variable can be set to any text string of maximum length 100 and can contain the tokens %i, %ia, %d, %da and %n. Specific values will be substituted for these tokens at run-time.
Table 5. DB2_CLPPROMPT tokens and run-time values
%ia  Authorization ID of the current instance attachment
%i   Local alias of the currently attached instance. If no instance attachment exists, the value of the DB2INSTANCE registry variable. On Windows platforms only, if the DB2INSTANCE registry variable is not set, the value of the DB2INSTDEF registry variable.
%da  Authorization ID of the current database connection
%d   Local alias of the currently connected database. If no database connection exists, the value of the DB2DBDFT registry variable.
%n   New line
v If any token has no associated value at run-time, the empty string is substituted for that token. v The interactive input prompt will always present the authorization IDs, database names, and instance names in upper case, so as to be consistent with the connection and attachment information displayed at the prompt. v If the DB2_CLPPROMPT registry variable is changed within CLP interactive mode, the new value of DB2_CLPPROMPT will not take effect until CLP interactive mode has been closed and reopened. Examples: If DB2_CLPPROMPT is defined as (%ia@%i, %da@%d), the input prompt will have the following values: v No instance attachment and no database connection. DB2INSTANCE set to DB2. DB2DBDFT is not set.
(@DB2, @)
v (Windows) No instance attachment and no database connection. DB2INSTANCE and DB2DBDFT not set. DB2INSTDEF set to DB2.
(@DB2, @)
v No instance attachment and no database connection. DB2INSTANCE set to DB2. DB2DBDFT set to SAMPLE.
(@DB2, @SAMPLE)
v Instance attachment to instance DB2 with authorization ID tyronnem. DB2INSTANCE set to DB2. DB2DBDFT set to SAMPLE.
(TYRONNEM@DB2, @SAMPLE)
v Database connection to database sample with authorization ID horman. DB2INSTANCE set to DB2. DB2DBDFT set to SAMPLE.
(@DB2, HORMAN@SAMPLE)
v Instance attachment to instance DB2 with authorization ID tyronnem. Database connection to database sample with authorization ID horman. DB2INSTANCE set to DB2. DB2DBDFT not set.
(TYRONNEM@DB2, HORMAN@SAMPLE)
Using the Command Line Processor in Command Files: CLP requests to the database manager can be imbedded in a shell script command file. The following example shows how to enter the CREATE TABLE statement in a shell script command file:
db2 create table mytable (name VARCHAR(20), color CHAR(10))
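A slightly fuller sketch of such a script (the database name SAMPLE is an assumption, and the connect/reset steps are illustrative):

   #!/bin/sh
   db2 connect to sample
   db2 "create table mytable (name VARCHAR(20), color CHAR(10))"
   db2 connect reset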
For more information about commands and command files, see the appropriate operating system manual.

Command Line Processor Design:
The command line processor consists of two processes: the front-end process (the DB2 command), which acts as the user interface, and the back-end process (db2bp), which maintains a database connection.

Maintaining Database Connections
DB2BQTIME When the command line processor is invoked, the front-end process checks if the back-end process is already active. If it is active, the front-end process reestablishes a connection to it. If it is not active, the front-end process activates it. The front-end process then idles for the duration specified by the DB2BQTIME variable, and checks again. The front-end process continues to check for the number of times specified by the DB2BQTRY variable, after which, if the back-end process is still not active, it times out and returns an error message. DB2BQTRY Works in conjunction with the DB2BQTIME variable, and specifies the number of times the front-end process tries to determine whether the back-end process is active.
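For example, these registry variables could be adjusted with db2set (the values shown are illustrative, not recommendations):

   db2set DB2BQTIME=2
   db2set DB2BQTRY=10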
is interpreted as select <the names of all files> from org where division. The result, an SQL syntax error, is redirected to the file Eastern. The following syntax produces the correct output:
db2 "select * from org where division > 'Eastern'"
Special characters vary from platform to platform. In the AIX Korn shell, the above example could be rewritten using an escape character (\), such as \*, \>, or \'. Most operating system environments allow input and output to be redirected. For example, if a connection to the SAMPLE database has been made, the following request queries the STAFF table, and sends the output to a file named staflist.txt in the mydata directory:
db2 "select * from staff" > mydata/staflist.txt
For environments where output redirection is not supported, CLP options can be used. For example, the request can be rewritten as
db2 -r mydata\staflist.txt "select * from staff"
db2 -z mydata\staflist.txt "select * from staff"
The command line processor is not a programming language. For example, it does not support host variables, and the statement,
db2 connect to :HostVar in share mode
is syntactically incorrect, because :HostVar is not a valid database name. The command line processor represents SQL NULL values as hyphens (-). If the column is numeric, the hyphen is placed at the right of the column. If the column is not numeric, the hyphen is at the left. To correctly display the national characters for single-byte (SBCS) languages from the DB2 command line processor window, a TrueType font must be selected. For example, in a Windows environment, open the command window properties notebook and select a font such as Lucida Console. Related concepts: v DB2 Command Line Processor (CLP) in Developing SQL and External Routines
Chapter 2. Command Line Processor (CLP)
Command Line Processor Help
Invoking message help from the command line processor
Message help describes the cause of a message and describes any action you should take in response to the error. Procedure: To invoke message help, open the command line processor and enter:
? XXXnnnnn
where XXXnnnnn represents a valid message identifier. For example, ? SQL30081 displays help about the SQL30081 message. Related concepts: v Introduction to Messages in Message Reference Volume 1 Related reference: v db2 - Command line processor invocation on page 311
Invoking command help from the command line processor
Command help explains the syntax of commands in the command line processor. Procedure: To invoke command help, open the command line processor and enter:
? command
where command represents a keyword or the entire command. For example, ? catalog displays help for all of the CATALOG commands, while ? catalog database displays help only for the CATALOG DATABASE command. Related tasks: v Invoking message help from the command line processor on page 326 v Accessing help from a DB2 tool, window, wizard or advisor in Online DB2 Information Center v Displaying SQL state help from the command line processor on page 833 v Starting the DB2 Information Center in Online DB2 Information Center Related reference: v db2 - Command line processor invocation on page 311
ACTIVATE DATABASE
Activates the specified database and starts up all necessary database services, so that the database is available for connection and use by any application. Scope:
This command activates the specified database on all nodes within the system. If one or more of these nodes encounters an error during activation of the database, a warning is returned. The database remains activated on all nodes on which the command has succeeded. Authorization: One of the following: v sysadm v sysctrl v sysmaint Required connection: None Command syntax:
ACTIVATE {DATABASE | DB} database-alias [USER username [USING password]]
Command parameters: database-alias Specifies the alias of the database to be started. USER username Specifies the user starting the database. USING password Specifies the password for the user name. Usage notes: If a database has not been started, and a CONNECT TO (or an implicit connect) is issued in an application, the application must wait while the database manager starts the required database, before it can do any work with that database. However, once the database is started, other applications can simply connect and use it without spending time on its start up. Database administrators can use ACTIVATE DATABASE to start up selected databases. This eliminates any application time spent on database initialization. Databases initialized by ACTIVATE DATABASE can be shut down using the DEACTIVATE DATABASE command, or using the db2stop command. If a database was started by a CONNECT TO (or an implicit connect) and subsequently an ACTIVATE DATABASE is issued for that same database, then DEACTIVATE DATABASE must be used to shut down that database. If ACTIVATE DATABASE was not used to start the database, the database will shut down when the last application disconnects.
ACTIVATE DATABASE behaves in a similar manner to a CONNECT TO (or an implicit connect) when working with a database requiring a restart (for example, database in an inconsistent state). The database will be restarted before it can be initialized by ACTIVATE DATABASE. Restart will only be performed if the database is configured to have AUTORESTART ON. The application issuing the ACTIVATE DATABASE command cannot have an active database connection to any database. Related concepts: v Quick-start tips for performance tuning in Performance Guide Related reference: v STOP DATABASE MANAGER on page 736 v DEACTIVATE DATABASE on page 411 v sqle_activate_db API - Activate database in Administrative API Reference
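Example (hypothetical; assumes a database cataloged under the alias SAMPLE): pre-start the database so that later connections avoid initialization cost, then shut it down explicitly when it is no longer needed:

```shell
# Pre-start the database; connections made afterwards skip startup work.
db2 activate database sample
# ... applications connect and disconnect here ...
# Explicitly shut the database down again.
db2 deactivate database sample
```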
ADD CONTACT
The command adds a contact to the contact list which can be either defined locally on the system or in a global list. Contacts are users to whom processes such as the Scheduler and Health Monitor send messages. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None. Required connection: None. Local execution only: this command cannot be used with a remote connection. Command syntax:
ADD CONTACT name TYPE {EMAIL | PAGE [{MAXIMUM PAGE LENGTH | MAX LEN} pg-length]} ADDRESS recipients-address [DESCRIPTION contact-description]
Command parameters: CONTACT name The name of the contact that will be added. By default the contact will be added in the local system, unless the DB2 administration server configuration parameter contact_host points to another system. TYPE Method of contact, which must be one of the following two: EMAIL This contact wishes to be notified by e-mail at (ADDRESS). PAGE This contact wishes to be notified by a page sent to ADDRESS. MAXIMUM PAGE LENGTH pg-length If the paging service has a message-length restriction, it is specified here in characters. The notification system uses the SMTP protocol to send the notification to the mail server specified by the DB2 Administration Server configuration parameter smtp_server. It is the responsibility of the SMTP server to send the e-mail or call the pager. ADDRESS recipients-address The SMTP mailbox address of the recipient. For example, [email protected]. The smtp_server DAS configuration parameter must be set to the name of the SMTP server. DESCRIPTION contact description A textual description of the contact. This has a maximum length of 128 characters.
Related tasks: v Enabling health alert notification in System Monitor Guide and Reference Related reference: v db2AddContact API - Add a contact to whom notification messages can be sent in Administrative API Reference v ADD CONTACT command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
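Example (hypothetical; the contact name and SMTP address are invented for illustration):

```shell
# Define an e-mail contact that monitoring processes can notify.
db2 add contact joe type email address joe@example.com description "primary DBA"
```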
ADD CONTACTGROUP
Adds a new contact group to the list of groups defined on the local system. A contact group is a list of users and groups to whom monitoring processes such as the Scheduler and Health Monitor can send messages. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None Required connection: None. Local execution only: this command cannot be used with a remote connection. Command Syntax:
ADD CONTACTGROUP name {CONTACT name | GROUP name} [, {CONTACT name | GROUP name}] ... [DESCRIPTION group-description]
Command Parameters: CONTACTGROUP name Name of the new contact group, which must be unique among the set of groups on the system. CONTACT name Name of the contact which is a member of the group. A contact can be defined with the ADD CONTACT command after it has been added to a group. GROUP name Name of the contact group of which this group is a member. DESCRIPTION group description Optional. A textual description of the contact group. Related tasks: v Enabling health alert notification in System Monitor Guide and Reference Related reference: v db2AddContactGroup API - Add a contact group to whom notification messages can be sent in Administrative API Reference v ADD CONTACTGROUP command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
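Example (hypothetical; the group and contact names are invented, with the contacts assumed to have been defined with ADD CONTACT):

```shell
# Group two existing contacts so monitoring processes can notify both at once.
db2 add contactgroup dbateam contact joe contact anna description "on-call DBAs"
```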
ADD DBPARTITIONNUM
Adds a new database partition server to the partitioned database environment. This command also creates a database partition for all databases on the new database partition server. The user can specify the source database partition server for the definitions of any system temporary table spaces to be created with the new database partition, or specify that no system temporary table spaces are to be created. The command must be issued from the database partition server that is being added. Scope: This command only affects the machine on which it is executed. Authorization: One of the following: v sysadm v sysctrl Required connection: None Command syntax:
ADD DBPARTITIONNUM [LIKE DBPARTITIONNUM db-partition-number | WITHOUT TABLESPACES]
Command parameters: LIKE DBPARTITIONNUM db-partition-number Specifies that the containers for the new system temporary table spaces are the same as the containers of the database at the database partition server specified by db-partition-number. The database partition server specified must already be defined in the db2nodes.cfg file. For system temporary table spaces that are defined to use automatic storage (in other words, system temporary table spaces that were created with the MANAGED BY AUTOMATIC STORAGE clause of the CREATE TABLESPACE statement or where no MANAGED BY CLAUSE was specified at all), the containers will not necessarily match those from the partition specified. Instead, containers will automatically be assigned by the database manager based on the storage paths that are associated with the database. This may or may not result in the same containers being used on these two partitions. WITHOUT TABLESPACES Specifies that containers for the system temporary table spaces are not created for any of the database partitions. The ALTER TABLESPACE statement must be used to add system temporary table space containers to each database partition before the database can be used. If no option is specified, containers for the system temporary table spaces will be the same as the containers on the catalog partition for each database. The catalog partition can be a different database partition for
each database in the partitioned database environment. This option is ignored for system temporary table spaces that are defined to use automatic storage (in other words, system temporary table spaces that were created with the MANAGED BY AUTOMATIC STORAGE clause of the CREATE TABLESPACE statement or where no MANAGED BY CLAUSE was specified at all). For these table spaces, there is no way to defer container creation. Containers will automatically be assigned by the database manager based on the storage paths that are associated with the database. Usage notes: Before adding a new database partition server, ensure that there is sufficient storage for the containers that must be created for all databases in the instance. The add database partition server operation creates an empty database partition for every database that exists in the instance. The configuration parameters for the new database partitions are set to the default values. If an add database partition server operation fails while creating a database partition locally, it enters a clean-up phase, in which it locally drops all databases that have been created. This means that the database partitions are removed only from the database partition server being added. Existing database partitions remain unaffected on all other database partition servers. If the clean-up phase fails, no further clean up is done, and an error is returned. The database partitions on the new database partition cannot contain user data until after the ALTER DATABASE PARTITION GROUP statement has been used to add the database partition to a database partition group. This command will fail if a create database or a drop database operation is in progress. The command can be reissued once the competing operation has completed. This command will fail, if at any time in a database in the system a user table with an XML column has been created, successfully or not, or an XSR object has been registered, successfully or not. 
To determine whether or not a database is enabled for automatic storage, ADD DBPARTITIONNUM has to communicate with the catalog partition for each of the databases in the instance. If automatic storage is enabled then the storage path definitions are retrieved as part of that communication. Likewise, if system temporary table spaces are to be created with the database partitions, ADD DBPARTITIONNUM might have to communicate with another database partition server to retrieve the table space definitions for the database partitions that reside on that server. The start_stop_time database manager configuration parameter is used to specify the time, in minutes, by which the other database partition server must respond with the automatic storage and table space definitions. If this time is exceeded, the command fails. If this situation occurs, increase the value of start_stop_time, and reissue the command. Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM. Related concepts:
Chapter 3. CLP Commands
v Automatic storage databases in Administration Guide: Implementation Related reference: v START DATABASE MANAGER on page 728
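Examples (illustrative sketches; the partition number is an assumption). The command takes no database name because it creates a partition for every database in the instance, and it must be run from the database partition server being added:

```shell
# Model the new partition's system temporary table spaces on those of
# database partition 0 (assumed to be defined in db2nodes.cfg).
db2 add dbpartitionnum like dbpartitionnum 0

# Alternatively, defer container creation; containers must then be added
# with ALTER TABLESPACE before the database can be used on this partition.
db2 add dbpartitionnum without tablespaces
```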
ADD XMLSCHEMA DOCUMENT
Description: TO relational-identifier Specifies the relational name of a registered but incomplete XML schema to which additional schema documents are added. ADD document-URI Specifies the uniform resource identifier (URI) of an XML schema document to be added to this schema, as the document would be referenced from another XML document. FROM content-URI Specifies the URI where the XML schema document is located. Only a file scheme URI is supported. WITH properties-URI Specifies the URI of a properties document for the XML schema. Only a file scheme URI is supported. COMPLETE Indicates that there are no more XML schema documents to be added. If specified, the schema is validated and marked as usable if no errors are found. WITH schema-properties-URI Specifies the URI of a properties document for the XML schema. Only a file scheme URI is supported.
Related reference: v COMPLETE XMLSCHEMA on page 394 v REGISTER XMLSCHEMA on page 640
ARCHIVE LOG
Closes and truncates the active log file for a recoverable database. Authorization: One of the following: v sysadm v sysctrl v sysmaint v dbadm Required connection: None. This command establishes a database connection for the duration of the command. Command syntax:
ARCHIVE LOG FOR {DATABASE | DB} database-alias [USER username [USING password]] [ON {ALL DBPARTITIONNUMS [EXCEPT DBPARTITIONNUMS (db-partition-number [TO db-partition-number], ...)] | DBPARTITIONNUM(S) (db-partition-number [TO db-partition-number], ...)}]
Command parameters: DATABASE database-alias Specifies the alias of the database whose active log is to be archived. USER username Identifies the user name under which a connection will be attempted.
USING password Specifies the password to authenticate the user name. ON ALL DBPARTITIONNUMS Specifies that the command should be issued on all database partitions in the db2nodes.cfg file. This is the default if a database partition number clause is not specified. EXCEPT Specifies that the command should be issued on all database partitions in the db2nodes.cfg file, except those specified in the database partition number list. ON DBPARTITIONNUM/ON DBPARTITIONNUMS Specifies that the logs should be archived for the specified database on a set of database partitions. db-partition-number Specifies a database partition number in the database partition number list. TO db-partition-number Used when specifying a range of database partitions for which the logs should be archived. All database partitions from the first database partition number specified up to and including the second database partition number specified are included in the database partition number list. Usage notes: This command can be used to collect a complete set of log files up to a known point. The log files can then be used to update a standby database. This command can only be executed when the invoking application or shell does not have a database connection to the specified database. This prevents a user from executing the command with uncommitted transactions. As such, the ARCHIVE LOG command will not forcibly commit the user's incomplete transactions. If the invoking application or shell already has a database connection to the specified database, the command will terminate and return an error. If another application has transactions in progress with the specified database when this command is executed, there will be a slight performance degradation since the command flushes the log buffer to disk. Any other transactions attempting to write log records to the buffer will have to wait until the flush is complete.
If used in a partitioned database environment, a subset of database partitions can be specified by using a database partition number clause. If the database partition number clause is not specified, the default behavior for this command is to close and archive the active log on all database partitions. Using this command will use up a portion of the active log space due to the truncation of the active log file. The active log space will resume its previous size when the truncated log becomes inactive. Frequent use of this command can drastically reduce the amount of the active log space available for transactions. Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM. v The keyword NODES can be substituted for DBPARTITIONNUMS.
Related reference: v db2ArchiveLog API - Archive the active log file in Administrative API Reference
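Examples (hypothetical; assume a recoverable database cataloged as SAMPLE and no current connection to it from this shell):

```shell
# Close and archive the active log on all database partitions (the default).
db2 archive log for database sample
# Archive only on database partitions 0 through 3.
db2 archive log for database sample on dbpartitionnums (0 to 3)
```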
ATTACH
Enables an application to specify the instance at which instance-level commands (CREATE DATABASE and FORCE APPLICATION, for example) are to be executed. This instance can be the current instance, another instance on the same workstation, or an instance on a remote workstation. Authorization: None Required connection: None. This command establishes an instance attachment. Command syntax:
ATTACH [TO nodename] [USER username [USING password [NEW password CONFIRM password]] | CHANGE PASSWORD]
Command parameters: TO nodename Alias of the instance to which the user wants to attach. This instance must have a matching entry in the local node directory. The only exception to this is the local instance (as specified by the DB2INSTANCE environment variable) which can be specified as the object of an attach, but which cannot be used as a node name in the node directory. USER username Specifies the authentication identifier. When attaching to a DB2 database instance on a Windows operating system, the user name can be specified in a format compatible with Microsoft Security Account Manager (SAM). The qualifier must be a NetBIOS-style name, which has a maximum length of 15 characters. For example, domainname\username. USING password Specifies the password for the user name. If a user name is specified, but a password is not specified, the user is prompted for the current password. The password is not displayed at entry. NEW password Specifies the new password that is to be assigned to the user name. Passwords can be up to 18 characters in length. The system on which the password will be changed depends on how user authentication has been set up. CONFIRM password A string that must be identical to the new password. This parameter is used to catch entry errors.
CHANGE PASSWORD If this option is specified, the user is prompted for the current password, a new password, and for confirmation of the new password. Passwords are not displayed at entry. Examples: Catalog two remote nodes:
db2 catalog tcpip node node1 remote freedom server server1 db2 catalog tcpip node node2 remote flash server server1
Attach to the first node, force all users, and then detach:
db2 attach to node1 db2 force application all db2 detach
After the command returns agent IDs 1, 2 and 3, force 1 and 3, and then detach:
db2 force application (1, 3) db2 detach
Attach to the current instance (not necessary, will be implicit), force all users, then detach (AIX only):
db2 attach to $DB2INSTANCE db2 force application all db2 detach
Usage notes: If nodename is omitted from the command, information about the current state of attachment is returned. If ATTACH has not been executed, instance-level commands are executed against the current instance, specified by the DB2INSTANCE environment variable. Related tasks: v Attaching to and detaching from a non-default instance of the database manager in Administration Guide: Implementation Related reference: v DETACH on page 423 v sqleatcp API - Attach to instance and change password in Administrative API Reference v sqleatin API - Attach to instance in Administrative API Reference
AUTOCONFIGURE
Calculates and displays initial values for the buffer pool size, database configuration and database manager configuration parameters, with the option of applying these recommended values. Authorization: sysadm. Required connection: Database. Command syntax:
AUTOCONFIGURE [USING input-keyword param-value ...] [APPLY {DB ONLY | DB AND DBM | NONE}] [ON CURRENT NODE]

Table 8. Valid input keywords and parameter values
Keyword: workload_type; Default value: mixed
Keyword: num_stmts; Valid values: 1-1 000 000; Default value: 10
Keyword: tpm; Valid values: 1-200 000; Default value: 60
Table 8. Valid input keywords and parameter values (continued)
Keyword: admin_priority; Valid values: performance, recovery, both; Default value: both; Explanation: Optimize for better performance (more transactions per minute) or better recovery time
Keyword: is_populated; Default value: yes; Explanation: Is the database populated with data?
Keyword: num_local_apps; Default value: 0; Explanation: Number of connected local applications
Keyword: num_remote_apps; Default value: 10; Explanation: Number of connected remote applications
Keyword: isolation; Default value: RR; Explanation: Maximum isolation level of applications connecting to this database (Repeatable Read, Read Stability, Cursor Stability, Uncommitted Read). It is only used to determine values of other configuration parameters. Nothing is set to restrict the applications to a particular isolation level and it is safe to use the default value.
Keyword: bp_resizeable; Valid values: yes, no; Default value: yes; Explanation: Are buffer pools resizeable?
APPLY DB ONLY Displays the recommended values for the database configuration and the buffer pool settings based on the current database manager configuration. Applies the recommended changes to the database configuration and the buffer pool settings. DB AND DBM Displays and applies the recommended changes to the database manager configuration, the database configuration, and the buffer pool settings. NONE Displays the recommended changes, but does not apply them. ON CURRENT NODE In the Database Partitioning Feature (DPF), the Configuration Advisor updates the database configuration on all nodes by default. Running with the ON CURRENT NODE option makes the advisor apply the recommended database configuration to the coordinator (connection) node only. The bufferpool changes are always applied to the system catalogs. Thus, all nodes are affected. The ON CURRENT NODE option does not matter for bufferpool recommendations.
Usage notes: v On systems with multiple logical partitions, the mem_percent parameter refers to the percentage of memory that is to be used by all logical partitions. For example, if DB2 uses 25% of the memory on the system, specify 25% regardless of the number of logical partitions. The database configuration recommendations made, however, will be adjusted for one logical partition. v This command makes configuration recommendations for the currently connected database, assuming that the database is the only active database on the system. If more than one database is active on the system, adjust the mem_percent parameter to reflect the current database's share of memory. For example, if the DB2 database uses 80% of the system's memory and there are two active databases on the system that should share the resources equally, specify 40% (80% divided by 2 databases) for the parameter mem_percent. v When explicitly invoking the Configuration Advisor with the AUTOCONFIGURE command, the setting of the DB2_ENABLE_AUTOCONFIG_DEFAULT registry variable will be ignored. v Running the AUTOCONFIGURE command on a database will recommend enablement of the Self Tuning Memory Manager. However, if you run the AUTOCONFIGURE command on a database in an instance where SHEAPTHRES is not zero, sort memory tuning (SORTHEAP) will not be enabled automatically. To enable sort memory tuning (SORTHEAP), you must set SHEAPTHRES equal to zero using the UPDATE DATABASE MANAGER CONFIGURATION command. Note that changing the value of SHEAPTHRES may affect the sort memory usage in your previously existing databases.
Related concepts: v Configuration parameters in Performance Guide Related tasks: v Defining the scope of configuration parameters using the Configuration Advisor in Administration Guide: Implementation v Configuring DB2 with configuration parameters in Performance Guide Related reference: v db2AutoConfig API - Access the Configuration Advisor in Administrative API Reference v ADMIN_CMD procedure Run administrative commands in Administrative SQL Routines and Views v AUTOCONFIGURE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
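The mem_percent arithmetic from the usage notes can be sketched directly. The figures below are the illustrative numbers from that example (DB2 holding 80% of system memory, shared equally by two active databases), not recommendations:

```shell
# Compute the per-database memory share described in the usage notes.
db2_share_percent=80     # assumed: DB2 holds 80% of system memory
active_databases=2       # assumed: two databases share it equally
mem_percent=$(( db2_share_percent / active_databases ))
echo "$mem_percent"      # prints 40
# The result would then feed the Configuration Advisor, e.g. (not run here):
# db2 autoconfigure using mem_percent 40 workload_type mixed apply db only
```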
BACKUP DATABASE
Creates a backup copy of a database or a table space. For information on the backup operations supported by DB2 database systems between different operating systems and hardware platforms, see Backup and restore operations between different operating systems and hardware platforms in the Related concepts section. Scope: This command only affects the database partition on which it is executed. Authorization: One of the following: v sysadm v sysctrl v sysmaint Required connection: Database. This command automatically establishes a connection to the specified database. If a connection to the specified database already exists, that connection will be terminated and a new connection established specifically for the backup operation. The connection is terminated at the completion of the backup operation. Command syntax:
BACKUP {DATABASE | DB} database-alias [USER username [USING password]]
  [TABLESPACE (tablespace-name, ...)] [ONLINE] [INCREMENTAL [DELTA]]
  [USE {TSM | XBSA} [OPEN num-sessions SESSIONS] [OPTIONS {options-string | @file-name}]
    | TO dir/dev [, dir/dev ...]
    | LOAD library-name [OPEN num-sessions SESSIONS] [OPTIONS {options-string | @file-name}]]
  [WITH num-buffers BUFFERS] [BUFFER buffer-size] [PARALLELISM n]
  [UTIL_IMPACT_PRIORITY [priority]] [COMPRESS [COMPRLIB name [EXCLUDE]] [COMPROPTS string]]
  [{INCLUDE | EXCLUDE} LOGS] [WITHOUT PROMPTING]
Command parameters: DATABASE database-alias Specifies the alias of the database to back up. USER username Identifies the user name under which to back up the database. USING password The password used to authenticate the user name. If the password is omitted, the user is prompted to enter it. TABLESPACE tablespace-name A list of names used to specify the table spaces to be backed up. ONLINE Specifies online backup. The default is offline backup. Online backups are only available for databases configured with logretain or userexit enabled. During an online backup, DB2 obtains IN (Intent None) locks on all tables existing in SMS table spaces as they are processed and S (Share) locks on LOB data in SMS table spaces. INCREMENTAL Specifies a cumulative (incremental) backup image. An incremental backup image is a copy of all database data that has changed since the most recent successful, full backup operation. DELTA Specifies a non-cumulative (delta) backup image. A delta backup image is a copy of all database data that has changed since the most recent successful backup operation of any type. USE TSM Specifies that the backup is to use Tivoli Storage Manager output. USE XBSA Specifies that the XBSA interface is to be used. Backup Services APIs (XBSA) are an open application programming interface for applications or facilities needing data storage management for backup or archiving purposes. OPTIONS options-string Specifies options to be used for the backup operation. The string will be passed to the vendor support library, for example TSM, exactly as it was entered, without the quotes. Specifying this option overrides the value specified by the VENDOROPT database configuration parameter. @file-name Specifies that the options to be used for the backup operation are contained in a file located on the DB2 server. The string will be passed to the vendor support library, for example TSM. The file must be a fully qualified file name.
OPEN num-sessions SESSIONS The number of I/O sessions to be created between DB2 and TSM or
another backup vendor product. This parameter has no effect when backing up to tape, disk, or other local device. TO dir/dev A list of directory or tape device names. The full path on which the directory resides must be specified. If USE TSM, TO, and LOAD are omitted, the default target directory for the backup image is the current working directory of the client computer. This target directory or device must exist on the database server. This parameter can be repeated to specify the target directories and devices that the backup image will span. If more than one target is specified (target1, target2, and target3, for example), target1 will be opened first. The media header and special files (including the configuration file, table space table, and history file) are placed in target1. All remaining targets are opened, and are then used in parallel during the backup operation. Because there is no general tape support on Windows operating systems, each type of tape device requires a unique device driver. To back up to the FAT file system on Windows operating systems, users must conform to the 8.3 naming restriction. Use of tape devices or floppy disks might generate messages and prompts for user input. Valid response options are:
c - Continue. Continue using the device that generated the warning message (for example, when a new tape has been mounted).
d - Device terminate. Stop using only the device that generated the warning message (for example, when there are no more tapes).
t - Terminate. Abort the backup operation.
If the tape system does not support the ability to uniquely reference a backup image, it is recommended that multiple backup copies of the same database not be kept on the same tape. LOAD library-name The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. It can contain the full path. If the full path is not given, it will default to the path on which the user exit program resides. WITH num-buffers BUFFERS The number of buffers to be used. DB2 will automatically choose an optimal value for this parameter unless you explicitly enter a value. However, when creating a backup to multiple locations, a larger number of buffers can be used to improve performance. BUFFER buffer-size The size, in 4 KB pages, of the buffer used when building the backup image. DB2 will automatically choose an optimal value for this parameter unless you explicitly enter a value. The minimum value for this parameter is 8 pages. If using tape with variable block size, reduce the buffer size to within the range that the tape device supports. Otherwise, the backup operation might succeed, but the resulting image might not be recoverable. With most versions of Linux, using DB2's default buffer size for backup operations to a SCSI tape device results in error SQL2025N, reason code 75. To prevent the overflow of Linux internal SCSI buffers, use this formula:
bufferpages <= ST_MAX_BUFFERS * ST_BUFFER_BLOCKS / 4
where bufferpages is the value you want to use with the BUFFER parameter, and ST_MAX_BUFFERS and ST_BUFFER_BLOCKS are defined in the Linux kernel under the drivers/scsi directory.
PARALLELISM n Determines the number of table spaces which can be read in parallel by the backup utility. DB2 will automatically choose an optimal value for this parameter unless you explicitly enter a value.
UTIL_IMPACT_PRIORITY priority Specifies that the backup will run in throttled mode, with the priority specified. Throttling allows you to regulate the performance impact of the backup operation. Priority can be any number between 1 and 100, with 1 representing the lowest priority, and 100 representing the highest priority. If the UTIL_IMPACT_PRIORITY keyword is specified with no priority, the backup will run with the default priority of 50. If UTIL_IMPACT_PRIORITY is not specified, the backup will run in unthrottled mode. An impact policy must be defined by setting the util_impact_lim configuration parameter for a backup to run in throttled mode.
COMPRESS Indicates that the backup is to be compressed.
COMPRLIB name Indicates the name of the library to be used to perform the compression. The name must be a fully qualified path referring to a file on the server. If this parameter is not specified, the default DB2 compression library will be used. If the specified library cannot be loaded, the backup will fail.
EXCLUDE Indicates that the compression library will not be stored in the backup image.
COMPROPTS string Describes a block of binary data that will be passed to the initialization routine in the compression library. DB2 will pass this string directly from the client to the server, so any issues of byte reversal or code page conversion will have to be handled by the compression library. If the first character of the data block is @, the remainder of the data will be interpreted by DB2 as the name of a file residing on the server.
DB2 will then replace the contents of string with the contents of this file and will pass this new value to the initialization routine instead. The maximum length for string is 1024 bytes.
EXCLUDE LOGS Specifies that the backup image should not include any log files. When performing an offline backup operation, logs are excluded whether or not this option is specified.
INCLUDE LOGS Specifies that the backup image should include the range of log files required to restore and roll forward this image to some consistent point in time. This option is not valid for an offline backup.
WITHOUT PROMPTING Specifies that the backup will run unattended, and that any actions which normally require user intervention will return an error message.
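As a sketch, the Linux SCSI buffer formula above can be evaluated directly in the shell. The kernel constant values used here are hypothetical placeholders; the real values of ST_MAX_BUFFERS and ST_BUFFER_BLOCKS must be read from the drivers/scsi source of the running kernel:

```shell
# Hypothetical kernel constants; read the real values from the
# drivers/scsi directory of your kernel source tree.
ST_MAX_BUFFERS=16
ST_BUFFER_BLOCKS=32

# Largest safe value for the BUFFER option, in 4 KB pages:
bufferpages=$(( ST_MAX_BUFFERS * ST_BUFFER_BLOCKS / 4 ))
echo "$bufferpages"
```

A BUFFER value at or below this bound avoids the SQL2025N buffer-overflow condition described above.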
Examples: 1. In the following example, the database WSDB is defined on all 4 database partitions, numbered 0 through 3. The path /dev3/backup is accessible from all database partitions. Database partition 0 is the catalog partition, and needs to be backed up separately since this is an offline backup. To perform an offline backup of all the WSDB database partitions to /dev3/backup, issue the following commands from one of the database partitions:
db2_all '<<+0< db2 BACKUP DATABASE wsdb TO /dev3/backup'
db2_all '|<<-0< db2 BACKUP DATABASE wsdb TO /dev3/backup'
In the second command, the db2_all utility will issue the same backup command to each database partition in turn (except database partition 0). All four database partition backup images will be stored in the /dev3/backup directory.
2. In the following example, database SAMPLE is backed up to a TSM server using two concurrent TSM client sessions. DB2 calculates the optimal buffer size for this environment.
db2 backup database sample use tsm open 2 sessions with 4 buffers
3. In the following example, a table space-level backup of table spaces (syscatspace, userspace1) of database payroll is done to tapes.
db2 backup database payroll tablespace (syscatspace, userspace1) to /dev/rmt0, /dev/rmt1 with 8 buffers without prompting
4. The USE TSM OPTIONS keywords can be used to specify the TSM information to use for the backup operation. The following example shows how to use the USE TSM OPTIONS keywords to specify a fully qualified file name:
db2 backup db sample use TSM options @/u/dmcinnis/myoptions.txt
The file myoptions.txt contains the following information:
-fromnode=bar
-fromowner=dmcinnis
5. Following is a sample weekly incremental backup strategy for a recoverable database. It includes a weekly full database backup operation, a daily non-cumulative (delta) backup operation, and a mid-week cumulative (incremental) backup operation:
(Sun) db2 backup db sample use tsm
(Mon) db2 backup db sample online incremental delta use tsm
(Tue) db2 backup db sample online incremental delta use tsm
(Wed) db2 backup db sample online incremental use tsm
(Thu) db2 backup db sample online incremental delta use tsm
(Fri) db2 backup db sample online incremental delta use tsm
(Sat) db2 backup db sample online incremental use tsm
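The weekly schedule above can also be generated programmatically. The following shell sketch (the helper function name is invented for illustration) maps each day to the corresponding command: a full backup on Sunday, cumulative incrementals on Wednesday and Saturday, and deltas on the remaining days:

```shell
# Illustrative helper: print the backup command for a given day of the
# weekly strategy (Sun = full, Wed/Sat = cumulative, others = delta).
backup_cmd() {
  case "$1" in
    Sun)     echo "db2 backup db sample use tsm" ;;
    Wed|Sat) echo "db2 backup db sample online incremental use tsm" ;;
    *)       echo "db2 backup db sample online incremental delta use tsm" ;;
  esac
}

backup_cmd Sun
backup_cmd Wed
backup_cmd Mon
```

Such a helper could be called from a scheduler (for example, cron) to drive the strategy unattended.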
6. In the following example, three identical target directories are specified for a backup operation on database SAMPLE. You might want to do this if the target file system is made up of multiple physical disks.
db2 backup database sample to /dev3/backup, /dev3/backup, /dev3/backup
The data will be concurrently backed up to the three target directories, and three backup images will be generated with extensions .001, .002, and .003. Usage notes: The data in a backup cannot be protected by the database server. Make sure that backups are properly safeguarded, particularly if the backup contains LBAC-protected data.
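For illustration of the numbered extensions mentioned above: disk backup images are named from the database alias, backup type, instance, node, catalog node, and timestamp, with the sequence number as the file extension. The instance name and timestamp below are invented, so the exact pattern may differ on your system; only the .001 through .003 extensions correspond to the three-target example:

```shell
# Invented instance name and timestamp; only the numbered extensions
# (.001-.003) are the point of this sketch.
for seq in 001 002 003; do
  echo "SAMPLE.0.db2inst1.NODE0000.CATN0000.20060315120000.$seq"
done
```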
When backing up to tape, use of a variable block size is currently not supported. If you must use this option, ensure that you have well-tested procedures in place that enable you to recover successfully, using backup images that were created with a variable block size. When using a variable block size, you must specify a backup buffer size that is less than or equal to the maximum limit for the tape devices that you are using. For optimal performance, the buffer size must be equal to the maximum block size limit of the device being used.
Related concepts:
v Backup and restore operations between different operating systems and hardware platforms in Data Recovery and High Availability Guide and Reference
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference
Related tasks:
v Using backup in Data Recovery and High Availability Guide and Reference
Related reference:
v BACKUP DATABASE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
BIND
Invokes the bind utility, which prepares SQL statements stored in the bind file generated by the precompiler, and creates a package that is stored in the database.
Scope:
This command can be issued from any database partition in db2nodes.cfg. It updates the database catalogs on the catalog database partition. Its effects are visible to all database partitions.
Authorization:
One of the following:
v sysadm or dbadm authority
v BINDADD privilege if a package does not exist, and one of:
  IMPLICIT_SCHEMA authority on the database if the schema name of the package does not exist
  CREATEIN privilege on the schema if the schema name of the package exists
v ALTERIN privilege on the schema if the package exists
v BIND privilege on the package if it exists.
The user also needs all privileges required to compile any static SQL statements in the application. Privileges granted to groups are not used for authorization checking of static statements. If the user has sysadm authority, but not explicit privileges to complete the bind, the database manager grants explicit dbadm authority automatically.
Required connection:
Database. If implicit connect is enabled, a connection to the default database is established.
Command syntax: For DB2 for Windows and UNIX
BIND filename [options]

(The railroad syntax diagrams do not survive in plain text; the option keywords they showed are listed here. All options are described under Command parameters.)
For DB2 for Windows and UNIX: ACTION, BLOCKING (UNAMBIG, ALL, NO), CLIPKG, COLLECTION, DATETIME, DEGREE, DYNAMICRULES, EXPLAIN, EXPLSNAP, FEDERATED, FUNCPATH, GENERIC, GRANT, INSERT, ISOLATION (CS, RR, RS, UR), MESSAGES, OWNER, QUALIFIER, QUERYOPT, REOPT (NONE, ONCE, ALWAYS), SQLERROR (CHECK, CONTINUE, NOPACKAGE), SQLWARN, and STATICREADONLY.
For DRDA host servers: ACTION, BLOCKING, CCSIDG, CCSIDM, CCSIDS, CHARSUB, CLIPKG, CNULREQD, COLLECTION, DBPROTOCOL (DRDA, PRIVATE), DEC (15, 31), DECDEL, DYNAMICRULES, ENCODING, GENERIC, GRANT, IMMEDWRITE, INSERT, ISOLATION (CS, NC, RR, RS, UR), KEEPDYNAMIC, MESSAGES, OPTHINT, OS400NAMING, OWNER, PATH, QUALIFIER, RELEASE (COMMIT, DEALLOCATE), REOPT, SORTSEQ (JOBRUN, HEX), SQLERROR, STRDEL, TEXT, and VALIDATE (BIND, RUN).
Notes:
1. If the server does not support the DATETIME DEF option, it is mapped to DATETIME ISO.
2. The DEGREE option is only supported by DRDA Level 2 Application Servers.
3. DRDA defines the EXPLAIN option to have the value YES or NO. If the server does not support the EXPLAIN YES option, the value is mapped to EXPLAIN ALL.
Command parameters: filename Specifies the name of the bind file that was generated when the application program was precompiled, or a list file containing the names of several bind files. Bind files have the extension .bnd. The full path name can be specified. If a list file is specified, the @ character must be the first character of the list file name. The list file can contain several lines of bind file names. Bind files listed on the same line must be separated by plus (+) characters, but a + cannot appear in front of the first file listed on each line, or after the last bind file listed. For example,
/u/smith/sqllib/bnd/@all.lst
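As a sketch of the list-file format described above (the bind-file names and the path are invented), the following creates a two-line list file whose first line joins three bind files with plus characters:

```shell
# Invented bind-file names; '+' joins files on a line, with no leading
# or trailing '+'. The file would then be passed to BIND as @/tmp/all.lst.
cat > /tmp/all.lst <<'EOF'
first.bnd+second.bnd+third.bnd
fourth.bnd
EOF

cat /tmp/all.lst
```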
ACTION Indicates whether the package can be added or replaced. ADD Indicates that the named package does not exist, and that a new package is to be created. If the package already exists, execution stops, and a diagnostic error message is returned.
REPLACE Indicates that the existing package is to be replaced by a new one with the same package name and creator. This is the default value for the ACTION option. RETAIN Indicates whether BIND and EXECUTE authorities are to be preserved when a package is replaced. If ownership of
the package changes, the new owner grants the BIND and EXECUTE authority to the previous package owner.
NO Does not preserve BIND and EXECUTE authorities when a package is replaced. This value is not supported by DB2.
YES Preserves BIND and EXECUTE authorities when a package is replaced. This is the default value.
REPLVER version-id Replaces a specific version of a package. The version identifier specifies which version of the package is to be replaced. If the specified version does not exist, an error is returned. If the REPLVER option of REPLACE is not specified, and a package already exists that matches the package name, creator, and version of the package being bound, that package will be replaced; if not, a new package will be added. BLOCKING Specifies the type of row blocking for cursors. ALL Specifies to block for: v Read-only cursors v Cursors not specified as FOR UPDATE OF Ambiguous cursors are treated as read-only. NO Specifies not to block any cursors. Ambiguous cursors are treated as updatable.
UNAMBIG Specifies to block for: v Read-only cursors v Cursors not specified as FOR UPDATE OF Ambiguous cursors are treated as updatable. CCSIDG double-ccsid An integer specifying the coded character set identifier (CCSID) to be used for double byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements. This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified. CCSIDM mixed-ccsid An integer specifying the coded character set identifier (CCSID) to be used for mixed byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements. This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified. CCSIDS sbcs-ccsid An integer specifying the coded character set identifier (CCSID) to be used for single byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements.
This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified. CHARSUB Designates the default character sub-type that is to be used for column definitions in CREATE and ALTER TABLE SQL statements. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX. BIT Use the FOR BIT DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.
DEFAULT Use the target system defined default in all new character columns for which an explicit sub-type is not specified.
MIXED Use the FOR MIXED DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.
SBCS Use the FOR SBCS DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.
CLIPKG cli-packages An integer between 3 and 30 specifying the number of CLI large packages to be created when binding CLI bind files against a database.
CNULREQD This option is related to the LANGLEVEL precompile option, which is not supported by DRDA. It is valid only if the bind file is created from a C or a C++ application. This DRDA bind option is not supported by DB2 for Windows and UNIX.
NO The application was coded on the basis of the LANGLEVEL SAA1 precompile option with respect to the null terminator in C string host variables.
YES The application was coded on the basis of the LANGLEVEL MIA precompile option with respect to the null terminator in C string host variables.
COLLECTION schema-name Specifies a 30-character collection identifier for the package. If not specified, the authorization identifier for the user processing the package is used.
DATETIME Specifies the date and time format to be used.
DEF Use a date and time format associated with the territory code of the database.
EUR Use the IBM standard for Europe date and time format.
ISO Use the date and time format of the International Standards Organization.
JIS Use the date and time format of the Japanese Industrial Standard.
LOC Use the date and time format in local form associated with the territory code of the database.
USA Use the IBM standard for U.S. date and time format.
DBPROTOCOL Specifies what protocol to use when connecting to a remote site that is identified by a three-part name statement. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
DEC Specifies the maximum precision to be used in decimal arithmetic operations. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX. The DRDA server will use a system defined default value if this option is not specified.
15 15-digit precision is used in decimal arithmetic operations.
31 31-digit precision is used in decimal arithmetic operations.
DECDEL Designates whether a period (.) or a comma (,) will be used as the decimal point indicator in decimal and floating point literals. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX. The DRDA server will use a system defined default value if this option is not specified. COMMA Use a comma (,) as the decimal point indicator. PERIOD Use a period (.) as the decimal point indicator. DEGREE Specifies the degree of parallelism for the execution of static SQL statements in an SMP system. This option does not affect CREATE INDEX parallelism. 1 The execution of the statement will not use parallelism.
degree-of-parallelism Specifies the degree of parallelism with which the statement can be executed, a value between 2 and 32 767 (inclusive). ANY Specifies that the execution of the statement can involve parallelism using a degree determined by the database manager.
DYNAMICRULES Defines which rules apply to dynamic SQL at run time for the initial setting of the values used for authorization ID and for the implicit qualification of unqualified object references. RUN Specifies that the authorization ID of the user executing the package is to be used for authorization checking of dynamic SQL statements. The authorization ID will also be used as the default package qualifier for implicit qualification of unqualified object references within dynamic SQL statements. This is the default value.
BIND Specifies that all of the rules that apply to static SQL for authorization and qualification are to be used at run time. That is, the authorization ID of the package owner is to be used for authorization checking of dynamic SQL statements, and the default package qualifier is to be used for implicit qualification of unqualified object references within dynamic SQL statements. DEFINERUN If the package is used within a routine context, the authorization
ID of the routine definer is to be used for authorization checking and for implicit qualification of unqualified object references within dynamic SQL statements within the routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES RUN. DEFINEBIND If the package is used within a routine context, the authorization ID of the routine definer is to be used for authorization checking and for implicit qualification of unqualified object references within dynamic SQL statements within the routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES BIND. INVOKERUN If the package is used within a routine context, the current statement authorization ID in effect when the routine is invoked is to be used for authorization checking of dynamic SQL statements and for implicit qualification of unqualified object references within dynamic SQL statements within that routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES RUN. INVOKEBIND If the package is used within a routine context, the current statement authorization ID in effect when the routine is invoked is to be used for authorization checking of dynamic SQL statements and for implicit qualification of unqualified object references within dynamic SQL statements within that routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES BIND. Because dynamic SQL statements will be using the authorization ID of the package owner in a package exhibiting bind behavior, the binder of the package should not have any authorities granted to them that the user of the package should not receive. 
Similarly, when defining a routine that will exhibit define behavior, the definer of the routine should not have any authorities granted to them that the user of the package should not receive, since a dynamic statement will be using the authorization ID of the routine's definer. The following dynamically prepared SQL statements cannot be used within a package that was not bound with DYNAMICRULES RUN: GRANT, REVOKE, ALTER, CREATE, DROP, COMMENT ON, RENAME, SET INTEGRITY, and SET EVENT MONITOR STATE. ENCODING Specifies the encoding for all host variables in static statements in the plan or package. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
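The six DYNAMICRULES behaviors can be summarized by which authorization ID governs dynamic SQL in each context. This shell sketch (the function name is invented for illustration) encodes that mapping for a package used either inside a routine or as a standalone application:

```shell
# Invented helper: report which authorization ID is used for dynamic SQL
# under a given DYNAMICRULES option and usage context (routine|standalone).
dynrules_authid() {
  case "$1:$2" in
    RUN:*)  echo "ID of the user executing the package" ;;
    BIND:*) echo "ID of the package owner" ;;
    DEFINERUN:routine|DEFINEBIND:routine)
            echo "ID of the routine definer" ;;
    INVOKERUN:routine|INVOKEBIND:routine)
            echo "statement authorization ID at routine invocation" ;;
    DEFINERUN:standalone|INVOKERUN:standalone)
            echo "ID of the user executing the package" ;;  # falls back to RUN behavior
    DEFINEBIND:standalone|INVOKEBIND:standalone)
            echo "ID of the package owner" ;;               # falls back to BIND behavior
  esac
}

dynrules_authid DEFINERUN routine
dynrules_authid INVOKEBIND standalone
```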
EXPLAIN Stores information in the Explain tables about the access plans chosen for each SQL statement in the package. DRDA does not support the ALL value for this option.
NO Explain information will not be captured.
YES Explain tables will be populated with information about the chosen access plan at prep/bind time for static statements and at run time for incremental bind statements. If the package is to be used for a routine and the package contains incremental bind statements, then the routine must be defined as MODIFIES SQL DATA. If this is not done, incremental bind statements in the package will cause a run time error (SQLSTATE 42985).
REOPT Explain information for each reoptimizable incremental bind SQL statement is placed in the explain tables at run time. In addition, explain information is gathered for reoptimizable dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE register is set to NO. If the package is to be used for a routine, the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).
ALL Explain information for each eligible static SQL statement will be placed in the Explain tables at prep/bind time. Explain information for each eligible incremental bind SQL statement will be placed in the Explain tables at run time. In addition, Explain information will be gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE register is set to NO. If the package is to be used for a routine, the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985). This value for EXPLAIN is not supported by DRDA.
EXPLSNAP Stores Explain Snapshot information in the Explain tables. This DB2 precompile/bind option is not supported by DRDA.
NO An Explain Snapshot will not be captured.
YES An Explain Snapshot for each eligible static SQL statement will be placed in the Explain tables at prep/bind time for static statements and at run time for incremental bind statements. If the package is to be used for a routine and the package contains incremental bind statements, then the routine must be defined as MODIFIES SQL DATA or incremental bind statements in the package will cause a run time error (SQLSTATE 42985).
REOPT Explain snapshot information for each reoptimizable incremental bind SQL statement is placed in the explain tables at run time. In addition, explain snapshot information is gathered for
reoptimizable dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT register is set to NO. If the package is to be used for a routine, the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).
ALL An Explain Snapshot for each eligible static SQL statement will be placed in the Explain tables at prep/bind time. Explain snapshot information for each eligible incremental bind SQL statement will be placed in the Explain tables at run time. In addition, explain snapshot information will be gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT register is set to NO. If the package is to be used for a routine, then the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).
FEDERATED Specifies whether a static SQL statement in a package references a nickname or a federated view. If this option is not specified and a static SQL statement in the package references a nickname or a federated view, a warning is returned and the package is created. This option is not supported for DRDA.
NO A nickname or federated view is not referenced in the static SQL statements of the package. If a nickname or federated view is encountered in a static SQL statement during the prepare or bind phase of this package, an error is returned and the package is not created.
YES A nickname or federated view can be referenced in the static SQL statements of the package. If no nicknames or federated views are encountered in static SQL statements during the prepare or bind of the package, no errors or warnings are returned and the package is created.
FUNCPATH Specifies the function path to be used in resolving user-defined distinct types and functions in static SQL. If this option is not specified, the default function path is SYSIBM,SYSFUN,USER where USER is the value of the USER special register. schema-name An SQL identifier, either ordinary or delimited, which identifies a schema that exists at the application server. No validation that the schema exists is made at precompile or at bind time. The same schema cannot appear more than once in the function path. The number of schemas that can be specified is limited by the length of the resulting function path, which cannot exceed 254 bytes. The schema SYSIBM does not need to be explicitly specified; it is implicitly assumed to be the first schema if it is not included in the function path. GENERIC string Supports the binding of new options that are defined in the target database, but are not supported by DRDA. Do not use this option to pass
bind options that are defined in BIND or PRECOMPILE. This option can substantially improve dynamic SQL performance. The syntax is as follows:
generic "option1 value1 option2 value2 ..."
Each option and value must be separated by one or more blank spaces. For example, if the target DRDA database is DB2 Universal Database, Version 8, one could use:
generic "explsnap all queryopt 3 federated yes"
to bind each of the EXPLSNAP, QUERYOPT, and FEDERATED options. The maximum length of the string is 1023 bytes.
GRANT authid Grants EXECUTE and BIND privileges to a specified user name or group ID.
PUBLIC Grants EXECUTE and BIND privileges to PUBLIC.
GRANT_GROUP group-name Grants EXECUTE and BIND privileges to a specified group ID.
GRANT_USER user-name Grants EXECUTE and BIND privileges to a specified user name.
If more than one of the GRANT, GRANT_GROUP, and GRANT_USER options are specified, only the last option specified is executed.
INSERT Allows a program being precompiled or bound against a DB2 Enterprise Server Edition server to request that data inserts be buffered to increase performance.
BUF Specifies that inserts from an application should be buffered.
DEF Specifies that inserts from an application should not be buffered.
ISOLATION Determines how far a program bound to this package can be isolated from the effect of other executing programs.
CS Specifies Cursor Stability as the isolation level.
NC No Commit. Specifies that commitment control is not to be used. This isolation level is not supported by DB2 for Windows and UNIX.
RR Specifies Repeatable Read as the isolation level.
RS Specifies Read Stability as the isolation level. Read Stability ensures that the execution of SQL statements in the package is isolated from other application processes for rows read and changed by the application.
UR Specifies Uncommitted Read as the isolation level.
IMMEDWRITE Indicates whether immediate writes will be done for updates made to group buffer pool dependent pagesets or database partitions. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
Chapter 3. CLP Commands
KEEPDYNAMIC Specifies whether dynamic SQL statements are to be kept after commit points. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390. MESSAGES message-file Specifies the destination for warning, error, and completion status messages. A message file is created whether the bind is successful or not. If a message file name is not specified, the messages are written to standard output. If the complete path to the file is not specified, the current directory is used. If the name of an existing file is specified, the contents of the file are overwritten. OPTHINT Controls whether query optimization hints are used for static SQL. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390. OS400NAMING Specifies which naming option is to be used when accessing DB2 UDB for iSeries data. Supported by DB2 UDB for iSeries only. For a list of supported option values, refer to the documentation for DB2 for iSeries. Because of the slashes used as separators, a DB2 utility can still report a syntax error at execution time on certain SQL statements which use the iSeries system naming convention, even though the utility might have been precompiled or bound with the OS400NAMING SYSTEM option. For example, the Command Line Processor will report a syntax error on an SQL CALL statement if the iSeries system naming convention is used, whether or not it has been precompiled or bound using the OS400NAMING SYSTEM option. OWNER authorization-id Designates a 30-character authorization identifier for the package owner. The owner must have the privileges required to execute the SQL statements contained in the package. Only a user with SYSADM or DBADM authority can specify an authorization identifier other than the user ID. The default value is the primary authorization ID of the precompile/bind process. 
SYSIBM, SYSCAT, and SYSSTAT are not valid values for this option. PATH Specifies the function path to be used in resolving user-defined distinct types and functions in static SQL. If this option is not specified, the default function path is SYSIBM,SYSFUN,USER where USER is the value of the USER special register. schema-name An SQL identifier, either ordinary or delimited, which identifies a schema that exists at the application server. No validation that the schema exists is made at precompile or at bind time. QUALIFIER qualifier-name Provides a 30-character implicit qualifier for unqualified objects contained in the package. The default is the owner's authorization ID, whether or not owner is explicitly specified. QUERYOPT optimization-level Indicates the desired level of optimization for all static SQL statements contained in the package. The default value is 5. The SET CURRENT
QUERY OPTIMIZATION statement describes the complete range of optimization levels available. This DB2 precompile/bind option is not supported by DRDA.
RELEASE Indicates whether resources are released at each COMMIT point, or when the application terminates. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX.
COMMIT Release resources at each COMMIT point. Used for dynamic SQL statements.
DEALLOCATE Release resources only when the application terminates.
SORTSEQ Specifies which sort sequence table to use on the iSeries system. Supported by DB2 UDB for iSeries only. For a list of supported option values, refer to the documentation for DB2 for iSeries.
SQLERROR Indicates whether to create a package or a bind file if an error is encountered.
CHECK Specifies that the target system performs all syntax and semantic checks on the SQL statements being bound. A package will not be created as part of this process. If, while binding, an existing package with the same name and version is encountered, the existing package is neither dropped nor replaced even if ACTION REPLACE was specified.
CONTINUE Creates a package, even if errors occur when binding SQL statements. Those statements that failed to bind for authorization or existence reasons can be incrementally bound at execution time if VALIDATE RUN is also specified. Any attempt to execute them at run time generates an error (SQLCODE -525, SQLSTATE 51015).
NOPACKAGE A package or a bind file is not created if an error is encountered.
REOPT Specifies whether to have DB2 determine an access path at run time using values for host variables, parameter markers, and special registers. Valid values are:
NONE The access path for a given SQL statement containing host variables, parameter markers, or special registers will not be optimized using real values. The default estimates for these variables are used, and the plan is cached and will be used subsequently. This is the default value.
ONCE The access path for a given SQL statement will be optimized using the real values of the host variables, parameter markers, or special registers when the query is first executed. This plan is cached and used subsequently.
ALWAYS The access path for a given SQL statement will always be compiled
Chapter 3. CLP Commands
and reoptimized using the values of the host variables, parameter markers, or special registers that are known each time the query is executed.

REOPT VARS / NOREOPT VARS
These options have been replaced by REOPT ALWAYS and REOPT NONE; however, they are still supported for back-level compatibility. Specifies whether to have DB2 determine an access path at run time using values for host variables, parameter markers, and special registers. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.

SQLWARN
Indicates whether warnings will be returned from the compilation of dynamic SQL statements (via PREPARE or EXECUTE IMMEDIATE), or from describe processing (via PREPARE...INTO or DESCRIBE).
NO Warnings will not be returned from the SQL compiler.
YES Warnings will be returned from the SQL compiler.
SQLCODE +236, +237 and +238 are exceptions. They are returned regardless of the SQLWARN option value.

STATICREADONLY
Determines whether static cursors will be treated as being READ ONLY. This DB2 precompile/bind option is not supported by DRDA.
NO All static cursors will take on the attributes as would normally be generated given the statement text and the setting of the LANGLEVEL precompile option. This is the default value.
YES Any static cursor that does not contain the FOR UPDATE or FOR READ ONLY clause will be considered READ ONLY.
STRDEL
Designates whether an apostrophe (') or double quotation marks (") will be used as the string delimiter within SQL statements. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX. The DRDA server will use a system-defined default value if this option is not specified.
APOSTROPHE Use an apostrophe (') as the string delimiter.
QUOTE Use double quotation marks (") as the string delimiter.

TEXT label
The description of a package. Maximum length is 255 characters. The default value is blanks. This DRDA precompile/bind option is not supported by DB2 for Windows and UNIX.

TRANSFORM GROUP
Specifies the transform group name to be used by static SQL statements for exchanging user-defined structured type values with host programs. This transform group is not used for dynamic SQL statements or for the exchange of parameters and results with external functions or methods. This option is not supported by DRDA.
groupname An SQL identifier of up to 18 characters in length. A group name
cannot include a qualifier prefix and cannot begin with the prefix SYS, since this is reserved for database use. In a static SQL statement that interacts with host variables, the name of the transform group to be used for exchanging values of a structured type is as follows:
v The group name in the TRANSFORM GROUP bind option, if any
v The group name in the TRANSFORM GROUP prep option as specified at the original precompilation time, if any
v The DB2_PROGRAM group, if a transform exists for the given type whose group name is DB2_PROGRAM
v No transform group is used if none of the above conditions exist.
The following errors are possible during the bind of a static SQL statement:
v SQLCODE yyyyy, SQLSTATE xxxxx: A transform is needed, but no static transform group has been selected.
v SQLCODE yyyyy, SQLSTATE xxxxx: The selected transform group does not include a necessary transform (TO SQL for input variables, FROM SQL for output variables) for the data type that needs to be exchanged.
v SQLCODE yyyyy, SQLSTATE xxxxx: The result type of the FROM SQL transform is not compatible with the type of the output variable, or the parameter type of the TO SQL transform is not compatible with the type of the input variable.
In these error messages, yyyyy is replaced by the SQL error code, and xxxxx by the SQL state code.

VALIDATE
Determines when the database manager checks for authorization errors and object-not-found errors. The package owner authorization ID is used for validity checking.
BIND Validation is performed at precompile/bind time. If all objects do not exist, or all authority is not held, error messages are produced. If SQLERROR CONTINUE is specified, a package or bind file is produced despite the error messages, but the statements in error are not executable.
RUN Validation is attempted at bind time. If all objects exist, and all authority is held, no further checking is performed at execution time.
If all objects do not exist, or all authority is not held at precompile/bind time, warning messages are produced, and the package is successfully bound, regardless of the SQLERROR CONTINUE option setting. However, authority checking and existence checking for SQL statements that failed these checks during the precompile/bind process can be redone at execution time.

Examples:

The following example binds myapp.bnd (the bind file generated when the myapp.sqc program was precompiled) to the database to which a connection has been established:
db2 bind myapp.bnd
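Several of the options described above can be combined on one invocation. The following sketch is illustrative only; the bind file and message file names are assumptions:

db2 bind myapp.bnd sqlerror continue validate run reopt once messages myapp.msg

With SQLERROR CONTINUE and VALIDATE RUN, statements that fail authorization or existence checks at bind time are marked for incremental bind at execution time rather than causing the bind to fail.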
Any messages resulting from the bind process are sent to standard output.

Usage notes:

Binding a package using the REOPT option with the ONCE or ALWAYS value specified might change static and dynamic statement compilation and performance.

Binding can be done as part of the precompile process for an application program source file, or as a separate step at a later time. Use BIND when binding is performed as a separate process.

The name used to create the package is stored in the bind file, and is based on the source file name from which it was generated (existing paths or extensions are discarded). For example, a precompiled source file called myapp.sql generates a default bind file called myapp.bnd and a default package name of MYAPP. However, the bind file name and the package name can be overridden at precompile time by using the BINDFILE and PACKAGE options.

Binding a package with a schema name that does not already exist results in the implicit creation of that schema. The schema owner is SYSIBM. The CREATEIN privilege on the schema is granted to PUBLIC.

BIND executes under the transaction that was started. After performing the bind, BIND issues a COMMIT or a ROLLBACK to terminate the current transaction and start another one.

Binding stops if a fatal error or more than 100 errors occur. If a fatal error occurs, the utility stops binding, attempts to close all files, and discards the package.

When a package exhibits bind behavior, the following will be true:
1. The implicit or explicit value of the BIND option OWNER will be used for authorization checking of dynamic SQL statements.
2. The implicit or explicit value of the BIND option QUALIFIER will be used as the implicit qualifier for qualification of unqualified objects within dynamic SQL statements.
3. The value of the special register CURRENT SCHEMA has no effect on qualification.
In the event that multiple packages are referenced during a single connection, all dynamic SQL statements prepared by those packages will exhibit the behavior specified by the DYNAMICRULES option for that specific package and the environment in which they are used.

Parameters displayed in the SQL0020W message are correctly noted as errors, and will be ignored as indicated by the message.

If an SQL statement is found to be in error and the BIND option SQLERROR CONTINUE was specified, the statement will be marked as invalid. In order to change the state of the SQL statement, another BIND must be issued. Implicit and explicit rebind will not change the state of an invalid statement. In a package bound with VALIDATE RUN, a statement can change from static to incremental
bind, or from incremental bind to static, across implicit and explicit rebinds, depending on whether or not object existence or authority problems exist during the rebind.

Related concepts:
v Isolation levels in SQL Reference, Volume 1
v Authorization considerations for dynamic SQL in Developing SQL and External Routines
v Effect of DYNAMICRULES bind option on dynamic SQL in Developing Embedded SQL Applications
v Performance improvements when using REOPT option of the BIND command in Developing Embedded SQL Applications

Related reference:
v PRECOMPILE on page 583
v SET CURRENT QUERY OPTIMIZATION statement in SQL Reference, Volume 2
v Datetime values in SQL Reference, Volume 1
v Special registers in SQL Reference, Volume 1
v DB2 CLI bind files and package names in Call Level Interface Guide and Reference, Volume 1
v sqlabndx API - Bind application program to create a package in Administrative API Reference
CATALOG DATABASE
Stores database location information in the system database directory. The database can be located either on the local workstation or on a remote node.

Scope:

In a partitioned database environment, when cataloging a local database into the system database directory, this command must be issued from a database partition on the server where the database resides.

Authorization:

One of the following:
v sysadm
v sysctrl

Required connection:

None. Directory operations affect the local directory only.

Command syntax:
CATALOG DATABASE DB database-name AS alias ON path drive AT NODE nodename
AUTHENTICATION
principalname
WITH comment-string
Command parameters:

DATABASE database-name
Specifies the name of the database to catalog.

AS alias
Specifies an alias as an alternate name for the database being cataloged. If an alias is not specified, the database manager uses database-name as the alias.

ON path/drive
Specifies the path on which the database being cataloged resides. On Windows operating systems, the letter of the drive on which the database resides can be specified instead (if the database was created on a drive, not on a specific path).

AT NODE nodename
Specifies the name of the node where the database being cataloged resides. This name should match the name of an entry in the node directory. If the
node name specified does not exist in the node directory, a warning is returned, but the database is cataloged in the system database directory. The node name should be cataloged in the node directory if a connection to the cataloged database is desired.

AUTHENTICATION
The authentication value is stored for remote databases (it appears in the output from the LIST DATABASE DIRECTORY command) but it is not stored for local databases. Specifying an authentication type can result in a performance benefit.
SERVER Specifies that authentication takes place on the node containing the target database.
CLIENT Specifies that authentication takes place on the node where the application is invoked.
SERVER_ENCRYPT Specifies that authentication takes place on the node containing the target database, and that passwords are encrypted at the source. Passwords are decrypted at the target, as specified by the authentication type cataloged at the source.
KERBEROS Specifies that authentication takes place using the Kerberos security mechanism. When authentication is KERBEROS, only SECURITY=NONE is supported.
TARGET PRINCIPAL principalname Fully qualified Kerberos principal name for the target server; that is, the fully qualified Kerberos principal of the DB2 instance owner in the form name/instance@REALM. For Windows 2000, Windows XP, and Windows Server 2003, this is the logon account of the DB2 server service in the form userid@DOMAIN, [email protected], or domain\userid.
DATA_ENCRYPT Specifies that authentication takes place on the node containing the target database, and that connections must use data encryption.
GSSPLUGIN Specifies that authentication takes place using an external GSS API-based plug-in security mechanism. When authentication is GSSPLUGIN, only SECURITY=NONE is supported.

WITH comment-string
Describes the database or the database entry in the system database directory. The maximum length of a comment string is 30 characters.
A carriage return or a line feed character is not permitted. The comment text must be enclosed by double quotation marks.

Examples:
db2 catalog database sample on /databases/sample with "Sample Database"
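A remote database can be cataloged in a similar way; in the following sketch the node name db2node and the alias remsamp are assumptions chosen for illustration:

db2 catalog database sample as remsamp at node db2node authentication server_encrypt with "Remote sample database"

The node db2node would itself need to be cataloged in the node directory (for example, with CATALOG TCPIP NODE) before a connection to remsamp can succeed.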
Usage notes:
Use CATALOG DATABASE to catalog databases located on local or remote nodes, recatalog databases that were uncataloged previously, or maintain multiple aliases for one database (regardless of database location).

DB2 automatically catalogs databases when they are created. It catalogs an entry for the database in the local database directory and another entry in the system database directory. If the database is created from a remote client (or a client that is executing from a different instance on the same machine), an entry is also made in the system database directory at the client instance.

If neither path nor node name is specified, the database is assumed to be local, and the location of the database is assumed to be that specified in the database manager configuration parameter dftdbpath.

Databases on the same node as the database manager instance are cataloged as indirect entries. Databases on other nodes are cataloged as remote entries.

CATALOG DATABASE automatically creates a system database directory if one does not exist. The system database directory is stored on the path that contains the database manager instance that is being used, and is maintained outside of the database.

List the contents of the system database directory using the LIST DATABASE DIRECTORY command. To list the contents of the local database directory, use LIST DATABASE DIRECTORY ON path, where path is where the database was created.

If directory caching is enabled, database and node directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager.
To refresh the directory cache for another application, stop and then restart that application.

Related tasks:
v Cataloging a database in Administration Guide: Implementation

Related reference:
v sqlecadb API - Catalog a database in the system database directory in Administrative API Reference
v GET DATABASE MANAGER CONFIGURATION on page 463
v LIST DATABASE DIRECTORY on page 523
v TERMINATE on page 744
v UNCATALOG DATABASE on page 745
AR library-name
PARMS parameter-string
WITH comment-string
Command parameters:

DATABASE database-name
Specifies the alias of the target database to catalog. This name should match the name of an entry in the database directory that is associated with the remote node.

AS target-database-name
Specifies the name of the target host or iSeries database to catalog.

AR library-name
Specifies the name of the Application Requester library that is loaded and used to access a remote database listed in the DCS directory. If using the DB2 Connect AR, do not specify a library name. The default value causes DB2 Connect to be invoked. If not using DB2 Connect, specify the library name of the AR, and place that library on the same path as the database manager libraries. On Windows operating systems, the path is drive:\sqllib\bin. On UNIX-based systems, the path is $HOME/sqllib/lib of the instance owner.

PARMS parameter-string
Specifies a parameter string that is to be passed to the AR when it is invoked. The parameter string must be enclosed by double quotation marks. For more information on the parameter string, refer to the DCS directory values topic in the Related concepts section.
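As an illustration, a host database might be cataloged in the DCS directory as follows (the alias db1 and the target database name dsn_db_1 are assumptions):

db2 catalog dcs database db1 as dsn_db_1

The alias db1 would then also be cataloged as a remote database in the system database directory, as noted in the usage notes below.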
Usage notes:

The DB2 Connect program provides connections to DRDA Application Servers such as:
v DB2 for OS/390 or z/OS databases on System/370 and System/390 architecture host computers.
v DB2 for VM and VSE databases on System/370 and System/390 architecture host computers.
v iSeries databases on Application System/400 (AS/400) and iSeries computers.

The database manager creates a Database Connection Services directory if one does not exist. This directory is stored on the path that contains the database manager instance that is being used. The DCS directory is maintained outside of the database. The database must also be cataloged as a remote database in the system database directory.

List the contents of the DCS directory using the LIST DCS DIRECTORY command.

If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application.

Related concepts:
v DCS directory values in DB2 Connect User's Guide

Related reference:
v sqlegdad API - Catalog a database in the database connection services (DCS) directory in Administrative API Reference
v GET DATABASE MANAGER CONFIGURATION on page 463
v TERMINATE on page 744
v UNCATALOG DCS DATABASE on page 747
v LIST DCS DIRECTORY on page 531
AT NODE
nodename
GWNODE gateway-node
PARMS parameter-string
AR library-name
AUTHENTICATION
SERVER CLIENT SERVER_ENCRYPT DCS_ENCRYPT DCS KERBEROS TARGET PRINCIPAL DATA_ENCRYPT GSSPLUGIN
principalname
WITH comments
Command parameters:

DATABASE database-name
Specifies the name of the database to catalog.

AS alias
Specifies an alias as an alternate name for the database being cataloged. If an alias is not specified, the database name is used as the alias.

AT NODE nodename
Specifies the LDAP node name for the database server on which the database resides. This parameter must be specified when registering a database on a remote server.

GWNODE gateway-node
Specifies the LDAP node name for the gateway server.

PARMS parameter-string
Specifies a parameter string that is passed to the Application Requester (AR) when accessing DCS databases. The change password sym_dest_name
KERBEROS Specifies that authentication takes place using the Kerberos security mechanism. When authentication is KERBEROS, only SECURITY=NONE is supported.
TARGET PRINCIPAL principalname Fully qualified Kerberos principal name for the target server; that is, the logon account of the DB2 server service in the form of [email protected] or domain\userid.
DATA_ENCRYPT Specifies that authentication takes place on the node containing the target database, and that connections must use data encryption.
GSSPLUGIN Specifies that authentication takes place using an external GSS API-based plug-in security mechanism. When authentication is GSSPLUGIN, only SECURITY=NONE is supported.
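For example, assuming an LDAP node named ldapnode has already been registered, a database might be cataloged in LDAP as follows (the names are illustrative):

db2 catalog ldap database sample as ldapsample at node ldapnode with "LDAP-cataloged sample"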
Command parameters:

NODE nodename
Specifies the LDAP node name of the DB2 server.

AS nodealias
Specifies a new alias name for the LDAP node entry.

USER username
Specifies the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to create the object in the LDAP directory. If the user's LDAP DN is not specified, the credentials of the current logon user will be used.

PASSWORD password
Account password.

Usage notes:

The CATALOG LDAP NODE command is used to specify a different alias name for the node that represents the DB2 server.

Related concepts:
v Lightweight Directory Access Protocol (LDAP) overview in Administration Guide: Implementation

Related tasks:
v Registration of DB2 servers after installation in Administration Guide: Implementation

Related reference:
v db2LdapCatalogNode API - Provide an alias for node name in LDAP server in Administrative API Reference
v CATALOG LDAP DATABASE on page 377
v UNCATALOG LDAP DATABASE on page 749
v UNCATALOG LDAP NODE on page 751
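As an example of the command described above (the node name, alias, and LDAP user DN shown are assumptions):

db2 catalog ldap node ldapnode as tornode user "cn=db2admin,o=example" password mypasswd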
SYSTEM system-name
OSTYPE operating-system-type
WITH comment-string
Command parameters:

ADMIN Specifies that a local administration server node is to be cataloged.

NODE nodename
A local alias for the node to be cataloged. This is an arbitrary name on the user's workstation, used to identify the node. It should be a meaningful name to make it easier to remember. The name must conform to database manager naming conventions.

INSTANCE instancename
Name of the local instance to be accessed.

SYSTEM system-name
Specifies the DB2 system name that is used to identify the server machine.

OSTYPE operating-system-type
Specifies the operating system type of the server machine. Valid values are: AIX, WIN, HPUX, SUN, OS390, OS400, VM, VSE, SNI, SCO, LINUX and DYNIX.

WITH comment-string
Describes the node entry in the node directory. Any comment that helps to describe the node can be entered. Maximum length is 30 characters. A carriage return or a line feed character is not permitted. The comment text must be enclosed by single or double quotation marks.

Examples:
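A local node might be cataloged as follows (the node, instance, and system names are assumptions chosen for illustration):

db2 catalog local node mynode instance db2inst2 system mysystem ostype aix with "Local node for db2inst2"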
Usage notes:

If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use TERMINATE. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application.

Related reference:
v sqlectnd API - Catalog an entry in the node directory in Administrative API Reference
v GET DATABASE MANAGER CONFIGURATION on page 463
v TERMINATE on page 744
OSTYPE operating-system-type
WITH comment-string
Command parameters:

ADMIN Specifies that an NPIPE administration server node is to be cataloged.

NODE nodename
A local alias for the node to be cataloged. This is an arbitrary name on the user's workstation, used to identify the node. It should be a meaningful name to make it easier to remember. The name must conform to database manager naming conventions.

REMOTE computername
The computer name of the node on which the target database resides. Maximum length is 15 characters.

INSTANCE instancename
Name of the server instance on which the target database resides. Identical to the name of the remote named pipe, which is used to communicate with the remote node.

SYSTEM system-name
Specifies the DB2 system name that is used to identify the server machine.

OSTYPE operating-system-type
Specifies the operating system type of the server machine. Valid values are: AIX, WIN, HPUX, SUN, OS390, OS400, VM, VSE, SNI, SCO, and LINUX.

WITH comment-string
Describes the node entry in the node directory. Any comment that helps to describe the node can be entered. Maximum length is 30 characters. A carriage return or a line feed character is not permitted.
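A named-pipe node might be cataloged as follows (the computer and instance names are assumptions):

db2 catalog npipe node db2np1 remote nphost instance db2inst1 with "NPIPE node on nphost"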
Usage notes:

The database manager creates the node directory when the first node is cataloged (that is, when the first CATALOG...NODE command is issued). On a Windows client, it stores and maintains the node directory in the instance subdirectory where the client is installed. On an AIX client, it creates the node directory in the DB2 installation directory.

List the contents of the local node directory using the LIST NODE DIRECTORY command.

If directory caching is enabled (see the configuration parameter dir_cache in the GET DATABASE MANAGER CONFIGURATION command), database, node, and DCS directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application.

Related reference:
v sqlectnd API - Catalog an entry in the node directory in Administrative API Reference
v GET DATABASE MANAGER CONFIGURATION on page 463
v LIST NODE DIRECTORY on page 541
v TERMINATE on page 744
Command parameters:

USER Catalog a user data source. This is the default if no keyword is specified.

SYSTEM Catalog a system data source.

ODBC DATA SOURCE data-source-name
Specifies the name of the data source to be cataloged. Maximum length is 32 characters.

Related reference:
v LIST ODBC DATA SOURCES on page 544
v UNCATALOG ODBC DATA SOURCE on page 753
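For example, to catalog a user data source (the data source name SampleDSN is an assumption):

db2 catalog user odbc data source SampleDSN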
SERVER
SECURITY SOCKS
REMOTE_INSTANCE instance-name
SYSTEM system-name
OSTYPE operating-system-type
WITH comment-string
Command parameters:

ADMIN Specifies that a TCP/IP administration server node is to be cataloged. This parameter cannot be specified if the SECURITY SOCKS parameter is specified.

NODE nodename
A local alias for the node to be cataloged. This is an arbitrary name on the user's workstation, used to identify the node. It should be a meaningful name to make it easier to remember. The name must conform to database manager naming conventions.

REMOTE hostname/IPv4 address/IPv6 address
The hostname or the IP address of the node where the target database resides. The IP address can be an IPv4 or IPv6 address. The hostname is the name of the node that is known to the TCP/IP network. The maximum length of the hostname is 255 characters.

SERVER service-name/port number
Specifies the service name or the port number of the server database manager instance. The maximum length is 14 characters. This parameter is case sensitive. If a service name is specified, the services file on the client is used to map the service name to a port number. A service name is specified in the server's database manager configuration file, and the services file on the
To specify an IPv4 address using the CATALOG TCPIP4 NODE command, issue:
db2 catalog tcpip4 node db2tcp2 remote 192.0.32.67 server db2inst1 with "Look up IPv4 address from 192.0.32.67"
This example specifies an IPv4 address. You should not specify an IPv6 address in the CATALOG TCPIP4 NODE command. The catalog will not fail if you do, but a subsequent attach or connect will fail because an invalid address was specified during cataloging. To specify an IPv6 address using the CATALOG TCPIP6 NODE command, issue:
db2 catalog tcpip6 node db2tcp3 1080:0:0:0:8:800:200C:417A server 50000 with "Look up IPv6 address from 1080:0:0:0:8:800:200C:417A"
This example specifies an IPv6 address and a port number for SERVER. You should not specify an IPv4 address in the CATALOG TCPIP6 NODE command. The catalog will not fail if you do, but a subsequent attach or connect will fail because an invalid address was specified during cataloging.

Usage notes:

The database manager creates the node directory when the first node is cataloged (that is, when the first CATALOG...NODE command is issued). On a Windows client, it stores and maintains the node directory in the instance subdirectory where the client is installed. On an AIX client, it creates the node directory in the DB2 installation directory.

List the contents of the local node directory using the LIST NODE DIRECTORY command.

If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application.

Related reference:
v sqlectnd API - Catalog an entry in the node directory in Administrative API Reference
v GET DATABASE MANAGER CONFIGURATION on page 463
v LIST NODE DIRECTORY on page 541
v TERMINATE on page 744
WITH comment-string
Command parameters:

DATABASE database-alias
Specifies the alias of the database whose comment is to be changed. To change the comment in the system database directory, specify the alias for the database. To change the comment in the local database directory, specify the path where the database resides (with the path parameter), and enter the name (not the alias) of the database.

ON path/drive
Specifies the path on which the database resides, and changes the comment in the local database directory. If a path is not specified, the database comment for the entry in the system database directory is changed. On Windows operating systems, the letter of the drive on which the database resides can be specified instead (if the database was created on a drive, not on a specific path).

WITH comment-string
Describes the entry in the system database directory or the local database directory. Any comment that helps to describe the cataloged database can be entered. The maximum length of a comment string is 30 characters. A carriage return or a line feed character is not permitted. The comment text must be enclosed by double quotation marks.

Examples:

The following example changes the text in the system database directory comment for the SAMPLE database from "Test 2 - Holding" to "Test 2 - Add employee inf rows":
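The command would be similar to the following (reconstructed from the description above; the comment text is taken from that sentence):

db2 change database sample comment with "Test 2 - Add employee inf rows"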
Usage notes:

New comment text replaces existing text. To append information, enter the old comment text, followed by the new text.

Only the comment for an entry associated with the database alias is modified. Other entries with the same database name, but with different aliases, are not affected.

If the path is specified, the database alias must be cataloged in the local database directory. If the path is not specified, the database alias must be cataloged in the system database directory.

Related reference:
v sqledcgd API - Change a database comment in the system or local database directory in Administrative API Reference
v CREATE DATABASE on page 395
CHANGE
SQLISL ISOLATION
TO
Command parameters:

TO
CS Specifies cursor stability as the isolation level.
NC Specifies no commit as the isolation level. Not supported by DB2.
RR Specifies repeatable read as the isolation level.
RS Specifies read stability as the isolation level.
UR Specifies uncommitted read as the isolation level.

Usage notes:

DB2 uses isolation levels to maintain data integrity in a database. The isolation level defines the degree to which an application process is isolated (shielded) from changes made by other concurrently executing application processes.

If a selected isolation level is not supported by a database, it is automatically escalated to a supported level at connect time.

Isolation level changes are not permitted while connected to a database with a type 1 connection. The back-end process must be terminated before the isolation level can be changed:

db2 terminate
db2 change isolation to ur
db2 connect to sample
Changes are permitted using a type 2 connection, but should be made with caution, because the changes will apply to every connection made from the same command line processor back-end process. The user assumes responsibility for remembering which isolation level applies to which connected database. In the following example, a user is in DB2 interactive mode following creation of the SAMPLE database:
An SQL0514N error occurs because c1 is not in a prepared state for this isolation level.

   change isolation to cs
   set connection sample2
   fetch c1 for 2 rows

An SQL0514N error occurs because c1 is not in a prepared state for this database.

   declare c1 cursor for select division from org

A DB21029E error occurs because cursor c1 has already been declared and opened.

   set connection sample
   fetch c1 for 2 rows

This works because the original database (SAMPLE) was used with the original isolation level (CS).

Related concepts:
v "Isolation levels" in SQL Reference, Volume 1

Related reference:
v "SET CLIENT" on page 716
v "QUERY CLIENT" on page 611
v "Change Isolation Level" in Administrative API Reference
COMPLETE XMLSCHEMA
This command completes the process of registering an XML schema in the XML schema repository (XSR).

Authorization:
v The user ID must be the owner of the XSR object as recorded in the catalog view SYSCAT.XSROBJECTS.

Required connection:

Database

Command syntax:
   COMPLETE XMLSCHEMA relational-identifier [WITH schema-properties-URI] [ENABLE DECOMPOSITION]
Description:

relational-identifier
   Specifies the relational name of an XML schema previously registered with the REGISTER XMLSCHEMA command. The relational name can be specified as a two-part SQL identifier, consisting of the SQL schema and the XML schema name, having the following format: SQLschema.name. The default SQL schema, as defined in the CURRENT SCHEMA special register, is used if no schema is specified.

WITH schema-properties-URI
   Specifies the uniform resource identifier (URI) of a properties document for the XML schema. Only a local file, specified by a file scheme URI, is supported. A schema property document can only be specified during the completion stage of XML schema registration.

ENABLE DECOMPOSITION
   Indicates that the schema can be used for decomposing XML instance documents.

Example:
COMPLETE XMLSCHEMA user1.POschema WITH file:///c:/TEMP/schemaProp.xml
Usage notes:

An XML schema cannot be referenced or used for validation or annotation until the XML schema registration process has been completed. This command completes the XML schema registration process for an XML schema that was begun with the REGISTER XMLSCHEMA command.

Related reference:
v "ADD XMLSCHEMA DOCUMENT" on page 339
v "REGISTER XMLSCHEMA" on page 640
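The usage notes above describe a two-step registration. A minimal end-to-end sketch, assuming a hypothetical schema URI, source file, and relational identifier (none of these values is taken from this manual):

```sql
-- Step 1: begin registration; the XSR object is incomplete at this point
-- (schema URI and file path are hypothetical)
REGISTER XMLSCHEMA http://posample.org/PO FROM file:///c:/TEMP/PO.xsd AS user1.POschema

-- Step 2: complete the registration so the schema can be used for validation;
-- the properties document and ENABLE DECOMPOSITION are optional
COMPLETE XMLSCHEMA user1.POschema WITH file:///c:/TEMP/schemaProp.xml ENABLE DECOMPOSITION
```

Between the two steps, additional schema documents can be attached with the ADD XMLSCHEMA DOCUMENT command listed under Related reference.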
CREATE DATABASE
The CREATE DATABASE command initializes a new database with an optional user-defined collating sequence, creates the three initial table spaces, creates the system tables, and allocates the recovery log file.

When you initialize a new database, the AUTOCONFIGURE command is issued by default: the Configuration Advisor runs automatically, so the database configuration parameters are tuned for you according to your system resources. In addition, Automated Runstats is enabled. To prevent the Configuration Advisor from running at database creation, refer to the db2_enable_autoconfig_default registry variable. To disable Automated Runstats, refer to the auto_runstats database configuration parameter.

Adaptive Self Tuning Memory is also enabled by default for single-partition databases. To disable it by default, refer to the self_tuning_mem database configuration parameter (see the Related reference section). For multi-partition databases, Adaptive Self Tuning Memory is disabled by default.

To make use of XML features, the code set must be set to UTF-8 through the USING CODESET option of this command. In a future release of the DB2 database system, the default code set will be changed to UTF-8 when creating a database. If a particular code set and territory is needed for a database, specify the desired code set and territory in the CREATE DATABASE command.

This command is not valid on a client.

Scope:

In a partitioned database environment, this command affects all database partitions that are listed in the db2nodes.cfg file. The database partition from which this command is issued becomes the catalog database partition for the new database.

Authorization:

You must have one of the following:
v sysadm
v sysctrl

Required connection:

Instance. To create a database at another (remote) node, you must first attach to that node.
A database connection is temporarily established by this command during processing. Command syntax:
   CREATE {DATABASE | DB} database-name
      [AT DBPARTITIONNUM | Create Database options]
Create Database options:
   [AUTOMATIC STORAGE {YES | NO}]
   [ON path-or-drive [, path-or-drive]... [DBPATH ON path-or-drive]]
   [ALIAS database-alias]
   [USING CODESET codeset [TERRITORY territory]]
   [COLLATE USING {SYSTEM | COMPATIBILITY | IDENTITY | IDENTITY_16BIT |
                   UCA400_NO | UCA400_LSK | UCA400_LTH | NLSCHAR}]
   [PAGESIZE {4096 | integer [K]}]
   [NUMSEGS numsegs]
   [DFT_EXTENT_SZ dft_extentsize]
   [RESTRICTIVE]
   [CATALOG TABLESPACE tblspace-defn]
   [USER TABLESPACE tblspace-defn]
   [TEMPORARY TABLESPACE tblspace-defn]
   [WITH comment-string]
   [AUTOCONFIGURE [USING input-keyword param-value [input-keyword param-value]...]
                  [APPLY {DB ONLY | DB AND DBM | NONE}]]

tblspace-defn:

   MANAGED BY
      {SYSTEM USING ('container-string' [, 'container-string']...) |
       DATABASE USING ({FILE | DEVICE} 'container-string' number-of-pages
                       [, {FILE | DEVICE} 'container-string' number-of-pages]...) |
       AUTOMATIC STORAGE}
   [EXTENTSIZE number-of-pages]
   [PREFETCHSIZE number-of-pages]
   [OVERHEAD number-of-milliseconds]
   [TRANSFERRATE number-of-milliseconds]
   [AUTORESIZE {NO | YES}]
   [INITIALSIZE integer {K | M | G}]
   [INCREASESIZE integer {PERCENT | K | M | G}]
   [MAXSIZE {NONE | integer {K | M | G}}]
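As an illustration of the tblspace-defn fragment, the following sketch creates a database whose initial user table space is database-managed with two file containers; the database name, paths, and sizes are hypothetical:

```sql
-- A DMS user table space with two file containers (all values illustrative)
CREATE DATABASE SALESDB AUTOMATIC STORAGE NO ON /dbdata
   USER TABLESPACE MANAGED BY DATABASE
      USING (FILE '/dbdata/usr_c1' 5000, FILE '/dbdata/usr_c2' 5000)
      EXTENTSIZE 32 PREFETCHSIZE 64
   WITH "sales test database"
```

Container sizes are given in pages, so with the default 4 096-byte page size each 5000-page file occupies roughly 20 MB.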
Notes:

1. The combination of the code set and territory values must be valid.
2. Not all collating sequences are valid with every code set and territory combination.
3. The table space definitions specified on CREATE DATABASE apply to all database partitions on which the database is being created. They cannot be specified separately for each database partition. If the table space definitions are to be created differently on particular database partitions, the CREATE TABLESPACE statement must be used. When defining containers for table spaces, $N can be used. $N will be replaced by the database partition number when the container is actually created. This is required if the user wants to specify containers in a multiple logical partition database.
4. The AUTOCONFIGURE option requires sysadm authority.

Command parameters:

DATABASE database-name
   A name to be assigned to the new database. This must be a unique name that differentiates the database from any other database in either the local database directory or the system database directory. The name must conform to naming conventions for databases. Specifically, the name must not contain any space characters.

AT DBPARTITIONNUM
   Specifies that the database is to be created only on the database partition that issues the command. You do not specify this option when you create a new database. You can use it to recreate a database partition that you dropped because it was damaged. After you use the CREATE DATABASE command with the AT DBPARTITIONNUM option, the database at this database partition is in the restore-pending state. You must immediately restore the database on this node. This parameter is not intended for general use. For example, it should be used with the RESTORE DATABASE command if the database partition at a node was damaged and must be re-created. Improper use of this parameter can cause inconsistencies in the system, so it should only be used with caution.
If this parameter is used to recreate a database partition that was dropped (because it was damaged), the database at this database partition will be in the restore-pending state. After recreating the database partition, the database must immediately be restored on this database partition. AUTOMATIC STORAGE NO|YES Specifies that automatic storage is being explicitly disabled or enabled for
the database. The default value is YES. If the AUTOMATIC STORAGE clause is not specified, automatic storage is implicitly enabled by default.

   NO   Automatic storage is not being enabled for the database.
   YES  Automatic storage is being enabled for the database.
ON path or drive
   The meaning of this option depends on the value of the AUTOMATIC STORAGE option.
   v If AUTOMATIC STORAGE NO is specified, automatic storage is disabled for the database. In this case, only one path can be included as part of the ON option, and it specifies the path on which to create the database. If a path is not specified, the database is created on the default database path that is specified in the database manager configuration file (dftdbpath parameter). This behavior matches that of DB2 UDB Version 8.2 and earlier.
   v Otherwise, automatic storage is enabled for the database by default. In this case, multiple paths may be listed here, each separated by a comma. These are referred to as storage paths and are used to hold table space containers for automatic storage table spaces. For multi-partition databases the same storage paths will be used on all partitions. With multiple paths, the DBPATH ON option specifies on which of those paths to create the database. If the DBPATH ON option is not specified, the database is created on the first path listed. If no paths are specified, the database is created on the default database path that is specified in the database manager configuration file (dftdbpath parameter). This will also be used as the location for the single storage path associated with the database.

   The maximum length of a path is 175 characters. For MPP systems, a database should not be created in an NFS-mounted directory. If a path is not specified, ensure that the dftdbpath database manager configuration parameter is not set to an NFS-mounted path (for example, on UNIX based systems, it should not specify the $HOME directory of the instance owner). The path specified for this command in an MPP system cannot be a relative path. Also, all paths specified as part of the ON option must exist on all database partitions. A given database path or storage path must exist and be accessible on each database partition.
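A short sketch of the two behaviors described above (the database names and paths are illustrative):

```sql
-- AUTOMATIC STORAGE NO: the single ON path is simply the database path
CREATE DATABASE MYDB1 AUTOMATIC STORAGE NO ON /home/dbuser

-- Automatic storage enabled (the default): both paths become storage paths,
-- and the database itself is created on the first one listed
CREATE DATABASE MYDB2 ON /db/path1,/db/path2
```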
DBPATH ON path or drive
   If automatic storage is enabled, the DBPATH ON option specifies the path on which to create the database. If automatic storage is enabled and the DBPATH ON option is not specified, the database is created on the first path listed with the ON option. The maximum length of a database path is 215 characters and the maximum length of a storage path is 175 characters.

ALIAS database-alias
   An alias for the database in the system database directory. If no alias is provided, the specified database name is used.

USING CODESET codeset
   Specifies the code set to be used for data entered into this database. After you create the database, you cannot change the specified code set. To make use of XML features, the codeset must be set to UTF-8.
TERRITORY territory
   Specifies the territory to be used for data entered into this database. After you create the database, you cannot change the specified territory.

COLLATE USING
   Identifies the type of collating sequence to be used for the database. Once the database has been created, the collating sequence cannot be changed.

   COMPATIBILITY
      The DB2 Version 2 collating sequence. Some collation tables have been enhanced. This option specifies that the previous version of these tables is to be used.

   IDENTITY
      Identity collating sequence, in which strings are compared byte for byte. This is the default for Unicode databases.

   IDENTITY_16BIT
      CESU-8 (Compatibility Encoding Scheme for UTF-16: 8-Bit) collation sequence as specified by the Unicode Technical Report #26, which is available at the Unicode Consortium Web site (www.unicode.org). This option can only be specified when creating a Unicode database.

   UCA400_NO
      The UCA (Unicode Collation Algorithm) collation sequence that is based on the Unicode Standard version 4.00 with normalization implicitly set to on. Details of the UCA can be found in the Unicode Technical Standard #10, which is available at the Unicode Consortium Web site (www.unicode.org). This is the default collation when codeset UTF-8 is specified and can only be used when creating a Unicode database.

   UCA400_LSK
      The UCA (Unicode Collation Algorithm) collation sequence based on the Unicode Standard version 4.00 but will sort Slovakian characters in the appropriate order. Details of the UCA can be found in the Unicode Technical Standard #10, which is available at the Unicode Consortium Web site (www.unicode.org). This option can only be used when creating a Unicode database.

   UCA400_LTH
      The UCA (Unicode Collation Algorithm) collation sequence that is based on the Unicode Standard version 4.00 but will sort all Thai characters according to the Royal Thai Dictionary order. Details of the UCA can be found in the Unicode Technical Standard #10, available at the Unicode Consortium Web site (www.unicode.org). This option can only be used when creating a Unicode database. This collator might order Thai data differently from the NLSCHAR collator option.

   NLSCHAR
      System-defined collating sequence using the unique collation rules for the specific code set/territory. This option can only be used with the Thai code page (CP874). If this option is specified in non-Thai environments, the command will fail and return the error SQL1083N with Reason Code 4.

   SYSTEM
      For non-Unicode databases, this is the default option, with the
collating sequence based on the database territory. For Unicode databases, this option is equivalent to the IDENTITY option.

PAGESIZE integer
   Specifies the page size of the default buffer pool along with the initial table spaces (SYSCATSPACE, TEMPSPACE1, USERSPACE1) when the database is created. This also represents the default page size for all future CREATE BUFFERPOOL and CREATE TABLESPACE statements. The valid values for integer without the suffix K are 4 096, 8 192, 16 384, or 32 768. The valid values for integer with the suffix K are 4, 8, 16, or 32. At least one space is required between the integer and the suffix K. The default is a page size of 4 096 bytes (4 K).

NUMSEGS numsegs
   Specifies the number of directories (table space containers) that will be created and used to store the database table files for any default SMS table spaces. This parameter does not affect automatic storage table spaces, DMS table spaces, any SMS table spaces with explicit creation characteristics (created when the database is created), or any SMS table spaces explicitly created after the database is created.

DFT_EXTENT_SZ dft_extentsize
   Specifies the default extent size of table spaces in the database.

RESTRICTIVE
   If the RESTRICTIVE option is present, it causes the RESTRICT_ACCESS database configuration parameter to be set to YES, and no privileges are automatically granted to PUBLIC. If the RESTRICTIVE option is not present, the RESTRICT_ACCESS database configuration parameter is set to NO, and all of the following privileges are automatically granted to PUBLIC:
   v CREATETAB
   v BINDADD
   v CONNECT
   v IMPLSCHEMA
   v EXECUTE with GRANT on all procedures in schema SQLJ
   v EXECUTE with GRANT on all functions and procedures in schema SYSPROC
   v BIND on all packages created in the NULLID schema
   v EXECUTE on all packages created in the NULLID schema
   v CREATEIN on schema SQLJ
   v CREATEIN on schema NULLID
   v USE on table space USERSPACE1
   v SELECT access to the SYSIBM catalog tables
   v SELECT access to the SYSCAT catalog views
   v SELECT access to the SYSSTAT catalog views
   v UPDATE access to the SYSSTAT catalog views

CATALOG TABLESPACE tblspace-defn
   Specifies the definition of the table space that will hold the catalog tables, SYSCATSPACE. If not specified and automatic storage is not enabled for the database, SYSCATSPACE is created as a System Managed Space (SMS) table space with numsegs number of directories as containers, and with an extent size of dft_extentsize. For example, the following containers would be created if numsegs were specified to be 5:
/u/smith/smith/NODE0000/SQL00001/SQLT0000.0 /u/smith/smith/NODE0000/SQL00001/SQLT0000.1 /u/smith/smith/NODE0000/SQL00001/SQLT0000.2 /u/smith/smith/NODE0000/SQL00001/SQLT0000.3 /u/smith/smith/NODE0000/SQL00001/SQLT0000.4
If not specified and automatic storage is enabled for the database, SYSCATSPACE is created as an automatic storage table space with its containers created on the defined storage paths. The extent size of this table space is 4. Appropriate values for AUTORESIZE, INITIALSIZE, INCREASESIZE, and MAXSIZE are set automatically. See the CREATE TABLESPACE statement for more information on the table space definition fields. In a partitioned database environment, the catalog table space is only created on the catalog database partition, the database partition on which the CREATE DATABASE command is issued.

USER TABLESPACE tblspace-defn
   Specifies the definition of the initial user table space, USERSPACE1. If not specified and automatic storage is not enabled for the database, USERSPACE1 is created as an SMS table space with numsegs number of directories as containers and with an extent size of dft_extentsize. For example, the following containers would be created if numsegs were specified to be 5:
/u/smith/smith/NODE0000/SQL00001/SQLT0002.0 /u/smith/smith/NODE0000/SQL00001/SQLT0002.1 /u/smith/smith/NODE0000/SQL00001/SQLT0002.2 /u/smith/smith/NODE0000/SQL00001/SQLT0002.3 /u/smith/smith/NODE0000/SQL00001/SQLT0002.4
If not specified and automatic storage is enabled for the database, USERSPACE1 is created as an automatic storage table space with its containers created on the defined storage paths. The extent size of this table space will be dft_extentsize. Appropriate values for AUTORESIZE, INITIALSIZE, INCREASESIZE, and MAXSIZE are set automatically. See the CREATE TABLESPACE statement for more information on the table space definition fields.

TEMPORARY TABLESPACE tblspace-defn
   Specifies the definition of the initial system temporary table space, TEMPSPACE1. If not specified and automatic storage is not enabled for the database, TEMPSPACE1 is created as an SMS table space with numsegs number of directories as containers and with an extent size of dft_extentsize. For example, the following containers would be created if numsegs were specified to be 5:
/u/smith/smith/NODE0000/SQL00001/SQLT0001.0 /u/smith/smith/NODE0000/SQL00001/SQLT0001.1 /u/smith/smith/NODE0000/SQL00001/SQLT0001.2 /u/smith/smith/NODE0000/SQL00001/SQLT0001.3 /u/smith/smith/NODE0000/SQL00001/SQLT0001.4
If not specified and automatic storage is enabled for the database, TEMPSPACE1 is created as an automatic storage table space with its containers created on the defined storage paths. The extent size of this table space is dft_extentsize.
See the CREATE TABLESPACE statement for more information on the table space definition fields.

WITH comment-string
   Describes the database entry in the database directory. Any comment that helps to describe the database can be entered. Maximum length is 30 characters. A carriage return or a line feed character is not permitted. The comment text must be enclosed by single or double quotation marks.

AUTOCONFIGURE
   Based on user input, calculates the recommended settings for buffer pool size, database configuration, and database manager configuration and optionally applies them. The Configuration Advisor is run by default when the CREATE DATABASE command is issued. The AUTOCONFIGURE option is needed only if you want to tweak the recommendations.

   USING input-keyword param-value
Table 9. Valid input keywords and parameter values

Keyword          Valid values                  Default   Explanation
mem_percent      1-100                         25        Percentage of memory to dedicate. If other applications (other than the operating system) are running on this server, set this to less than 100.
workload_type    simple, mixed, complex        mixed     Simple workloads tend to be I/O intensive and mostly transactions, whereas complex workloads tend to be CPU intensive and mostly queries.
num_stmts        1-1 000 000                   25        Number of statements per unit of work
tpm              1-200 000                     60        Transactions per minute
admin_priority   performance, recovery, both   both      Optimize for better performance (more transactions per minute) or better recovery time
num_local_apps   0-5 000                       0         Number of connected local applications
num_remote_apps  0-5 000                       100       Number of connected remote applications
isolation        RR, RS, CS, UR                RR        Isolation level of applications connecting to this database (Repeatable Read, Read Stability, Cursor Stability, Uncommitted Read)
bp_resizeable    yes, no                       yes       Are buffer pools resizeable?
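As a hedged illustration, several of the keywords in Table 9 might be combined as follows (the database name and keyword values are arbitrary):

```sql
-- Dedicate 60% of memory to a simple workload; apply recommendations
-- to both the database and database manager configurations
CREATE DATABASE PERSONL
   AUTOCONFIGURE USING mem_percent 60 workload_type simple num_stmts 500
   APPLY DB AND DBM
```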
APPLY
   DB ONLY
      Displays the recommended values for the database configuration and the buffer pool settings based on the current database manager configuration. Applies the recommended changes to the database configuration and the buffer pool settings.
   DB AND DBM
      Displays and applies the recommended changes to the database manager configuration, the database configuration, and the buffer pool settings.
   NONE
      Disables the Configuration Advisor (it is enabled by default).

v If the AUTOCONFIGURE keyword is specified with the CREATE DATABASE command, the DB2_ENABLE_AUTOCONFIG_DEFAULT variable value is not considered. Adaptive Self Tuning Memory and Auto Runstats will be enabled and the Configuration Advisor will tune the database configuration and database manager configuration parameters as indicated by the APPLY DB or APPLY DBM options.
v Specifying the AUTOCONFIGURE option with the CREATE DATABASE command on a database will recommend enablement of the Self Tuning Memory Manager. However, if you run the AUTOCONFIGURE command on a database in an instance where SHEAPTHRES is not zero, sort memory tuning (SORTHEAP) will not be enabled automatically. To enable sort memory tuning (SORTHEAP), you must set SHEAPTHRES equal to zero using the UPDATE DATABASE MANAGER CONFIGURATION command. Note that changing the value of SHEAPTHRES may affect the sort memory usage in your previously existing databases.

Usage notes:

The CREATE DATABASE command:
v Creates a database in the specified subdirectory. In a partitioned database environment, creates the database on all database partitions listed in db2nodes.cfg, and creates a $DB2INSTANCE/NODExxxx directory under the specified subdirectory at each database partition. In a single partition database environment, creates a $DB2INSTANCE/NODE0000 directory under the specified subdirectory.
v Creates the system catalog tables and recovery log.
v Catalogs the database in the following database directories:
  - Server's local database directory on the path indicated by path or, if the path is not specified, the default database path defined in the database manager system configuration file by the dftdbpath parameter. A local database directory resides on each file system that contains a database.
  - Server's system database directory for the attached instance. The resulting directory entry will contain the database name and a database alias.
  If the command was issued from a remote client, the client's system database directory is also updated with the database name and an alias. Creates a system or a local database directory if neither exists. If specified, the comment and code set values are placed in both directories.
v Stores the specified code set, territory, and collating sequence. A flag is set in the database configuration file if the collating sequence consists of unique weights, or if it is the identity sequence.
v Creates the schemas called SYSCAT, SYSFUN, SYSIBM, and SYSSTAT with SYSIBM as the owner. The database partition server on which this command is issued becomes the catalog database partition for the new database. Two database partition groups are created automatically: IBMDEFAULTGROUP and IBMCATGROUP.
v Binds the previously defined database manager bind files to the database (these are listed in the utilities bind file list, db2ubind.lst). If one or more of these files do not bind successfully, CREATE DATABASE returns a warning in the SQLCA, and provides information about the binds that failed. If a bind fails, the user can take corrective action and manually bind the failing file. The database is created in any case. A schema called NULLID is implicitly created when performing the binds with CREATEIN privilege granted to PUBLIC. The utilities bind file list contains two bind files that cannot be bound against down-level servers:
  - db2ugtpi.bnd cannot be bound against DB2 Version 2 servers.
  - db2dropv.bnd cannot be bound against DB2 Parallel Edition Version 1 servers.
  If db2ubind.lst is bound against a down-level server, warnings pertaining to these two files are returned, and can be disregarded.
v Creates SYSCATSPACE, TEMPSPACE1, and USERSPACE1 table spaces. The SYSCATSPACE table space is only created on the catalog database partition.
v Grants the following:
  - EXECUTE WITH GRANT privilege to PUBLIC on all functions in the SYSFUN schema
  - EXECUTE privilege to PUBLIC on all procedures in the SYSIBM schema
  - DBADM authority, and CONNECT, CREATETAB, BINDADD, CREATE_NOT_FENCED, IMPLICIT_SCHEMA and LOAD privileges to the database creator
  - CONNECT, CREATETAB, BINDADD, and IMPLICIT_SCHEMA privileges to PUBLIC
  - USE privilege on the USERSPACE1 table space to PUBLIC
  - SELECT privilege on each system catalog to PUBLIC
  - BIND and EXECUTE privilege to PUBLIC for each successfully bound utility

Automatic storage is a collection of storage paths associated with a database on which table spaces can be created without having to explicitly specify container definitions (see the CREATE TABLESPACE statement for more information). Automatic storage is enabled by default, but can be explicitly disabled for a database when it is created. Automatic storage can be disabled at database creation time by specifying the AUTOMATIC STORAGE NO option.
It is important to note that automatic storage can only be enabled at database creation time; it cannot be enabled after the database has been created. Also, automatic storage cannot be disabled once a database has been defined to use it.

When free space is calculated for an automatic storage path for a given database partition, the database manager will check for the existence of the following directories or mount points within the storage path and will use the first one that is found. In doing this, file systems can be mounted at a point beneath the storage path, and the database manager will recognize that the actual amount of free space available for table space containers may not be the same amount that is associated with the storage path directory itself.
1. <storage path>/<instance name>/NODE####/<database name>
2. <storage path>/<instance name>/NODE####
3. <storage path>/<instance name>
4. <storage path>

Where:
v <storage path> is a storage path associated with the database.
v <instance name> is the instance under which the database resides.
v NODE#### corresponds to the database partition number (for example NODE0000 or NODE0001).
v <database name> is the name of the database.

Consider the example where two logical database partitions exist on one physical machine and the database is being created with a single storage path: /db2data. Each database partition will use this storage path but the user may wish to isolate the data from each partition within its own file system. In this case, a separate file system can be created for each partition and be mounted at /db2data/<instance>/NODE####. When creating containers on the storage path and determining free space, the database manager will know not to retrieve free space information for /db2data, but instead retrieve it for the corresponding /db2data/<instance>/NODE#### directory.
In general, the same storage paths must be used for each partition in a multi-partition database and they must all exist prior to executing the CREATE DATABASE command. One exception to this is where database partition expressions are used within the storage path. Doing this allows the database partition number to be reflected in the storage path such that the resulting path name is different on each partition. You use the argument $N ([blank]$N) to indicate a database partition expression. A database partition expression can be used anywhere in the storage path, and multiple database partition expressions can be specified. Terminate the database partition expression with a space character; whatever follows the space is appended to the storage path after the database partition expression is evaluated. If there is no space character in the storage path after the database partition expression, it is assumed that the rest of the string is part of the expression. The argument can only be used in one of the following forms:
Operators are evaluated from left to right. % represents the modulus operator. The database partition number in the examples is assumed to be 10.

Syntax                        Example      Value
[blank]$N                     " $N"        10
[blank]$N+[number]            " $N+100"    110
[blank]$N%[number]            " $N%5"      0
[blank]$N+[number]%[number]   " $N+1%5"    1
[blank]$N%[number]+[number]   " $N%4+2"    4
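The partition expression can be used to give each database partition its own storage directory; a sketch, assuming the path prefix is illustrative and that the expression resolves as described above:

```sql
-- Assuming partitions 0 and 1, the storage path resolves to
-- /db2data/part0 and /db2data/part1 respectively (prefix illustrative)
CREATE DATABASE PARTDB ON '/db2data/part $N'
```

If the command is issued from an operating system shell rather than the interactive command line processor, the path must be quoted so that the shell does not expand $N itself.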
With dbadm authority, one can grant these privileges to (and revoke them from) other users or PUBLIC. If another administrator with sysadm or dbadm authority over the database revokes these privileges, the database creator nevertheless retains them.

In an MPP environment, the database manager creates a subdirectory, $DB2INSTANCE/NODExxxx, under the specified or default path on all database partitions. The xxxx is the database partition number as defined in the db2nodes.cfg file (that is, database partition 0 becomes NODE0000). Subdirectories SQL00001 through SQLnnnnn will reside on this path. This ensures that the database objects associated with different database partitions are stored in different directories (even if the subdirectory $DB2INSTANCE under the specified or default path is shared by all database partitions).

If LDAP (Lightweight Directory Access Protocol) support is enabled on the current machine, the database will be automatically registered in the LDAP directory. If a database object of the same name already exists in the LDAP directory, the database is still created on the local machine, but a warning message is returned, indicating that there is a naming conflict. In this case, the user can manually catalog an LDAP database entry by using the CATALOG LDAP DATABASE command.

CREATE DATABASE will fail if the application is already connected to a database.

When a database is created, a detailed deadlocks event monitor is created. As with any monitor, there is some overhead associated with this event monitor. You can drop the deadlocks event monitor by issuing the DROP EVENT MONITOR command.

Use CATALOG DATABASE to define different alias names for the new database.

Examples:

Here are several examples of the CREATE DATABASE command:

Example 1:
CREATE DATABASE TESTDB3 AUTOMATIC STORAGE YES
Database TESTDB3 is created on the drive that is the value of database manager configuration parameter dftdbpath. Automatic storage is enabled with a single storage path that also has the value of dftdbpath. Example 2:
CREATE DATABASE TESTDB7 ON C:,D:
Database TESTDB7 is created on drive C: (first drive in storage path list). Automatic storage is implicitly enabled and the storage paths are C: and D:. Example 3:
CREATE DATABASE TESTDB15 AUTOMATIC STORAGE YES ON C:,D: DBPATH ON E:
Database TESTDB15 is created on drive E: (explicitly listed as DBPATH). Automatic storage is explicitly enabled and the storage paths are C: and D:. Example 4:
CREATE DATABASE TESTDB9 ON C: USING CODESET UTF-8 TERRITORY US
Database TESTDB9 is created on drive C:. The codeset is set to UTF-8, enabling the use of native XML functionality on the database.

Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM.

Related concepts: v Isolation levels in SQL Reference, Volume 1 v Automatic storage databases in Administration Guide: Implementation v Unicode implementation in DB2 Database for Linux, UNIX, and Windows in Administration Guide: Planning

Related tasks: v Creating a database in Administration Guide: Implementation v Securing the system catalog view in Administration Guide: Implementation v Collating Thai characters in Administration Guide: Planning

Related reference: v AUTOCONFIGURE on page 346 v BIND on page 355 v CATALOG DATABASE on page 372 v CATALOG LDAP DATABASE on page 377 v DROP DATABASE on page 426 v sqlecran API - Create a database on a database partition server in Administrative API Reference v sqlecrea API - Create database in Administrative API Reference v CREATE TABLESPACE statement in SQL Reference, Volume 2 v RESTORE DATABASE on page 675 v auto_maint - Automatic maintenance configuration parameter in Performance Guide v Miscellaneous variables in Performance Guide v self_tuning_mem - Self-tuning memory configuration parameter in Performance Guide
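As an illustration of the $DB2INSTANCE/NODExxxx naming scheme described in the usage notes above, the following Python sketch computes the per-partition database path. The helper function is hypothetical; DB2 performs this layout internally.

```python
def partition_db_path(base_path, instance, partition_num):
    """Sketch of the per-partition path layout described above:
    <path>/<instance>/NODExxxx, where xxxx is the four-digit database
    partition number from db2nodes.cfg (partition 0 -> NODE0000)."""
    node_dir = "NODE%04d" % partition_num
    return "%s/%s/%s" % (base_path.rstrip("/"), instance, node_dir)

# Partition 0 of instance db2inst1 under /home/db2inst1:
print(partition_db_path("/home/db2inst1", "db2inst1", 0))
# -> /home/db2inst1/db2inst1/NODE0000
# Subdirectories SQL00001 through SQLnnnnn then reside under this path.
```

Because each partition number yields a distinct NODExxxx directory, database objects for different partitions never collide even when the instance directory is shared.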
CREATE TOOLS CATALOG
Command parameters: CATALOG catalog-name A name to be used to uniquely identify the DB2 tools catalog. The catalog tables are created under this schema name. NEW DATABASE database-name A name to be assigned to the new database. This must be a unique name that differentiates the database from any other database in either the local database directory or the system database directory. The name must conform to naming conventions for databases.
Usage notes:
v The tools catalog tables require two 32K page table spaces (regular and temporary). In addition, unless you specify existing table spaces, a new 32K buffer pool is created for the table spaces. This requires a restart of the database manager. If the database manager must be restarted, all existing applications must be forced off. The new table spaces are created with a single container each in the default database directory path.
v If an active catalog with this name exists before you execute this command, it is deactivated and the new catalog becomes the active catalog.
v Multiple DB2 tools catalogs can be created in the same database and are uniquely identified by the catalog name.
v The jdk_path configuration parameter must be set in the DB2 administration server (DAS) configuration to the minimum supported level of the SDK for Java.
v Updating the DAS configuration parameters requires dasadm authority on the DB2 administration server.
v Unless you specify the KEEP INACTIVE option, this command updates the local DAS configuration parameters related to the DB2 tools catalog database configuration and enables the scheduler at the local DAS server.
v The jdk_64_path configuration parameter must be set if you are creating a tools catalog against a 64-bit instance on one of the platforms that supports both 32- and 64-bit instances (AIX, HP-UX, and Solaris).
Related concepts: v DB2 Administration Server in Administration Guide: Implementation Related reference:
DEACTIVATE DATABASE
Stops the specified database. Scope: In an MPP system, this command deactivates the specified database on all database partitions in the system. If one or more of these database partitions encounters an error, a warning is returned. The database will be successfully deactivated on some database partitions, but might continue to be active on the nodes encountering the error. Authorization: One of the following: v sysadm v sysctrl v sysmaint Required connection: None Command syntax:
DEACTIVATE { DATABASE | DB } database-alias [ USER username [ USING password ] ]
Command parameters: DATABASE database-alias Specifies the alias of the database to be stopped. USER username Specifies the user stopping the database. USING password Specifies the password for the user ID. Usage notes: Databases initialized by ACTIVATE DATABASE can be shut down by DEACTIVATE DATABASE or by db2stop. If a database was initialized by ACTIVATE DATABASE, the last application disconnecting from the database will not shut down the database, and DEACTIVATE DATABASE must be used. (In this case, db2stop will also shut down the database.) The application issuing the DEACTIVATE DATABASE command cannot have an active database connection to any database. Related concepts: v Quick-start tips for performance tuning in Performance Guide
Related reference: v STOP DATABASE MANAGER on page 736 v ACTIVATE DATABASE on page 330 v sqle_deactivate_db API - Deactivate database in Administrative API Reference
DECOMPOSE XML DOCUMENT xml-document-name XMLSCHEMA xml-schema-name VALIDATE
Command parameters: DECOMPOSE XML DOCUMENT xml-document-name xml-document-name is the file path and file name of the input XML document to be decomposed. XMLSCHEMA xml-schema-name xml-schema-name is the name of an existing XML schema registered with the XML schema repository to be used for document decomposition. xml-schema-name is a qualified SQL identifier consisting of an optional SQL schema name followed by a period and the XML schema name. If the SQL schema name is not specified, it is assumed to be the value of the DB2 special register CURRENT SCHEMA. VALIDATE This parameter indicates that the input XML document is to be validated first, then decomposed only if the document is valid. If VALIDATE is not specified, the input XML document will not be validated before decomposition. Examples: The following example specifies that the XML document ./gb/document1.xml is to be validated and decomposed with the registered XML schema DB2INST1.GENBANKSCHEMA.
DECOMPOSE XML DOCUMENT ./gb/document1.xml XMLSCHEMA DB2INST1.GENBANKSCHEMA VALIDATE
Related concepts: v XML schema, DTD, and external entity management using the XML schema repository (XSR) in XML Guide
DEREGISTER
Deregisters the DB2 server from the network directory server. Authorization: None Required connection: None Command syntax:
DEREGISTER [ DB2 SERVER ] [ IN LDAP ] NODE nodename [ USER username [ PASSWORD password ] ]
Command parameters: IN Specifies the network directory server from which to deregister the DB2 server. The valid value is LDAP for an LDAP (Lightweight Directory Access Protocol) directory server.
USER username This is the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to delete the object from the LDAP directory. The user name is optional when deregistering in LDAP. If the user's LDAP DN is not specified, the credentials of the current logon user will be used. PASSWORD password Account password. NODE nodename The node name is the value that was specified when the DB2 server was registered in LDAP. Usage notes: This command can only be issued for a remote machine when in the LDAP environment. When issued for a remote machine, the node name of the remote server must be specified. The DB2 server is automatically deregistered when the instance is dropped. Related concepts: v Lightweight Directory Access Protocol (LDAP) overview in Administration Guide: Implementation Related tasks: v Deregistering the DB2 server in Administration Guide: Implementation Related reference:
v db2LdapDeregister API - Deregister the DB2 server and cataloged databases from the LDAP server in Administrative API Reference v REGISTER on page 637 v UPDATE LDAP NODE on page 780
DESCRIBE
This command: v Displays the output SQLDA information about a SELECT, CALL, or XQuery statement v Displays columns of a table or a view v Displays indexes of a table or a view v Displays data partitions of a table or view Authorization: To display the output SQLDA information about a SELECT statement, one of the privileges or authorities listed below for each table or view referenced in the SELECT statement is required. To display the columns, indexes or data partitions of a table or a view, SELECT privilege, CONTROL privilege, sysadm authority or dbadm authority is required for the following system catalogs: v SYSCAT.COLUMNS (DESCRIBE TABLE), SYSCAT.DATAPARTITIONEXPRESSION (with SHOW DETAIL) v SYSCAT.INDEXES (DESCRIBE INDEXES FOR TABLE) execute privilege on GET_INDEX_COLNAMES() UDF (with SHOW DETAIL) v SYSCAT.DATAPARTITIONS (DESCRIBE DATA PARTITIONS FOR TABLE) As PUBLIC has all the privileges over declared global temporary tables, a user can use the command to display information about any declared global temporary table that exists within its connection. To display the output SQLDA information about a CALL statement, one of the privileges or authorities listed below is required: v EXECUTE privilege on the stored procedure v sysadm or dbadm authority Required connection: Database. If implicit connect is enabled, a connection to the default database is established. Command syntax:
DESCRIBE [ OUTPUT ] { select-statement | call-statement | XQUERY XQuery-statement }

DESCRIBE { TABLE table-name | INDEXES FOR TABLE table-name | DATA PARTITIONS FOR TABLE table-name } [ SHOW DETAIL ]
Command parameters: OUTPUT Indicates that the output of the statement should be described. This keyword is optional.
select-statement, call-statement, or XQUERY XQuery-statement Identifies the statement about which information is wanted. The statement is automatically prepared by CLP. To identify an XQuery statement, precede the statement with the keyword XQUERY. TABLE table-name Specifies the table or view to be described. The fully qualified name in the form schema.table-name must be used. An alias for the table cannot be used in place of the actual table. The schema is the user name under which the table or view was created. The DESCRIBE TABLE command lists the following information about each column: v Column name v Type schema v Type name v Length v Scale v Nulls (yes/no) INDEXES FOR TABLE table-name Specifies the table or view for which indexes need to be described. The fully qualified name in the form schema.table-name must be used. An alias for the table cannot be used in place of the actual table. The schema is the user name under which the table or view was created. The DESCRIBE INDEXES FOR TABLE command lists the following information about each index of the table or view: v Index schema v Index name v Unique rule v Column count DATA PARTITIONS FOR TABLE table-name Specifies the table or view for which data partitions need to be described. The information displayed for each data partition in the table includes the partition identifier and the partitioning intervals. Results are ordered according to the partition identifier sequence. The fully qualified name in the form schema.table-name must be used. An alias for the table cannot be used in place of the actual table. The schema is the user name under which the table or view was created.
SHOW DETAIL For the DESCRIBE TABLE command, specifies that output include the following additional information: v Whether a CHARACTER, VARCHAR or LONG VARCHAR column was defined as FOR BIT DATA v Column number v Distribution key sequence v Code page v Default v Table partitioning type (for tables partitioned by range this output appears below the original output)
v Partitioning key columns (for tables partitioned by range this output appears below the original output) For the DESCRIBE INDEXES FOR TABLE command, specifies that output include the following additional information: v Column names For the DESCRIBE DATA PARTITIONS FOR TABLE command, specifies that output include a second table with the following additional information: v Data partition sequence identifier v Data partition expression in SQL Examples: Describing the output of a SELECT Statement The following example shows how to describe a SELECT statement:
db2 "describe output select * from staff"
SQLDA Information

 sqldaid : SQLDA     sqldabc: 896   sqln: 20   sqld: 7

 Column Information

 sqltype              sqllen  sqlname.data                   sqlname.length
 -------------------- ------ ------------------------------ --------------
 500   SMALLINT          2   ID                                   2
 449   VARCHAR           9   NAME                                 4
 501   SMALLINT          2   DEPT                                 4
 453   CHARACTER         5   JOB                                  3
 501   SMALLINT          2   YEARS                                5
 485   DECIMAL         7,2   SALARY                               6
 485   DECIMAL         7,2   COMM                                 4
Describing the output of a CALL Statement Given a stored procedure created with the statement:
CREATE PROCEDURE GIVE_BONUS (IN EMPNO INTEGER, IN DEPTNO INTEGER, OUT CHEQUE INTEGER, INOUT BONUS DEC(6,0)) ...
The following example shows how to describe the output of a CALL statement:
db2 "describe output call give_bonus(123456, 987, ?, 15000.)"

SQLDA Information

 sqldaid : SQLDA     sqldabc: 896   sqln: 20   sqld: 2

 Column Information

 sqltype              sqllen  sqlname.data                   sqlname.length
 -------------------- ------ ------------------------------ --------------
 497   INTEGER           4
 485   DECIMAL         6,0
Describing the output of an XQuery Statement Given a table named CUSTOMER that has a column named INFO of the XML data type, the following example shows how to describe an XQuery statement:
db2 describe xquery for $cust in db2-fn:xmlcolumn("CUSTOMER.INFO") return $cust
SQLDA Information

 sqldaid : SQLDA     sqldabc: 1136   sqln: 20   sqld: 1

 Column Information

 sqltype              sqllen  sqlname.data                   sqlname.length
 -------------------- ------ ------------------------------ --------------
 998   XML               0   1                                    1
If the DESCRIBE XQUERY command is issued against a downlevel server that does not support the XQUERY option, the message DB21108E is returned to indicate that the functionality is not supported by the downlevel server. Describing a Table The following example shows how to describe a table:
db2 describe table user1.department
Table: USER1.DEPARTMENT

Column name        Type schema  Type name  Length  Scale  Nulls
-----------------  -----------  ---------  ------  -----  -----
AREA               SYSIBM       SMALLINT   2       0      No
DEPT               SYSIBM       CHARACTER  3       0      No
DEPTNAME           SYSIBM       CHARACTER  20      0      Yes
The following example shows how to describe a table with details. If the table is partitioned, as in this example, additional details appear below the existing output. For a non-partitioned table, the additional table heading is not displayed:
db2 describe table user1.employee show detail
Type schema  Column number  Type name  Length
-----------  -------------  ---------  ------
SYSIBM       0              CHARACTER  10
SYSIBM       1              CHARACTER  10
Describing a Table Index The following example shows how to describe a table index:
db2 describe indexes for table user1.department
Table: USER1.DEPARTMENT

Index schema  Index name  Unique rule  Number of columns
------------  ----------  -----------  -----------------
USER1         IDX1        U            2
Describing Data Partitions The following example shows how to describe data partitions:
db2 describe data partitions for table user1.sales
PartitionId
-----------
          0
          1
          3
Describing the data partitions with details returns the same output as in the previous example, including an additional table showing the Partition Id and table space where the data for the data partition is stored:
db2 describe data partitions for table user1.employee show detail
Inclusive (y/n)   High Value
---------------   -----------------
Y                 beck,kevin
N                 treece,jeff
Y                 zhang,liping
Y                 MAXVALUE,MAXVALUE

ObjectId   AccessMode   Status
--------   ----------   ------
F          N            A
F          N            A
Related reference: v ADMIN_CMD procedure Run administrative commands in Administrative SQL Routines and Views v DESCRIBE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views v SYSCAT.DATAPARTITIONS catalog view in SQL Reference, Volume 1
DETACH
DETACH
Removes the logical DBMS instance attachment, and terminates the physical communication connection if there are no other logical connections using this layer. Authorization: None Required connection: None. Removes an existing instance attachment. Command syntax:
DETACH
Command parameters: None Related tasks: v Attaching to and detaching from a non-default instance of the database manager in Administration Guide: Implementation Related reference: v ATTACH on page 344 v sqledtin API - Detach from instance in Administrative API Reference
DROP CONTACT
Removes a contact from the list of contacts defined on the local system. A contact is a user to whom the Scheduler and Health Monitor send messages. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None. Required connection: None. Local execution only: this command cannot be used with a remote connection. Command syntax:
DROP CONTACT name
Command parameters: CONTACT name The name of the contact that will be dropped from the local system. Related reference: v db2DropContact API - Remove a contact from the list of contacts to whom notification messages can be sent in Administrative API Reference v DROP CONTACT command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
DROP CONTACTGROUP
Removes a contact group from the list of contacts defined on the local system. A contact group contains a list of users to whom the Scheduler and Health Monitor send messages. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None. Required Connection: None. Command Syntax:
DROP CONTACTGROUP name
Command Parameters: CONTACTGROUP name The name of the contact group that will be dropped from the local system. Related reference: v db2DropContactGroup API - Remove a contact group from the list of contacts to whom notification messages can be sent in Administrative API Reference v DROP CONTACTGROUP command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
DROP DATABASE
Deletes the database contents and all log files for the database, uncatalogs the database, and deletes the database subdirectory. Scope: By default, this command affects all database partitions that are listed in the db2nodes.cfg file. Authorization: One of the following: v sysadm v sysctrl Required connection: Instance. An explicit attachment is not required. If the database is listed as remote, an instance attachment to the remote node is established for the duration of the command. Command syntax:
DROP { DATABASE | DB } database-alias [ AT DBPARTITIONNUM ]
Command parameters: DATABASE database-alias Specifies the alias of the database to be dropped. The database must be cataloged in the system database directory. AT DBPARTITIONNUM Specifies that the database is to be deleted only on the database partition that issued the DROP DATABASE command. This parameter is used by utilities supplied with DB2 ESE, and is not intended for general use. Improper use of this parameter can cause inconsistencies in the system, so it should only be used with caution. Examples: The following example deletes the database referenced by the database alias SAMPLE:
db2 drop database sample
Usage notes: DROP DATABASE deletes all user data and log files, as well as any backup/restore history for the database. If the log files are needed for a roll-forward recovery after a restore operation, or if the backup history is required to restore the database, these files should be saved prior to issuing this command. The database must not be in use; all users must be disconnected from the database before the database can be dropped.
To be dropped, a database must be cataloged in the system database directory. Only the specified database alias is removed from the system database directory. If other aliases with the same database name exist, their entries remain. If the database being dropped is the last entry in the local database directory, the local database directory is deleted automatically. If DROP DATABASE is issued from a remote client (or from a different instance on the same machine), the specified alias is removed from the client's system database directory. The corresponding database name is removed from the server's system database directory. This command unlinks all files that are linked through any DATALINK columns. Since the unlink operation is performed asynchronously on the DB2 Data Links Manager, its effects might not be seen immediately on the DB2 Data Links Manager, and the unlinked files might not be immediately available for other operations. When the command is issued, all the DB2 Data Links Managers configured to that database must be available; otherwise, the drop database operation will fail. Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM. Related tasks: v Dropping a database in Administration Guide: Implementation Related reference: v CREATE DATABASE on page 395 v UNCATALOG DATABASE on page 745 v sqledpan API - Drop a database on a database partition server in Administrative API Reference v sqledrpd API - Drop database in Administrative API Reference v CATALOG DATABASE on page 372
DROP DBPARTITIONNUM VERIFY
Command parameters: None Usage notes: If a message is returned, indicating that the database partition is not in use, use the STOP DATABASE MANAGER command with DROP DBPARTITIONNUM to remove the entry for the database partition from the db2nodes.cfg file, which removes the database partition from the database system. If a message is returned, indicating that the database partition is in use, the following actions should be taken: 1. If the database partition contains data, redistribute the data to remove it from the database partition using REDISTRIBUTE DATABASE PARTITION GROUP. Use either the DROP DBPARTITIONNUM option on the REDISTRIBUTE DATABASE PARTITION GROUP command or on the ALTER DATABASE PARTITION GROUP statement to remove the database partition from any database partition groups for the database. This must be done for each database that contains the database partition in a database partition group. 2. Drop any event monitors that are defined on the database partition. 3. Rerun DROP DBPARTITIONNUM VERIFY to ensure that the database is no longer in use. Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM. Related reference: v STOP DATABASE MANAGER on page 736 v REDISTRIBUTE DATABASE PARTITION GROUP on page 633 v sqledrpn API - Check whether a database partition server can be dropped in Administrative API Reference
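The verification workflow in the usage notes above can be summarized as a small decision sketch. The function and the step strings below are purely illustrative, not a DB2 API.

```python
def drop_partition_steps(partition_in_use):
    """Illustrative sketch of the workflow described above for removing
    a database partition after running DROP DBPARTITIONNUM VERIFY."""
    if not partition_in_use:
        # Not in use: remove the partition's entry from db2nodes.cfg.
        return ["STOP DATABASE MANAGER ... DROP DBPARTITIONNUM"]
    # Still in use: move data off, clean up monitors, then verify again.
    return [
        "REDISTRIBUTE DATABASE PARTITION GROUP ... DROP DBPARTITIONNUM",
        "drop event monitors defined on the partition",
        "rerun DROP DBPARTITIONNUM VERIFY",
    ]

for step in drop_partition_steps(True):
    print(step)
```

The point of the sketch is the branching: the partition may be removed only after VERIFY reports that it is no longer in use.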
DROP TOOLS CATALOG
Command parameters: CATALOG catalog-name A name to be used to uniquely identify the DB2 tools catalog. The catalog tables are dropped from this schema. DATABASE database-name A name to be used to connect to the local database containing the catalog tables. FORCE The force option is used to force the DB2 administration server's scheduler to stop. If this is not specified, the tools catalog will not be dropped if the scheduler cannot be stopped. Examples:
db2 drop tools catalog cc in database toolsdb db2 drop tools catalog in database toolsdb force
Usage notes: v The jdk_path configuration parameter must be set in the DB2 administration server (DAS) configuration to the minimum supported level of the SDK for Java. v This command will disable the scheduler at the local DAS and reset the DAS configuration parameters related to the DB2 tools catalog database configuration.
ECHO
Permits the user to write character strings to standard output. Authorization: None Required connection: None Command syntax:
ECHO character-string
Command parameters: character-string Any character string. Usage notes: If an input file is used as standard input, or comments are to be printed without being interpreted by the command shell, the ECHO command will print character strings directly to standard output. One line is printed each time that ECHO is issued. The ECHO command is not affected by the verbose (-v) option.
EDIT
Launches a user-specified editor with a specified command for editing. When the user finishes editing, saves the contents of the editor, and exits the editor, the edited command can be executed in CLP interactive mode. Scope: This command can only be run within CLP interactive mode. Specifically, it cannot be run from the CLP command mode or the CLP batch mode. Authorization: None Required connection: None Command syntax:
{ EDIT | E } [ EDITOR editor ] [ num ]
Command parameters: EDITOR Launch the editor specified for editing. If this parameter is not specified, the editor to be used is determined in the following order: 1. the editor specified by the DB2_CLP_EDITOR registry variable 2. the editor specified by the VISUAL environment variable 3. the editor specified by the EDITOR environment variable 4. On Windows operating systems, the Notepad editor; on UNIX operating systems, the vi editor num If num is positive, launches the editor with the command corresponding to num. If num is negative, launches the editor with the command corresponding to num, counting backwards from the most recent command in the command history. Zero is not a valid value for num. If this parameter is not specified, launches the editor with the most recently run command. (This is equivalent to specifying a value of -1 for num.)
Usage notes: 1. The editor specified must be a valid editor contained in the PATH of the operating system. 2. You can view a list of the most recently run commands available for editing by executing the HISTORY command. 3. The EDIT command will never be recorded in the command history. However, if you choose to run a command that was edited using the EDIT command, this command will be recorded in the command history. Related reference: v HISTORY on page 493
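The editor search order described above can be sketched as follows; `resolve_editor` is illustrative only (the real lookup, including the DB2_CLP_EDITOR registry variable, happens inside the CLP).

```python
import os
import sys

def resolve_editor(db2_clp_editor=None, env=None):
    """Illustrative sketch of the CLP EDIT command's editor search order."""
    env = os.environ if env is None else env
    if db2_clp_editor:            # 1. DB2_CLP_EDITOR registry variable
        return db2_clp_editor
    if env.get("VISUAL"):         # 2. VISUAL environment variable
        return env["VISUAL"]
    if env.get("EDITOR"):         # 3. EDITOR environment variable
        return env["EDITOR"]
    # 4. Platform default: Notepad on Windows, vi on UNIX.
    return "notepad" if sys.platform.startswith("win") else "vi"

print(resolve_editor(env={}))     # the platform default on this system
```

Each level is consulted only when every level above it is unset, which is why clearing VISUAL and EDITOR falls back to the platform default.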
EXPORT
Exports data from a database to one of several external file formats. The user specifies the data to be exported by supplying an SQL SELECT statement, or by providing hierarchical information for typed tables. Authorization: One of the following: v sysadm v dbadm or CONTROL or SELECT privilege on each participating table or view. Required connection: Database. If implicit connect is enabled, a connection to the default database is established. Utility access to Linux, UNIX, or Windows database servers from Linux, UNIX, or Windows clients must be a direct connection through the engine and not through a DB2 Connect gateway or loop back environment. Command syntax:
EXPORT TO filename OF filetype
    [ LOBS TO lob-path [ , lob-path ... ] ]
    [ LOBFILE filename [ , filename ... ] ]
    [ XML TO xml-path [ , xml-path ... ] ]
    [ XMLFILE filename [ , filename ... ] ]
    [ MODIFIED BY filetype-mod ]
    [ XMLSAVESCHEMA ]
    [ METHOD N ( column-name [ , column-name ... ] ) ]
    [ MESSAGES message-file ]
    { select-statement | XQUERY xquery-statement |
      HIERARCHY { STARTING sub-table-name | traversal-order-list [ WHERE where-clause ] } }

traversal-order-list:
    ( sub-table-name [ , sub-table-name ... ] )
Command parameters:

HIERARCHY traversal-order-list
    Export a sub-hierarchy using the specified traverse order. All sub-tables must be listed in PRE-ORDER fashion. The first sub-table name is used as the target table name for the SELECT statement.

HIERARCHY STARTING sub-table-name
    Using the default traverse order (OUTER order for ASC, DEL, or WSF files, or the order stored in PC/IXF data files), export a sub-hierarchy starting from sub-table-name.

LOBFILE filename
    Specifies one or more base file names for the LOB files. When name space is exhausted for the first name, the second name is used, and so on. The maximum number of file names that can be specified is 999. This will implicitly activate the LOBSINFILE behavior. When creating LOB files during an export operation, file names are constructed by appending the current base name from this list to the current path (from lob-path), and then appending a 3-digit sequence number and the three-character identifier lob. For example, if the current LOB path is the directory /u/foo/lob/path/, and the current LOB file name is bar, the LOB files created will be /u/foo/lob/path/bar.001.lob, /u/foo/lob/path/bar.002.lob, and so on.

LOBS TO lob-path
    Specifies one or more paths to directories in which the LOB files are to be stored. There will be at least one file per LOB path, and each file will contain at least one LOB. The maximum number of paths that can be specified is 999. This will implicitly activate the LOBSINFILE behavior.

MESSAGES message-file
    Specifies the destination for warning and error messages that occur during an export operation. If the file already exists, the export utility appends the information. If message-file is omitted, the messages are written to standard output.

METHOD N column-name
    Specifies one or more column names to be used in the output file. If this parameter is not specified, the column names in the table are used. This parameter is valid only for WSF and IXF files, but is not valid when exporting hierarchical data.

MODIFIED BY filetype-mod
    Specifies file type modifier options. See File type modifiers for the export utility.

OF filetype
    Specifies the format of the data in the output file:
    v DEL (delimited ASCII format), which is used by a variety of database manager and file manager programs.
    v WSF (work sheet format), which is used by programs such as:
      Lotus 1-2-3
      Lotus Symphony
    When exporting BIGINT or DECIMAL data, only values that fall within the range of type DOUBLE can be exported accurately. Although values that do not fall within this range are also exported, importing or loading these values back might result in incorrect data, depending on the operating system.
    v IXF (integrated exchange format, PC version), in which most of the table attributes, as well as any existing indexes, are saved in the IXF file, except when columns are specified in the SELECT statement. With this format, the table can be recreated, while with the other file formats, the table must already exist before data can be imported into it.

select-statement
    Specifies the SELECT or XQUERY statement that will return the data to be exported. If the statement causes an error, a message is written to the message file (or to standard output). If the error code is one of SQL0012W, SQL0347W, SQL0360W, SQL0437W, or SQL1824W, the export operation continues; otherwise, it stops.

TO filename
    Specifies the name of the file to which data is to be exported. If the complete path to the file is not specified, the export utility uses the current directory and the default drive as the destination. If the name of a file that already exists is specified, the export utility overwrites the contents of the file; it does not append the information.

XMLFILE filename
    Specifies one or more base file names for the XML files. When name space is exhausted for the first name, the second name is used, and so on. When creating XML files during an export operation, file names are constructed by appending the current base name from this list to the current path (from xml-path), appending a 3-digit sequence number, and appending the three-character identifier xml. For example, if the current XML path is the directory /u/foo/xml/path/, and the current XML file name is bar, the XML files created will be /u/foo/xml/path/bar.001.xml, /u/foo/xml/path/bar.002.xml, and so on.

XML TO xml-path
    Specifies one or more paths to directories in which the XML files are to be stored. There will be at least one file per XML path, and each file will contain at least one XQuery Data Model (QDM) instance. If more than one path is specified, then QDM instances are distributed evenly among the paths.

XMLSAVESCHEMA
    Specifies that XML schema information should be saved for all XML columns. For each exported XML document that was validated against an XML schema when it was inserted, the fully qualified SQL identifier of that schema will be stored as an (SCH) attribute inside the corresponding XML Data Specifier (XDS). If the exported document was not validated against an XML schema or the schema object no longer exists in the database, an SCH attribute will not be included in the corresponding XDS. The schema and name portions of the SQL identifier are stored as the OBJECTSCHEMA and OBJECTNAME values in the row of the SYSCAT.XSROBJECTS catalog table corresponding to the XML schema.
The XMLSAVESCHEMA option is not compatible with XQuery sequences that do not produce well-formed XML documents.

Examples:

The following example shows how to export information from the STAFF table in the SAMPLE database to the file myfile.ixf. The output will be in IXF format. You must be connected to the SAMPLE database before issuing the command. The index definitions (if any) will be stored in the output file except when the database connection is made through DB2 Connect.
db2 export to myfile.ixf of ixf messages msgs.txt select * from staff
The following example shows how to export the information about employees in Department 20 from the STAFF table in the SAMPLE database. The output will be in IXF format and will go into the awards.ixf file. You must first connect to the SAMPLE database before issuing the command. Also, the actual column name in the table is dept instead of department.
db2 export to awards.ixf of ixf messages msgs.txt select * from staff where dept = 20
The following example shows how to export LOBs to a DEL file, specifying a second directory for files that might not fit into the first directory:
db2 export to myfile.del of del lobs to /db2exp1/, /db2exp2/ modified by lobsinfile select * from emp_photo
The following example shows how to export data to a DEL file, using a single quotation mark as the string delimiter, a semicolon as the column delimiter, and a comma as the decimal point. The same convention should be used when importing data back into the database:
db2 export to myfile.del of del modified by chardel'' coldel; decpt, select * from staff
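The effect of these modifiers on one output row can be sketched as follows (an illustrative, simplified Python sketch, not DB2 code; the sample row values are hypothetical, and real DEL output formats decimals differently):

```python
# Sketch: render one DEL row with chardel'' (single-quote string
# delimiter), coldel; (semicolon column delimiter), decpt, (comma
# decimal point).
def del_row(values, chardel="'", coldel=";", decpt=","):
    fields = []
    for v in values:
        if isinstance(v, float):
            # swap the period decimal point for the decpt character
            fields.append(str(v).replace(".", decpt))
        elif isinstance(v, str):
            # wrap character data in the chardel delimiter
            fields.append(f"{chardel}{v}{chardel}")
        else:
            fields.append(str(v))
    return coldel.join(fields)

print(del_row(["Sanders", 20, 98.5]))
# 'Sanders';20;98,5
```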
Usage notes:
v Be sure to complete all table operations and release all locks before starting an export operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.
v Table aliases can be used in the SELECT statement.
v The messages placed in the message file include the information returned from the message retrieval service. Each message begins on a new line.
v The export utility produces a warning message whenever a character column with a length greater than 254 is selected for export to DEL format files.
v PC/IXF import should be used to move data between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program, fields containing the row separators will shrink or expand.
v The file copying step is not necessary if the source and the target databases are both accessible from the same client.
v DB2 Connect can be used to export tables from DRDA servers such as DB2 for OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF export is supported.
v The export utility will not create multiple-part PC/IXF files when invoked from an AIX system.
v The export utility will store the NOT NULL WITH DEFAULT attribute of the table in an IXF file if the SELECT statement provided is in the form SELECT * FROM tablename.
v When exporting typed tables, subselect statements can only be expressed by specifying the target table name and the WHERE clause. Fullselect and select-statement cannot be specified when exporting a hierarchy.
v For file formats other than IXF, it is recommended that the traversal order list be specified, because it tells DB2 how to traverse the hierarchy, and what sub-tables to export. If this list is not specified, all tables in the hierarchy are exported in the default order, which is the order given by the OUTER function.
v Use the same traverse order during an import operation. The load utility does not support loading hierarchies or sub-hierarchies.
v When exporting data from a table that has protected rows, the LBAC credentials held by the session authorization ID might limit the rows that are exported. Rows that the session authorization ID does not have read access to will not be exported. No error or warning is given.
v If the LBAC credentials held by the session authorization ID do not allow reading from one or more protected columns included in the export, then the export fails and an error (SQLSTATE 42512) is returned.
v Export packages are bound using the DATETIME ISO format; thus, all date/time/timestamp values are converted into ISO format when cast to a string representation. Since the CLP packages are bound using the DATETIME LOC format (locale-specific format), you may see inconsistent behaviour between CLP and export if the CLP DATETIME format is different from ISO.
For instance, the following SELECT statement returns the expected results:
db2 select col2 from tab1 where char(col2)='05/10/2005';

COL2
----------
05/10/2005
05/10/2005
05/10/2005

  3 record(s) selected.
But an export command using the same select clause will not:
db2 export to test.del of del select col2 from test where char(col2)='05/10/2005';

Number of rows exported: 0
Now, replacing the LOCALE date format with ISO format gives the expected results:
db2 export to test.del of del select col2 from test where char(col2)='2005-05-10';

Number of rows exported: 3
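The mismatch between the two bind formats can be sketched as follows (an illustrative Python sketch, not DB2 code): the same date renders as two different strings, so a literal that matches under the locale format fails to match under ISO.

```python
# Sketch: the same date cast to a string under two DATETIME formats.
import datetime

d = datetime.date(2005, 5, 10)
iso_form = d.strftime("%Y-%m-%d")   # what an ISO-bound package produces
loc_form = d.strftime("%m/%d/%Y")   # what a LOC-bound (US locale) package produces

print(iso_form)   # 2005-05-10
print(loc_form)   # 05/10/2005
# A WHERE clause comparing against '05/10/2005' matches loc_form only;
# the export utility, bound with ISO, produces iso_form instead.
```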
Related concepts:
v "Export Overview" in Data Movement Utilities Guide and Reference
v "Privileges, authorities and authorization required to use export" in Data Movement Utilities Guide and Reference
Chapter 3. CLP Commands
Related tasks:
v "Exporting data" in Data Movement Utilities Guide and Reference

Related reference:
v "ADMIN_CMD procedure: Run administrative commands" in Administrative SQL Routines and Views
v "EXPORT command using the ADMIN_CMD procedure" in Administrative SQL Routines and Views
v "Export Sessions - CLP Examples" in Data Movement Utilities Guide and Reference
v "LOB and XML file behavior with regard to import and export" in Data Movement Utilities Guide and Reference
FORCE APPLICATION
Forces local or remote users or applications off the system to allow for maintenance on a server.

Attention: If an operation that cannot be interrupted (RESTORE DATABASE, for example) is forced, the operation must be successfully re-executed before the database becomes available.

Scope:

This command affects all database partitions that are listed in the $HOME/sqllib/db2nodes.cfg file. In a partitioned database environment, this command does not have to be issued from the coordinator database partition of the application being forced. It can be issued from any node (database partition server) in the partitioned database environment.

Authorization:

One of the following:
v sysadm
v sysctrl
v sysmaint

Required connection:

Instance. To force users off a remote server, it is first necessary to attach to that server. If no attachment exists, this command is executed locally.

Command syntax:
FORCE APPLICATION {ALL | (application-handle [,application-handle ...])} [MODE ASYNC]
Command parameters:

APPLICATION ALL
  All applications will be disconnected from the database.

application-handle
  Specifies the agent to be terminated. List the values using the LIST APPLICATIONS command.

MODE ASYNC
  The command does not wait for all specified users to be terminated before returning; it returns as soon as the function has been successfully issued or an error (such as invalid syntax) is discovered. This is the only mode that is currently supported.

Examples:
The following example forces two users, with application-handle values of 41408 and 55458, to disconnect from the database:
db2 force application ( 41408, 55458 )
Usage notes:

The database manager remains active so that subsequent database manager operations can be handled without the need for db2start.

To preserve database integrity, only users who are idling or executing interruptible database operations can be terminated. Users creating a database cannot be forced.

After a FORCE has been issued, the database will still accept requests to connect. Additional forces might be required to completely force all users off.

Related reference:
v "LIST APPLICATIONS" on page 520
v "ATTACH" on page 344
v "sqlefrce API - Force users and applications off the system" in Administrative API Reference
v "FORCE APPLICATION command using the ADMIN_CMD procedure" in Administrative SQL Routines and Views
v "APPLICATIONS administrative view: Retrieve connected database application information" in Administrative SQL Routines and Views
GET ADMIN CONFIGURATION

GET ADMIN {CONFIGURATION|CONFIG|CFG} [FOR NODE node-name [USER username USING password]]
Command parameters:

FOR NODE node-name
  Enter the name of the administration node to view DAS configuration parameters there.

USER username USING password
  If connection to the node requires user name and password, enter this information.

Examples:

The following is sample output from GET ADMIN CONFIGURATION:
 DAS Administration Authority Group Name   (DASADM_GROUP) = ADMINISTRATORS
 DAS Discovery Mode                            (DISCOVER) = SEARCH
 Name of the DB2 Server System                (DB2SYSTEM) = swalkty
 Java Development Kit Installation Path DAS    (JDK_PATH) = e:\sqllib\java\jdk
 DAS Code Page                             (DAS_CODEPAGE) = 0
 DAS Territory                            (DAS_TERRITORY) = 0
 Location of Contact List                  (CONTACT_HOST) = hostA.ibm.ca
 Execute Expired Tasks                    (EXEC_EXP_TASK) = NO
 Scheduler Mode                            (SCHED_ENABLE) = ON
 SMTP Server                                (SMTP_SERVER) = smtp1.ibm.ca
 Tools Catalog Database                     (TOOLSCAT_DB) = CCMD
 Tools Catalog Database Instance          (TOOLSCAT_INST) = DB2
 Tools Catalog Database Schema          (TOOLSCAT_SCHEMA) = TOOLSCAT
 Scheduler User ID                                        = db2admin
Usage notes:

If an error occurs, the information returned is not valid. If the configuration file is invalid, an error message is returned. The user must install the DAS again to recover.

To set the configuration parameters to the default values shipped with the DAS, use the RESET ADMIN CONFIGURATION command.

Related reference:
v "RESET ADMIN CONFIGURATION" on page 663
v "UPDATE ADMIN CONFIGURATION" on page 756
v "Configuration parameters summary" in Performance Guide
GET ALERT CONFIGURATION

GET ALERT {CONFIGURATION|CONFIG|CFG} FOR
  {DATABASE MANAGER | DB MANAGER | DBM |
   DATABASES | CONTAINERS | TABLESPACES |
   DATABASE ON database-alias |
   TABLESPACE name ON database-alias |
   CONTAINER name FOR tablespace-id ON database-alias}
  [DEFAULT]
Command parameters:

DATABASE MANAGER
  Retrieves alert settings for the database manager.

DATABASES
  Retrieves alert settings for all databases managed by the database manager. These are the settings that apply to all databases that do not have custom settings. Custom settings are defined using the DATABASE ON database-alias clause.

CONTAINERS
  Retrieves alert settings for all table space containers managed by the database manager. These are the settings that apply to all table space containers that do not have custom settings. Custom settings are defined using the CONTAINER name ON database-alias clause.

TABLESPACES
  Retrieves alert settings for all table spaces managed by the database manager. These are the settings that apply to all table spaces that do not have custom settings. Custom settings are defined using the TABLESPACE name ON database-alias clause.

DEFAULT
  Specifies that the install defaults are to be retrieved.
 Indicator Name                 = db2.sort_privmem_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 90
 Alarm                          = 100
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = ((db2.sort_heap_allocated/sheapthres)*100);
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db2.mon_heap_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 85
 Alarm                          = 95
 Unit                           = %
The following is typical output resulting from a request for configuration information:
DB2 GET ALERT CFG FOR DATABASES

Alert Configuration

 Indicator Name                 = db.db_op_status
 Default                        = Yes
 Type                           = State-based
 Sensitivity                    = 0
 Formula                        = db.db_status;
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.sort_shrmem_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 70
 Alarm                          = 85
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = ((db.sort_shrheap_allocated/sheapthres_shr)*100);
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.spilled_sorts
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 30
 Alarm                          = 50
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = ((delta(db.sort_overflows,10))/(delta(db.total_sorts,10)+1)*100);
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.max_sort_shrmem_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 60
 Alarm                          = 30
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = ((db.max_shr_sort_mem/sheapthres_shr)*100);
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.log_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 75
 Alarm                          = 85
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = (db.total_log_used/(db.total_log_used+db.total_log_available))*100;
 Actions                        = Disabled
 Indicator Name                 = db.locklist_util
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 75
 Alarm                          = 85
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = (db.lock_list_in_use/(locklist*4096))*100;
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.lock_escal_rate
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 5
 Alarm                          = 10
 Unit                           = Lock escalations per hour
 Sensitivity                    = 0
 Formula                        = delta(db.lock_escals);
 Actions                        = Disabled
 Threshold or State checking    = Enabled

 Indicator Name                 = db.apps_waiting_locks
 Default                        = Yes
 Type                           = Threshold-based
 Warning                        = 50
 Alarm                          = 70
 Unit                           = %
 Sensitivity                    = 0
 Formula                        = (db.locks_waiting/db.appls_cur_cons)*100;
Related tasks:
v "Configuring health indicators using a client application" in System Monitor Guide and Reference

Related reference:
v "db2GetAlertCfg API - Get the alert configuration settings for the health indicators" in Administrative API Reference
v "HEALTH_GET_ALERT_ACTION_CFG table function: Retrieve health alert action configuration settings" in Administrative SQL Routines and Views
v "HEALTH_GET_ALERT_CFG table function: Retrieve health alert configuration settings" in Administrative SQL Routines and Views
GET AUTHORIZATIONS
Reports the authorities of the current user from values found in the database configuration file and the authorization system catalog view (SYSCAT.DBAUTH).

Authorization:

None

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established.

Command syntax:
GET AUTHORIZATIONS
Command parameters:

None

Examples:

The following is sample output from GET AUTHORIZATIONS:
 Administrative Authorizations for Current User

 Direct SYSADM authority                    = NO
 Direct SYSCTRL authority                   = NO
 Direct SYSMAINT authority                  = NO
 Direct DBADM authority                     = YES
 Direct CREATETAB authority                 = YES
 Direct BINDADD authority                   = YES
 Direct CONNECT authority                   = YES
 Direct CREATE_NOT_FENC authority           = YES
 Direct IMPLICIT_SCHEMA authority           = YES
 Direct LOAD authority                      = YES
 Direct QUIESCE_CONNECT authority           = YES
 Direct CREATE_EXTERNAL_ROUTINE authority   = YES

 Indirect SYSADM authority                  = YES
 Indirect SYSCTRL authority                 = NO
 Indirect SYSMAINT authority                = NO
 Indirect DBADM authority                   = NO
 Indirect CREATETAB authority               = YES
 Indirect BINDADD authority                 = YES
 Indirect CONNECT authority                 = YES
 Indirect CREATE_NOT_FENC authority         = NO
 Indirect IMPLICIT_SCHEMA authority         = YES
 Indirect LOAD authority                    = NO
 Indirect QUIESCE_CONNECT authority         = NO
 Indirect CREATE_EXTERNAL_ROUTINE authority = NO
Usage notes:
v Direct authorities are acquired by explicit commands that grant the authorities to a user ID. Indirect authorities are based on authorities acquired by the groups to which a user belongs. (All users belong to a special group called PUBLIC.)
v The GET AUTHORIZATIONS command does not display whether or not the current user holds SECADM authority. To find out who holds SECADM authority, use the following query:
SELECT GRANTEE FROM SYSCAT.DBAUTH WHERE SECURITYADMAUTH = 'Y'
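The way the direct and indirect columns in the sample output combine can be sketched as follows (an illustrative Python sketch, not a DB2 API; the values mirror a few rows of the sample output above):

```python
# Sketch: a user effectively holds an authority if it was granted
# directly or acquired indirectly through a group such as PUBLIC.
direct   = {"SYSADM": False, "DBADM": True,  "LOAD": True}
indirect = {"SYSADM": True,  "DBADM": False, "LOAD": False}

effective = {auth: direct[auth] or indirect[auth] for auth in direct}
print(effective)
# {'SYSADM': True, 'DBADM': True, 'LOAD': True}
```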
Related concepts:
v "Authorization" in Administration Guide: Planning

Related reference:
v "SYSCAT.DBAUTH catalog view" in SQL Reference, Volume 1
GET CLI CONFIGURATION

GET CLI {CONFIGURATION|CONFIG|CFG} [AT GLOBAL LEVEL] [FOR SECTION section-name]
Command parameters:

AT GLOBAL LEVEL
  Displays the default CLI configuration parameters in the LDAP directory. This parameter is only valid on Windows operating systems.

FOR SECTION section-name
  Name of the section whose keywords are to be listed. If not specified, all sections are listed.

Examples:

The following sample output represents the contents of a db2cli.ini file that has two sections:
[tstcli1x]
uid=userid
pwd=password
autocommit=0
TableType="TABLE,VIEW,SYSTEM TABLE"

[tstcli2x]
SchemaList="OWNER1,OWNER2,CURRENT SQLID"
Usage notes:

The section name specified on this command is not case sensitive. For example, if the section name in the db2cli.ini file (delimited by square brackets) is in lowercase, and the section name specified on the command is in uppercase, the correct section will be listed.
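A case-insensitive section lookup of this kind can be sketched as follows (an illustrative Python sketch, not DB2 code; the section content is taken from the example above):

```python
# Sketch: match a requested db2cli.ini section name regardless of case.
import configparser

ini = """
[tstcli1x]
uid=userid
autocommit=0
"""

parser = configparser.ConfigParser()
parser.read_string(ini)

def find_section(parser, requested):
    # "TSTCLI1X" on the command line still matches [tstcli1x] in the file
    for name in parser.sections():
        if name.lower() == requested.lower():
            return dict(parser[name])
    return None

print(find_section(parser, "TSTCLI1X"))
# {'uid': 'userid', 'autocommit': '0'}
```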
GET CONNECTION STATE

Command parameters:

None

Examples:

The following is sample output from GET CONNECTION STATE:
 Database Connection State

 Connection state       = Connectable and Connected
 Connection mode        = SHARE
 Local database alias   = SAMPLE
 Database name          = SAMPLE
 Hostname               = montero
 Service name           = 29384
Usage notes:

This command does not apply to type 2 connections.

Related reference:
v "SET CLIENT" on page 716
v "UPDATE ALTERNATE SERVER FOR DATABASE" on page 762
GET CONTACTGROUP
Returns the contacts included in a single contact group that is defined on the local system. A contact is a user to whom the Scheduler and Health Monitor send messages. You create named groups of contacts with the ADD CONTACTGROUP command.

Authorization:

None.

Required connection:

None. Local execution only: this command cannot be used with a remote connection.

Command syntax:
GET CONTACTGROUP name
Command parameters:

CONTACTGROUP name
  The name of the group for which you would like to retrieve the contacts.

Examples:

GET CONTACTGROUP support
Description
-------------------------------------
Foo Widgets broadloom support unit

Name           Type
-------------  --------------
joe            contact
support        contact group
joline         contact
Related reference:
v "db2GetContactGroup API - Get the list of contacts in a single contact group to whom notification messages can be sent" in Administrative API Reference
v "CONTACTGROUPS administrative view: Retrieve the list of contact groups" in Administrative SQL Routines and Views
GET CONTACTGROUPS
The command provides a list of contact groups, which can be either defined locally on the system or in a global list. A contact group is a list of addresses to which monitoring processes such as the Scheduler and Health Monitor can send messages. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. You create named groups of contacts with the ADD CONTACTGROUP command.

Authorization:

None

Required connection:

None

Command syntax:
GET CONTACTGROUPS
Command parameters:

None

Examples:

In the following example, the command GET CONTACTGROUPS is issued. The result is as follows:
Name      Description
--------  -------------------------------------
support   Foo Widgets broadloom support unit
service   Foo Widgets service and support unit
Related reference:
v "db2GetContactGroups API - Get the list of contact groups to whom notification messages can be sent" in Administrative API Reference
v "CONTACTGROUPS administrative view: Retrieve the list of contact groups" in Administrative SQL Routines and Views
GET CONTACTS
Returns the list of contacts defined on the local system. Contacts are users to whom the monitoring processes such as the Scheduler and Health Monitor send notifications or messages. To create a contact, use the ADD CONTACT command.

Authorization:

None.

Required connection:

None.

Command syntax:
GET CONTACTS
Related reference:
v "db2GetContacts API - Get the list of contacts to whom notification messages can be sent" in Administrative API Reference
v "CONTACTS administrative view: Retrieve list of contacts" in Administrative SQL Routines and Views
GET DATABASE CONFIGURATION

GET {DATABASE|DB} {CONFIGURATION|CONFIG|CFG} [FOR database-alias] [SHOW DETAIL]
Command parameters:

FOR database-alias
  Specifies the alias of the database whose configuration is to be displayed. You do not need to specify the alias if a connection to the database already exists.

SHOW DETAIL
  Displays detailed information showing the current value of database configuration parameters as well as the value of the parameters the next time you activate the database. This option lets you see the result of dynamic changes to configuration parameters.

Examples:

Notes:
1. Output on different platforms might show small variations reflecting platform-specific parameters.
2. Parameters with keywords enclosed by parentheses can be changed by the UPDATE DATABASE CONFIGURATION command.
3. Fields that do not contain keywords are maintained by the database manager and cannot be updated.

The following is sample output from GET DATABASE CONFIGURATION (issued on AIX):
Database Configuration for Database mick Database configuration release level = 0x0a00
 Database release level                                  =
 Database territory                                      =
 Database code page                                      =
 Database code set                                       =
 Database country/region code                            =
 Database collating sequence                             =
 Alternate collating sequence              (ALT_COLLATE) =
 Database page size                                      =

 Dynamic SQL Query management           (DYN_QUERY_MGMT) =
 Discovery support for this database       (DISCOVER_DB) =

 Default query optimization class         (DFT_QUERYOPT) =
 Degree of parallelism                      (DFT_DEGREE) =
 Continue upon arithmetic exceptions   (DFT_SQLMATHWARN) =
 Default refresh age                   (DFT_REFRESH_AGE) =
 Default maintained table types for opt (DFT_MTTB_TYPES) =
 Number of frequent values retained     (NUM_FREQVALUES) =
 Number of quantiles retained            (NUM_QUANTILES) =

 Backup pending                                          = NO
 Database is consistent                                  = YES
 Rollforward pending                                     = NO
 Restore pending                                         = NO
 Multi-page file allocation enabled                      = YES
 Log retain for recovery status                          = NO
 User exit for logging status                            = NO

 Data Links Token Expiry Interval (sec)     (DL_EXPINT)  = 60
 Data Links Write Token Init Expiry Intvl (DL_WT_IEXPINT) = 60
 Data Links Number of Copies            (DL_NUM_COPIES)  = 1
 Data Links Time after Drop (days)       (DL_TIME_DROP)  = 1
 Data Links Token in Uppercase               (DL_UPPER)  = NO
 Data Links Token Algorithm                  (DL_TOKEN)  = MAC0

 Database heap (4KB)                            (DBHEAP) = 1200
 Size of database shared memory (4KB)  (DATABASE_MEMORY) = AUTOMATIC (11516)
 Catalog cache size (4KB)              (CATALOGCACHE_SZ) = 64
 Log buffer size (4KB)                        (LOGBUFSZ) = 8
 Utilities heap size (4KB)                (UTIL_HEAP_SZ) = 5000
 Buffer pool size (pages)                     (BUFFPAGE) = 1000
 Max storage for lock list (4KB)              (LOCKLIST) = 128
 Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 30000
 Percent of mem for appl. group heap   (GROUPHEAP_RATIO) = 70
 Max appl. control heap size (4KB)     (APP_CTL_HEAP_SZ) = 128
 Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = (SHEAPTHRES)
 Sort list heap (4KB)                         (SORTHEAP) = 256
 SQL statement heap (4KB)                     (STMTHEAP) = 2048
 Default application heap (4KB)             (APPLHEAPSZ) = 128
 Package cache size (4KB)                   (PCKCACHESZ) = (MAXAPPLS*8)
 Statistics heap size (4KB)               (STAT_HEAP_SZ) = 4384

 Interval for checking deadlock (ms)         (DLCHKTIME) = 10000
 Percent. of lock lists per application       (MAXLOCKS) = 10
 Lock timeout (sec)                        (LOCKTIMEOUT) = -1

 Changed pages threshold                (CHNGPGS_THRESH) = 60
 Number of asynchronous page cleaners   (NUM_IOCLEANERS) = 1
 Number of I/O servers                   (NUM_IOSERVERS) = 3
 Index sort flag                             (INDEXSORT) = YES
 Sequential detect flag                      (SEQDETECT) = YES
 Default prefetch size (pages)         (DFT_PREFETCH_SZ) =

 Track modified pages                         (TRACKMOD) = NO

 Default number of containers                            = 1
 Default tablespace extentsize (pages)   (DFT_EXTENT_SZ) = 32

 Max number of active applications            (MAXAPPLS) = AUTOMATIC (40)
 Average number of active applications       (AVG_APPLS) = 1
 Max DB files open per application            (MAXFILOP) = 64

 Log file size (4KB)                         (LOGFILSIZ) = 1000
 Number of primary log files                (LOGPRIMARY) = 3
 Number of secondary log files               (LOGSECOND) = 2
 Changed path to log files                  (NEWLOGPATH) =
 Path to log files                                       = /home/db2inst/db2inst/NODE0000/SQL00001/SQLOGDIR/
 Overflow log path                     (OVERFLOWLOGPATH) =
 Mirror log path                         (MIRRORLOGPATH) =
 First active log file                                   =
 Block log on disk full                (BLK_LOG_DSK_FUL) = NO
 Percent of max primary log space by transaction (MAX_LOG) = 0
 Num. of active log files for 1 active UOW (NUM_LOG_SPAN) = 0

 Group commit count                          (MINCOMMIT) = 1
 Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
 Log retain for recovery enabled             (LOGRETAIN) = OFF
 User exit for logging enabled                (USEREXIT) = OFF

 HADR database role                                      = STANDARD
 HADR local host name                  (HADR_LOCAL_HOST) =
 HADR local service name                (HADR_LOCAL_SVC) =
 HADR remote host name                (HADR_REMOTE_HOST) =
 HADR remote service name              (HADR_REMOTE_SVC) =
 HADR instance name of remote server  (HADR_REMOTE_INST) =
 HADR timeout value                       (HADR_TIMEOUT) = 120
 HADR log write synchronization mode     (HADR_SYNCMODE) = NEARSYNC

 First log archive method                 (LOGARCHMETH1) = OFF
 Options for logarchmeth1                  (LOGARCHOPT1) =
 Second log archive method                (LOGARCHMETH2) = OFF
 Options for logarchmeth2                  (LOGARCHOPT2) =
 Failover log archive path                (FAILARCHPATH) =
 Number of log archive retries on error   (NUMARCHRETRY) = 5
 Log archive retry Delay (secs)         (ARCHRETRYDELAY) = 20
 Vendor options                              (VENDOROPT) =

 Auto restart enabled                      (AUTORESTART) = ON
 Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
 Log pages during index build           (LOGINDEXBUILD) = OFF
 Default number of loadrec sessions   (DFT_LOADREC_SES) = 1
 Number of database backups to retain  (NUM_DB_BACKUPS) = 12
 Recovery history retention (days)    (REC_HIS_RETENTN) = 366

 TSM management class                   (TSM_MGMTCLASS) =
 TSM node name                           (TSM_NODENAME) =
 TSM owner                                  (TSM_OWNER) =
 TSM password                            (TSM_PASSWORD) =

 Automatic maintenance                      (AUTO_MAINT) =
 Automatic database backup              (AUTO_DB_BACKUP) =
 Automatic table maintenance           (AUTO_TBL_MAINT) =
 Automatic runstats                      (AUTO_RUNSTATS) =
 Automatic statistics profiling        (AUTO_STATS_PROF) =
 Automatic profile updates               (AUTO_PROF_UPD) =
 Automatic reorganization                   (AUTO_REORG) =
Usage notes:

If an error occurs, the information returned is not valid. If the configuration file is invalid, an error message is returned. The database must be restored from a backup version.

To set the database configuration parameters to the database manager defaults, use the RESET DATABASE CONFIGURATION command.

To retrieve information from all database partitions, use the SYSIBMADM.DBCFG administrative view.

Related tasks:
v "Changing node and database configuration files" in Administration Guide: Implementation
v "Configuring DB2 with configuration parameters" in Performance Guide

Related reference:
v "RESET DATABASE CONFIGURATION" on page 667
v "UPDATE DATABASE CONFIGURATION" on page 772
v "db2CfgGet API - Get the database manager or database configuration parameters" in Administrative API Reference
v "Configuration parameters summary" in Performance Guide
v "DBCFG administrative view: Retrieve database configuration parameter information" in Administrative SQL Routines and Views
GET DATABASE MANAGER CONFIGURATION

GET {DATABASE MANAGER|DB MANAGER|DBM} {CONFIGURATION|CONFIG|CFG} [SHOW DETAIL]
Command parameters:

SHOW DETAIL
  Displays detailed information showing the current value of database manager configuration parameters as well as the value of the parameters the next time you start the database manager. This option lets you see the result of dynamic changes to configuration parameters.

Examples:

Both node type and platform determine which configuration parameters are listed. The following is sample output from GET DATABASE MANAGER CONFIGURATION (issued on AIX):
 Database Manager Configuration

 Node type = Database Server with local clients

 Database manager configuration release level            = 0x0a00

 CPU speed (millisec/instruction)             (CPUSPEED) = 4.000000e-05

 Max number of concurrently active databases     (NUMDB) = 8
 Data Links support                          (DATALINKS) = NO
 Federated Database System Support           (FEDERATED) = NO
 Transaction processor monitor name        (TP_MON_NAME) =

 Default charge-back account           (DFT_ACCOUNT_STR) =

 Java Development Kit installation path       (JDK_PATH) = /usr/java131

 Diagnostic error capture level              (DIAGLEVEL) = 3
 Notify Level                              (NOTIFYLEVEL) = 3
 Diagnostic data directory path               (DIAGPATH) =

 Default database monitor switches
   Buffer pool                         (DFT_MON_BUFPOOL) = OFF
   Lock                                   (DFT_MON_LOCK) = OFF
   Sort                                   (DFT_MON_SORT) = OFF
Chapter 3. CLP Commands
463
 Client Userid-Password Plugin          (CLNT_PW_PLUGIN) =
 Client Kerberos Plugin                (CLNT_KRB_PLUGIN) =
 Group Plugin                             (GROUP_PLUGIN) =
 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Plugin Mode                    (SRV_PLUGIN_MODE) = UNFENCED
 Server List of GSS Plugins      (SRVCON_GSSPLUGIN_LIST) =
 Server Userid-Password Plugin        (SRVCON_PW_PLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = SERVER
 Cataloging allowed without authority   (CATALOG_NOAUTH) = YES
 Trust all clients                      (TRUST_ALLCLNTS) = YES
 Trusted client authentication          (TRUST_CLNTAUTH) = CLIENT
 Bypass federated authentication            (FED_NOAUTH) = NO

 Default database path                       (DFTDBPATH) = /home/db2inst

 Database monitor heap size (4KB)          (MON_HEAP_SZ) = 90
 Java Virtual Machine heap size (4KB)     (JAVA_HEAP_SZ) = 512
 Audit buffer size (4KB)                  (AUDIT_BUF_SZ) = 0
 Size of instance shared memory (4KB)  (INSTANCE_MEMORY) = AUTOMATIC
 Backup buffer default size (4KB)            (BACKBUFSZ) = 1024
 Restore buffer default size (4KB)           (RESTBUFSZ) = 1024

 Sort heap threshold (4KB)                  (SHEAPTHRES) = 20000

 Directory cache support                     (DIR_CACHE) = YES

 Application support layer heap size (4KB)   (ASLHEAPSZ) = 15
 Max requester I/O block size (bytes)         (RQRIOBLK) = 32767
 Query heap size (4KB)                   (QUERY_HEAP_SZ) = 1000

 Workload impact by throttled utilities(UTIL_IMPACT_LIM) = 10

 Priority of agents                           (AGENTPRI) = SYSTEM
 Max number of existing agents               (MAXAGENTS) = 200
 Agent pool size                        (NUM_POOLAGENTS) = 100(calculated)
 Initial number of agents in pool       (NUM_INITAGENTS) = 0
 Max number of coordinating agents     (MAX_COORDAGENTS) = MAXAGENTS
 Max no. of concurrent coordinating agents  (MAXCAGENTS) = MAX_COORDAGENTS
 Max number of client connections      (MAX_CONNECTIONS) = MAX_COORDAGENTS

 Keep fenced process                        (KEEPFENCED) = YES
 Number of pooled fenced processes         (FENCED_POOL) = MAX_COORDAGENTS
 Initial number of fenced processes     (NUM_INITFENCED) = 0

 Index re-creation time and redo index build  (INDEXREC) = RESTART

 Transaction manager database name         (TM_DATABASE) = 1ST_CONN
 Transaction resync interval (sec)     (RESYNC_INTERVAL) = 180

 SPM name                                     (SPM_NAME) =
 SPM log size                          (SPM_LOG_FILE_SZ) = 256
 SPM resync agent limit                 (SPM_MAX_RESYNC) = 20
 SPM log path                             (SPM_LOG_PATH) =

 TCP/IP Service name                          (SVCENAME) =
 Discovery mode                               (DISCOVER) = SEARCH
 Discover server instance                (DISCOVER_INST) = ENABLE

 Maximum query degree of parallelism   (MAX_QUERYDEGREE) = ANY
 Enable intra-partition parallelism     (INTRA_PARALLEL) = NO
No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = AUTOMATIC No. of int. communication channels (FCM_NUM_CHANNELS) = AUTOMATIC
The following output sample shows the information displayed when you specify the SHOW DETAIL option. The value that appears in the Delayed Value column is the value that will be in effect the next time you start the database manager instance.
Database Manager Configuration

     Node type = Database Server with local clients

 Description                                  Parameter    Current Value      Delayed Value
 ------------------------------------------------------------------------------------------
 Database manager configuration release level           = 0x0a00
 CPU speed (millisec/instruction)            (CPUSPEED) = 4.000000e-05       4.000000e-05
 Max number of concurrently active databases    (NUMDB) = 8                  8
 Data Links support                         (DATALINKS) = NO                 NO
 Federated Database System Support          (FEDERATED) = NO                 NO
 Transaction processor monitor name       (TP_MON_NAME) =
 Default charge-back account          (DFT_ACCOUNT_STR) =
 Java Development Kit installation path      (JDK_PATH) = /wsdb/v81/bldsupp  /usr/java131
                                                          /AIX/jdk1.3.1
 Diagnostic error capture level             (DIAGLEVEL) = 3                  3
 Notify Level                             (NOTIFYLEVEL) = 3                  3
 Diagnostic data directory path              (DIAGPATH) =
 Default database monitor switches
   Buffer pool                        (DFT_MON_BUFPOOL) = OFF                OFF
   Lock                                  (DFT_MON_LOCK) = OFF                OFF
   Sort                                  (DFT_MON_SORT) = OFF                OFF
   Statement                             (DFT_MON_STMT) = OFF                OFF
   Table                                (DFT_MON_TABLE) = OFF                OFF
   Timestamp                        (DFT_MON_TIMESTAMP) = ON                 ON
   Unit of work                           (DFT_MON_UOW) = OFF                OFF
 Monitor health of instance and databases  (HEALTH_MON) = ON                 ON
 SYSADM group name                       (SYSADM_GROUP) = BUILD              BUILD
 SYSCTRL group name                     (SYSCTRL_GROUP) =
 SYSMAINT group name                   (SYSMAINT_GROUP) =
 SYSMON group name                       (SYSMON_GROUP) =
 Client Userid-Password Plugin         (CLNT_PW_PLUGIN) =
 Client Kerberos Plugin               (CLNT_KRB_PLUGIN) =
 Group Plugin                            (GROUP_PLUGIN) =
 GSS Plugin for Local Authorization   (LOCAL_GSSPLUGIN) =
 Server Plugin Mode                   (SRV_PLUGIN_MODE) = UNFENCED           UNFENCED
 Server List of GSS Plugins     (SRVCON_GSSPLUGIN_LIST) =
 Server Userid-Password Plugin       (SRVCON_PW_PLUGIN) =
 Server Connection Authentication         (SRVCON_AUTH) = NOT_SPECIFIED      NOT_SPECIFIED
 Database manager authentication       (AUTHENTICATION) = SERVER             SERVER
 Cataloging allowed without authority  (CATALOG_NOAUTH) = YES                YES
 Trust all clients                     (TRUST_ALLCLNTS) = YES                YES
 Trusted client authentication         (TRUST_CLNTAUTH) = CLIENT             CLIENT
 Bypass federated authentication           (FED_NOAUTH) = NO                 NO
 Default database path                      (DFTDBPATH) = /home/db2inst      /home/db2inst
 Database monitor heap size (4KB)         (MON_HEAP_SZ) = 90                 90
 Java Virtual Machine heap size (4KB)    (JAVA_HEAP_SZ) = 512                512
 Audit buffer size (4KB)                 (AUDIT_BUF_SZ) = 0                  0
 Size of instance shared memory (4KB) (INSTANCE_MEMORY) = AUTOMATIC          AUTOMATIC(20)
 Backup buffer default size (4KB)           (BACKBUFSZ) = 1024               1024
 Restore buffer default size (4KB)          (RESTBUFSZ) = 1024               1024
 Sort heap threshold (4KB)                 (SHEAPTHRES) = 20000              20000
 Directory cache support                    (DIR_CACHE) = YES                YES
 Application support layer heap size (4KB)  (ASLHEAPSZ) = 15                 15
 Max requester I/O block size (bytes)        (RQRIOBLK) = 32767              32767
 Query heap size (4KB)                  (QUERY_HEAP_SZ) = 1000               1000
 Workload impact by throttled utilities(UTIL_IMPACT_LIM)= 10                 10
 Priority of agents                          (AGENTPRI) = SYSTEM             SYSTEM
 Max number of existing agents              (MAXAGENTS) = 200                200
 Agent pool size                       (NUM_POOLAGENTS) = 100(calculated)    100(calculated)
 Initial number of agents in pool      (NUM_INITAGENTS) = 0                  0
 Max number of coordinating agents    (MAX_COORDAGENTS) = 200                MAXAGENTS
 Max no. of concurrent coordinating agents (MAXCAGENTS) = 200                MAX_COORDAGENTS
 Max number of client connections     (MAX_CONNECTIONS) = 200                MAX_COORDAGENTS
 Keep fenced process                       (KEEPFENCED) = YES                YES
 Number of pooled fenced processes        (FENCED_POOL) = MAX_COORDAGENTS    MAX_COORDAGENTS
 Initial number of fenced processes    (NUM_INITFENCED) = 0                  0
 Index re-creation time and redo index build (INDEXREC) = RESTART            RESTART
 Transaction manager database name        (TM_DATABASE) = 1ST_CONN           1ST_CONN
 Transaction resync interval (sec)    (RESYNC_INTERVAL) = 180                180
 SPM name                                    (SPM_NAME) =
 SPM log size                         (SPM_LOG_FILE_SZ) = 256                256
 SPM resync agent limit                (SPM_MAX_RESYNC) = 20                 20
 SPM log path                            (SPM_LOG_PATH) =
 TCP/IP Service name                         (SVCENAME) =
 Discovery mode                              (DISCOVER) = SEARCH             SEARCH
 Discover server instance               (DISCOVER_INST) = ENABLE             ENABLE
 Maximum query degree of parallelism  (MAX_QUERYDEGREE) = ANY                ANY
 Enable intra-partition parallelism    (INTRA_PARALLEL) = NO                 NO
 No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS)= AUTOMATIC          AUTOMATIC
 No. of int. communication channels  (FCM_NUM_CHANNELS) = AUTOMATIC          AUTOMATIC
Usage notes:
- If an attachment to a remote instance or a different local instance exists, the database manager configuration parameters for the attached server are returned; otherwise, the local database manager configuration parameters are returned.
- If an error occurs, the information returned is invalid. If the configuration file is invalid, an error message is returned. The user must install the database manager again to recover.
- To set the configuration parameters to the default values shipped with the database manager, use the RESET DATABASE MANAGER CONFIGURATION command.
- The AUTOMATIC values indicated on GET DATABASE MANAGER CONFIGURATION SHOW DETAIL for FCM_NUM_BUFFERS and FCM_NUM_CHANNELS are the initial values at
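The attach-dependent behavior in the first usage note can be illustrated with a short CLP sequence. The instance name REMOTEINST and the user name are hypothetical:

```
db2 attach to remoteinst user db2inst1
db2 get dbm cfg
db2 detach
db2 get dbm cfg
```

While attached, the command reports the remote instance's parameters; after DETACH, the same command reports the local instance's parameters.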
GET DATABASE MANAGER MONITOR SWITCHES

Command syntax:

GET DATABASE MANAGER MONITOR SWITCHES
   [ AT DBPARTITIONNUM db-partition-number | GLOBAL ]

Command parameters:

AT DBPARTITIONNUM db-partition-number
   Specifies the database partition for which the status of the database manager monitor switches is to be displayed.

GLOBAL
   Returns an aggregate result for all database partitions in a partitioned database system.

Examples:

The following is sample output from GET DATABASE MANAGER MONITOR SWITCHES:
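A representative sketch of the output format follows; the switch states and timestamps shown here are illustrative, not from an actual system:

```
DBM System Monitor Information Collected

Switch list for db partition number 0
Buffer Pool Activity Information  (BUFFERPOOL) = ON  02-20-2003 16:04:30.070073
Lock Information                        (LOCK) = OFF
Sorting Information                     (SORT) = OFF
SQL Statement Information          (STATEMENT) = ON  02-20-2003 16:04:30.070073
Table Activity Information             (TABLE) = OFF
Take Timestamp Information         (TIMESTAMP) = ON  02-20-2003 16:04:30.070073
Unit of Work Information                 (UOW) = OFF
```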
Usage notes:

The recording switches BUFFERPOOL, LOCK, SORT, STATEMENT, TABLE, and UOW are off by default, but can be switched on using the UPDATE MONITOR SWITCHES command. If any of these switches are on, this command also displays the time stamp for when the switch was turned on.

The recording switch TIMESTAMP is on by default, but can be switched off using UPDATE MONITOR SWITCHES. When this switch is on, the system issues timestamp calls when collecting information for timestamp monitor elements. Examples of these elements are:
- agent_sys_cpu_time
- agent_usr_cpu_time
- appl_con_time
- con_elapsed_time
- con_response_time
- conn_complete_time
- db_conn_time
- elapsed_exec_time
- gw_comm_error_time
- gw_con_time
- gw_exec_time
- host_response_time
- last_backup
- last_reset
- lock_wait_start_time
- network_time_bottom
- network_time_top
- prev_uow_stop_time
- rf_timestamp
- ss_sys_cpu_time
- ss_usr_cpu_time
- status_change_time
- stmt_elapsed_time
- stmt_start
- stmt_stop
- stmt_sys_cpu_time
- stmt_usr_cpu_time
- uow_elapsed_time
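For instance, a switch that is off by default can be turned on from the CLP and then verified with this command. An instance attachment is assumed; the LOCK switch is used here only as an example:

```
db2 update monitor switches using lock on
db2 get database manager monitor switches
```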
GET DESCRIPTION FOR HEALTH INDICATOR

Command syntax:

GET DESCRIPTION FOR HEALTH INDICATOR shortname

Command parameters:

HEALTH INDICATOR shortname
   The name of the health indicator for which you would like to retrieve the description. Health indicator names consist of a two- or three-letter object identifier followed by a name that describes what the indicator measures. For example:
db2.sort_privmem_util
Examples: The following is sample output from the GET DESCRIPTION FOR HEALTH INDICATOR command.
GET DESCRIPTION FOR HEALTH INDICATOR db2.sort_privmem_util

DESCRIPTION FOR db2.sort_privmem_util

Sorting is considered healthy if there is sufficient heap space in which to perform sorting and sorts do not overflow unnecessarily. This indicator tracks the utilization of the private sort memory. If db2.sort_heap_allocated (system monitor data element) >= SHEAPTHRES (DBM configuration parameter), sorts may not be getting full sort heap as defined by the SORTHEAP parameter and an alert may be generated. The indicator is calculated using the formula:

   (db2.sort_heap_allocated / SHEAPTHRES) * 100

The Post Threshold Sorts snapshot monitor element measures the number of sorts that have requested heaps after the sort heap threshold has been exceeded. The value of this indicator, shown in the Additional Details, indicates the degree of severity of the problem for this health indicator.

The Maximum Private Sort Memory Used snapshot monitor element maintains a private sort memory high-water mark for the instance. The value of this indicator, shown in the Additional Information, indicates the maximum amount of private sort memory that has been in use at any one point in time since the instance was last recycled. This value can be used to help determine an appropriate value for SHEAPTHRES.
Related reference:
GET NOTIFICATION LIST

Command syntax:

GET NOTIFICATION LIST

Command parameters: None.

Examples:

Issuing the command GET NOTIFICATION LIST results in a report similar to the following:
Name                             Type
------------------------------   -------------
Joe Brown                        Contact
Support                          Contact group
Related reference:
- db2GetHealthNotificationList API - Get the list of contacts to whom health alert notifications can be sent in Administrative API Reference
- NOTIFICATIONLIST administrative view - Retrieve contact list for health notification in Administrative SQL Routines and Views
GET HEALTH SNAPSHOT

Command syntax:

GET HEALTH SNAPSHOT FOR
   { DATABASE MANAGER | ALL DATABASES
   | ALL ON database-alias | DATABASE ON database-alias }
   [ AT DBPARTITIONNUM db-partition-number | GLOBAL ]
   [ SHOW DETAIL ] [ WITH FULL COLLECTION ]
Command parameters:

DATABASE MANAGER
   Provides statistics for the active database manager instance.

ALL DATABASES
   Provides health states for all active databases on the current database partition.

ALL ON database-alias
   Provides health states and information about all table spaces and buffer pools for a specified database.

DATABASE ON database-alias
Collection object history is returned for all collection objects in ATTENTION or AUTOMATE FAILED state. The SHOW DETAIL option also provides additional contextual information that can be useful in understanding the value and alert state of the associated health indicator. For example, if the table space storage utilization health indicator is being used to determine how full the table space is, SHOW DETAIL will also report the rate at which the table space is growing.

WITH FULL COLLECTION
   Specifies that full collection information for all collection state-based health indicators is to be returned. This option considers both the name and size filter criteria. If a user requests a health snapshot with full collection, the report will show all tables that meet the name and size criteria in the policy. This can be used to validate which tables will be evaluated in a given refresh cycle. The output returned when this option is specified is for collection objects in NORMAL, AUTOMATED, ATTENTION, or AUTOMATE FAILED state. This option can be specified in conjunction with the SHOW DETAIL option. Without this option, a get health snapshot report displays only the tables that have been evaluated for automatic reorganization and require manual intervention (that is, a manual reorganization is needed or automation failed).

Examples:

The following is typical output resulting from a request for database manager information:
D:\>DB2 GET HEALTH SNAPSHOT FOR DBM

Database Manager Health Snapshot

Node name                                      =
Node type                                      = Enterprise Server Edition
                                                 with local and remote clients
Instance name                                  = DB2
Snapshot timestamp                             = 02/17/2004 12:39:44.818949

Number of database partitions in DB2 instance  = 1
Start Database Manager timestamp               = 02/17/2004 12:17:21.000119
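A request that combines the SHOW DETAIL and WITH FULL COLLECTION options described above might look like this from the CLP; the SAMPLE database name is illustrative:

```
db2 get health snapshot for database on sample show detail with full collection
```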
Related tasks:
- Capturing a database health snapshot using the CLP in System Monitor Guide and Reference

Related reference:
- Health monitor CLP commands in System Monitor Guide and Reference
- Health monitor interface mappings to logical data groups in System Monitor Guide and Reference
- Health monitor sample output in System Monitor Guide and Reference
GET INSTANCE
Returns the value of the DB2INSTANCE environment variable.

Authorization: None

Required connection: None

Command syntax:
GET INSTANCE
Command parameters: None

Examples:

The following is sample output from GET INSTANCE:
The current database manager instance is: smith
Related tasks:
- Setting the current instance environment variables in Administration Guide: Implementation

Related reference:
- sqlegins API - Get current instance in Administrative API Reference
GET MONITOR SWITCHES

Command syntax:

GET MONITOR SWITCHES
   [ AT DBPARTITIONNUM db-partition-number | GLOBAL ]

Command parameters:

AT DBPARTITIONNUM db-partition-number
   Specifies the database partition for which the status of the monitor switches is to be displayed.

GLOBAL
   Returns an aggregate result for all database partitions in a partitioned database system.

Examples:

The following is sample output from GET MONITOR SWITCHES:
Monitor Recording Switches

Switch list for db partition number 1
Buffer Pool Activity Information  (BUFFERPOOL) = ON   02-20-2003 16:04:30.070073
Lock Information                        (LOCK) = OFF
Sorting Information                     (SORT) = OFF
SQL Statement Information          (STATEMENT) = ON   02-20-2003 16:04:30.070073
Table Activity Information             (TABLE) = OFF
Take Timestamp Information         (TIMESTAMP) = ON   02-20-2003 16:04:30.070073
Unit of Work Information                 (UOW) = ON   02-20-2003 16:04:30.070073
Usage notes:
If the TIMESTAMP switch is off, timestamp operating system calls are not issued to determine these elements and these elements will contain zero. Turning this switch off becomes important as CPU utilization approaches 100%; when this occurs, the CPU time required for issuing timestamps increases dramatically. Compatibilities: For compatibility with versions earlier than Version 8:
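As described above, turning the TIMESTAMP switch off avoids the timestamp call overhead on a saturated CPU. From the CLP (an instance or database connection is assumed):

```
db2 update monitor switches using timestamp off
```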
GET RECOMMENDATIONS FOR HEALTH INDICATOR

Command syntax:

GET RECOMMENDATIONS FOR HEALTH INDICATOR health-indicator-name
   [ FOR { DBM
         | TABLESPACE tblspacename ON database-alias
         | CONTAINER containername FOR TABLESPACE tblspacename ON database-alias
         | DATABASE ON database-alias } ]

Command parameters:

HEALTH INDICATOR health-indicator-name
   The name of the health indicator for which you would like to retrieve the recommendations. Health indicator names consist of a two- or three-letter object identifier followed by a name that describes what the indicator measures.

DBM
   Returns recommendations for a database manager health indicator that has entered an alert state.
TABLESPACE
   Returns recommendations for a health indicator that has entered an alert state on the specified table space and database.

CONTAINER
   Returns recommendations for a health indicator that has entered an alert state on the specified container in the specified table space and database.

DATABASE
   Returns recommendations for a health indicator that has entered an alert state on the specified database.
Usage notes:

The GET RECOMMENDATIONS FOR HEALTH INDICATOR command can be used in two different ways:
- Specify only the health indicator to get an informational list of all possible recommendations. If no object is specified, the command will return a full listing of all recommendations that can be used to resolve an alert on the given health indicator.
- Specify an object to resolve a specific alert on that object. If an object (for example, a database or a table space) is specified, the recommendations returned will be specific to an alert on the object identified. In this case, the recommendations will be more specific and will contain more information about resolving the alert. If the health indicator identified is not in an alert state on the specified object, no recommendations will be returned.

Related tasks:
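The two usage styles above correspond to CLP invocations such as the following; the indicator name db.db_heap_util and the SAMPLE database are illustrative:

```
db2 get recommendations for health indicator db.db_heap_util
db2 get recommendations for health indicator db.db_heap_util for database on sample
```

The first form lists all possible recommendations for the indicator; the second returns recommendations specific to an alert on the SAMPLE database.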
GET ROUTINE
Retrieves a routine SQL Archive (SAR) file for a specified SQL routine.

Authorization: dbadm

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established.

Command syntax:
GET ROUTINE INTO file_name FROM [ SPECIFIC ] PROCEDURE routine_name [ HIDE BODY ]
Command parameters:

INTO file_name
   Names the file where the routine SQL archive (SAR) is stored.

FROM
   Indicates the start of the specification of the routine to be retrieved.

SPECIFIC
   The specified routine name is given as a specific name.

PROCEDURE
   The routine is an SQL procedure.

routine_name
   The name of the procedure. If SPECIFIC is specified, then it is the specific name of the procedure. If the name is not qualified with a schema name, the CURRENT SCHEMA is used as the schema name of the routine. The routine_name must identify an existing procedure that is defined as an SQL procedure.

HIDE BODY
   Specifies that the body of the routine must be replaced by an empty body when the routine text is extracted from the catalogs. This does not affect the compiled code; it only affects the text.

Examples:
GET ROUTINE INTO procs/proc1.sar FROM PROCEDURE myappl.proc1;
Usage notes:

If a GET ROUTINE or a PUT ROUTINE operation (or their corresponding procedure) fails to execute successfully, it will always return an error (SQLSTATE 38000), along with diagnostic text providing information about the cause of the failure. For example, if the procedure name provided to GET ROUTINE does not identify an SQL procedure, diagnostic text "-204, 42704" will be returned, where -204 and 42704 are the SQLCODE and SQLSTATE, respectively, that identify the
cause of the problem. The SQLCODE and SQLSTATE in this example indicate that the procedure name provided in the GET ROUTINE command is undefined.

Related reference:
- PUT ROUTINE on page 609
- PUT_ROUTINE_SAR procedure in Administrative SQL Routines and Views
GET SNAPSHOT
Collects status information and formats the output for the user. The information returned represents a snapshot of the database manager operational status at the time the command was issued.

Scope:

In a partitioned database environment, this command can be invoked from any database partition defined in the db2nodes.cfg file. It acts only on that database partition.

Authorization:

One of the following:
- sysadm
- sysctrl
- sysmaint
- sysmon
Required connection:

Instance. If there is no instance attachment, a default instance attachment is created. To obtain a snapshot of a remote instance, it is necessary to first attach to that instance.

Command syntax:
GET SNAPSHOT FOR
   { DATABASE MANAGER | DB MANAGER | DBM
   | ALL [ DCS ] DATABASES
   | ALL [ DCS ] APPLICATIONS
   | ALL BUFFERPOOLS
   | [ DCS ] APPLICATION { APPLID appl-id | AGENTID appl-handle }
   | FCM FOR ALL DBPARTITIONNUMS
   | LOCKS FOR APPLICATION { APPLID appl-id | AGENTID appl-handle }
   | ALL REMOTE_DATABASES
   | ALL REMOTE_APPLICATIONS
   | DYNAMIC SQL ON database-alias [ WRITE TO FILE ]
   | ALL ON database-alias
   | { [ DCS ] DATABASE | [ DCS ] DB | [ DCS ] APPLICATIONS | TABLES
     | TABLESPACES | LOCKS | BUFFERPOOLS | REMOTE_DATABASES
     | REMOTE_APPLICATIONS } ON database-alias }
   [ AT DBPARTITIONNUM db-partition-number | GLOBAL ]
The monitor switches must be turned on in order to collect some statistics.

Command parameters:

DATABASE MANAGER
   Provides statistics for the active database manager instance.

ALL DATABASES
   Provides general statistics for all active databases on the current database partition.

ALL APPLICATIONS
   Provides information about all active applications that are connected to a database on the current database partition.

ALL BUFFERPOOLS
   Provides information about buffer pool activity for all active databases.

APPLICATION APPLID appl-id
   Provides information only about the application whose ID is specified. To get a specific application ID, use the LIST APPLICATIONS command.

APPLICATION AGENTID appl-handle
   Provides information only about the application whose application handle is specified. The application handle is a 32-bit number that uniquely identifies an application that is currently running. Use the LIST APPLICATIONS command to get a specific application handle.
FCM FOR ALL DBPARTITIONNUMS
   Provides Fast Communication Manager (FCM) statistics between the database partition against which the GET SNAPSHOT command was issued and the other database partitions in the partitioned database environment.

LOCKS FOR APPLICATION APPLID appl-id
   Provides information about all locks held by the specified application, identified by application ID.

LOCKS FOR APPLICATION AGENTID appl-handle
   Provides information about all locks held by the specified application, identified by application handle.

ALL REMOTE_DATABASES
   Provides general statistics about all active remote databases on the current database partition.

ALL REMOTE_APPLICATIONS
   Provides information about all active remote applications that are connected to the current database partition.

ALL ON database-alias
   Provides general statistics and information about all applications, tables, table spaces, buffer pools, and locks for a specified database.

DATABASE ON database-alias
   Provides general statistics for a specified database.

APPLICATIONS ON database-alias
   Provides information about all applications connected to a specified database.

TABLES ON database-alias
   Provides information about tables in a specified database. This will include only those tables that have been accessed since the TABLE recording switch was turned on.

TABLESPACES ON database-alias
   Provides information about table spaces for a specified database.

LOCKS ON database-alias
   Provides information about every lock held by each application connected to a specified database.

BUFFERPOOLS ON database-alias
   Provides information about buffer pool activity for the specified database.

REMOTE_DATABASES ON database-alias
   Provides general statistics about all active remote databases for a specified database.

REMOTE_APPLICATIONS ON database-alias
   Provides information about remote applications for a specified database.
DYNAMIC SQL ON database-alias
   Returns a point-in-time picture of the contents of the SQL statement cache for the database.

WRITE TO FILE
   Specifies that snapshot results are to be stored in a file at the server, as well as being passed back to the client. This command is valid only over a
database connection. The snapshot data can then be queried through the table function SYSFUN.SQLCACHE_SNAPSHOT over the same connection on which the call was made.

DCS
   Depending on which clause it is specified with, this keyword requests statistics about:
   - A specific DCS application currently running on the DB2 Connect Gateway
   - All DCS applications
   - All DCS applications currently connected to a specific DCS database
   - A specific DCS database
   - All DCS databases
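The WRITE TO FILE flow described above can be sketched as the following CLP sequence, using the table function named in the description; the SAMPLE database is illustrative:

```
db2 connect to sample
db2 get snapshot for dynamic sql on sample write to file
db2 "SELECT * FROM TABLE(SYSFUN.SQLCACHE_SNAPSHOT()) AS T"
```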
AT DBPARTITIONNUM db-partition-number
   Returns results for the database partition specified.

GLOBAL
   Returns an aggregate result for all database partitions in a partitioned database system.

Examples:

- To request snapshot information about the database manager, issue:
get snapshot for database manager
- To request snapshot information about a specific application with application handle 765 connected to the SAMPLE database, issue:
get snapshot for application agentid 765
- To request dynamic SQL snapshot information about the SAMPLE database, issue:
get snapshot for dynamic sql on sample
Usage notes:
- When write suspend is ON against a database, snapshots cannot be issued against that database until write suspend is turned OFF. When a snapshot is issued against a database for which write suspend was turned on, a diagnostic probe is written to the db2diag.log and that database is skipped.
- To obtain a snapshot from a remote instance (or a different local instance), it is necessary to first attach to that instance. If an alias for a database residing at a different instance is specified, an error message is returned.
- To obtain some statistics, it is necessary that the database system monitor switches are turned on. If the recording switch TIMESTAMP has been set to off, timestamp-related elements will report "Not Collected".
- No data is returned following a request for table information if any of the following is true:
  - The TABLE recording switch is turned off.
  - No tables have been accessed since the switch was turned on.
  - No tables have been accessed since the last RESET MONITOR command was issued.
  However, if a REORG TABLE is being performed or has been performed during this period, some information is returned, although some fields are not displayed.
- To obtain snapshot information from all database partitions (which is different from the aggregate result of all partitions), the snapshot administrative views should be used.

Compatibilities:

For compatibility with versions earlier than Version 8:
- The keyword NODE can be substituted for DBPARTITIONNUM.
- The keyword NODES can be substituted for DBPARTITIONNUMS.

Related concepts:
- Snapshot monitor in System Monitor Guide and Reference

Related tasks:
- Capturing a database snapshot from a client application in System Monitor Guide and Reference
- Capturing database system snapshots using snapshot administrative views and table functions in System Monitor Guide and Reference

Related reference:
- db2GetSnapshot API - Get a snapshot of the database manager operational status in Administrative API Reference
- GET MONITOR SWITCHES on page 478
- LIST APPLICATIONS on page 520
- RESET MONITOR on page 671
- UPDATE MONITOR SWITCHES on page 782
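As a sketch of the administrative-view alternative mentioned in the last usage note, a query like the following returns one row per database partition rather than a single aggregated result. The view and column names here follow the SYSIBMADM snapshot views; the SAMPLE database and the column selection are illustrative, so verify them against Administrative SQL Routines and Views:

```sql
CONNECT TO sample;

-- One row per database partition, one column set per monitor element.
SELECT DB_NAME, DBPARTITIONNUM, TOTAL_CONS
  FROM SYSIBMADM.SNAPDB;
```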
HELP
Permits the user to invoke help from the Information Center. This command is not available on UNIX based systems.

Authorization: None

Required connection: None

Command syntax:
HELP character-string
Command parameters:

HELP character-string
   Any SQL or DB2 command, or any other item listed in the Information Center.

Examples:

Following are examples of the HELP command:
- db2 help
  This command opens the DB2 Information Center, which contains information about DB2 divided into categories, such as tasks, reference, books, and so on. This is equivalent to invoking the db2ic command with no parameters.
- db2 help drop
  This command opens the Web browser and displays information about the SQL DROP statement. This is equivalent to invoking the following command: db2ic -j drop. The db2ic command searches first the SQL Reference and then the Command Reference for a statement or a command called DROP, and then displays the first one found.
- db2 help drop database
  This command initiates a more refined search, and causes information about the DROP DATABASE command to be displayed.

Usage notes:

The Information Center must be installed on the user's system. HTML books in the DB2 library must be located in the \sqllib\doc\html subdirectory. The command line processor will not know if the command succeeds or fails, and cannot report error conditions.

Related tasks:
- Invoking command help from the command line processor on page 326
HISTORY
Displays the history of commands run within a CLP interactive mode session.

Scope:

This command can only be run within CLP interactive mode. Specifically, it cannot be run from the CLP command mode or the CLP batch mode.

Authorization: None

Required connection: None

Command syntax:
{ HISTORY | H } [ REVERSE | R ] [ num ]
Command parameters:

REVERSE
   Displays the command history in reverse order, with the most recently run command listed first. If this parameter is not specified, the commands are listed in chronological order, with the most recently run command listed last.

num
   Displays only the most recent num commands. If this parameter is not specified, a maximum of 20 commands are displayed. However, the number of commands that are displayed is also restricted by the number of commands that are stored in the command history.
Usage notes:
1. The value of the DB2_CLP_HISTSIZE registry variable specifies the maximum number of commands to be stored in the command history. This registry variable can be set to any value between 1 and 500 inclusive. If this registry variable is not set or is set to a value outside the valid range, a maximum of 20 commands is stored in the command history.
2. Since the HISTORY command will always be listed in the command history, the maximum number of commands displayed will always be one greater than the user-specified maximum.
3. The command history is not persistent across CLP interactive mode sessions, which means that the command history is not saved at the end of an interactive mode session.
4. The command histories of multiple concurrently running CLP interactive mode sessions are independent of one another.

Related reference:
- EDIT on page 432
- RUNCMD on page 701
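For example, the history depth can be raised with the registry variable described in the first note before starting a new interactive session; the value 100 here is illustrative:

```
$ db2set DB2_CLP_HISTSIZE=100
$ db2
db2 => history reverse 5
```

The final command lists the five most recently run commands, newest first.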
IMPORT
Inserts data from an external file with a supported file format into a table, hierarchy, view, or nickname. LOAD is a faster alternative, but the load utility does not support loading data at the hierarchy level.

Authorization:
- IMPORT using the INSERT option requires one of the following:
  - sysadm
  - dbadm
  - CONTROL privilege on each participating table, view, or nickname
  - INSERT and SELECT privilege on each participating table or view
- IMPORT to an existing table using the INSERT_UPDATE option requires one of the following:
  - sysadm
  - dbadm
  - CONTROL privilege on each participating table, view, or nickname
  - INSERT, SELECT, UPDATE and DELETE privilege on each participating table or view
- IMPORT to an existing table using the REPLACE or REPLACE_CREATE option requires one of the following:
  - sysadm
  - dbadm
  - CONTROL privilege on the table or view
  - INSERT, SELECT, and DELETE privilege on the table or view
- IMPORT to a new table using the CREATE or REPLACE_CREATE option requires one of the following:
  - sysadm
  - dbadm
  - CREATETAB authority on the database and USE privilege on the table space, as well as one of:
    - IMPLICIT_SCHEMA authority on the database, if the implicit or explicit schema name of the table does not exist
    - CREATEIN privilege on the schema, if the schema name of the table refers to an existing schema
- IMPORT to a hierarchy that does not exist using the CREATE or the REPLACE_CREATE option requires one of the following:
  - sysadm
  - dbadm
  - CREATETAB authority on the database and USE privilege on the table space, and one of:
    - IMPLICIT_SCHEMA authority on the database, if the schema name of the table does not exist
    - CREATEIN privilege on the schema, if the schema of the table exists
    - CONTROL privilege on every sub-table in the hierarchy, if the REPLACE_CREATE option on the entire hierarchy is used
- IMPORT to an existing hierarchy using the REPLACE option requires one of the following:
  - sysadm
  - dbadm
  - CONTROL privilege on every sub-table in the hierarchy
- To import data into a table that has protected columns, the session authorization ID must have LBAC credentials that allow write access to all protected columns in the table. Otherwise the import fails and an error (SQLSTATE 42512) is returned.
- To import data into a table that has protected rows, the session authorization ID must hold LBAC credentials that meet these criteria:
  - It is part of the security policy protecting the table
  - It was granted to the session authorization ID for write access
  The label on the row to insert, the user's LBAC credentials, the security policy definition, and the LBAC rules determine the label on the row.
- If the REPLACE or REPLACE_CREATE option is specified, the session authorization ID must have the authority to drop the table.

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established. Utility access to Linux, UNIX, or Windows database servers from Linux, UNIX, or Windows clients must be a direct connection through the engine and not through a DB2 Connect gateway or loop back environment.

Command syntax:
IMPORT FROM filename OF filetype
  [LOBS FROM lob-path [ {,lob-path} ... ]]
  [XML FROM xml-path [ {,xml-path} ... ]]
  [MODIFIED BY filetype-mod ...]
  [METHOD { L ( col-start col-end [ {,col-start col-end} ... ] )
              [NULL INDICATORS ( null-indicator-list )]
          | N ( column-name [ {,column-name} ... ] )
          | P ( column-position [ {,column-position} ... ] ) }]
  [XMLPARSE { STRIP | PRESERVE } WHITESPACE]
  [XMLVALIDATE USING { XDS [DEFAULT schema-sqlid]
                           [IGNORE ( schema-sqlid [ {,schema-sqlid} ... ] )]
                           [MAP ( ( schema-sqlid , schema-sqlid ) [ ... ] )]
                     | SCHEMA schema-sqlid
                     | SCHEMALOCATION HINTS }]
  [ALLOW NO ACCESS | ALLOW WRITE ACCESS]
  [COMMITCOUNT { n | AUTOMATIC }]
  [{ RESTARTCOUNT | SKIPCOUNT } n]
  [ROWCOUNT n]
  [WARNINGCOUNT n]
  [NOTIMEOUT]
  [MESSAGES message-file]
  { { INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE }
      INTO { table-name [( insert-column [ {,insert-column} ... ] )]
           | hierarchy-description }
  | CREATE
      INTO { table-name [( insert-column [ {,insert-column} ... ] )]
           | hierarchy-description [AS ROOT TABLE | UNDER sub-table-name] }
      [tblspace-specs] }

hierarchy description:
  { ALL TABLES | sub-table-list } IN HIERARCHY { STARTING sub-table-name | traversal-order-list }

sub-table-list:
  ( sub-table-name [( insert-column [ {,insert-column} ... ] )] [ , ... ] )

traversal-order-list:
  ( sub-table-name [ {,sub-table-name} ... ] )

tblspace-specs:
  [IN tablespace-name [INDEX IN tablespace-name] [LONG IN tablespace-name]]
Command parameters:

ALL TABLES
  An implicit keyword for hierarchy only. When importing a hierarchy, the default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
  Runs import in the offline mode. An exclusive (X) lock on the target table is acquired before any rows are inserted. This prevents concurrent applications from accessing table data. This is the default import behavior.
ALLOW WRITE ACCESS
  Runs import in the online mode. An intent exclusive (IX) lock on the target table is acquired when the first row is inserted. This allows concurrent readers and writers to access table data. Online mode is not compatible with the REPLACE, CREATE, or REPLACE_CREATE import options. Online mode is not supported in conjunction with buffered inserts. The import operation will periodically commit inserted data to prevent lock escalation to a table lock and to avoid running out of active log space. These commits will be performed even if the COMMITCOUNT option was not used. During each commit, import will lose its IX table lock, and will attempt to reacquire it after the commit. This parameter is required when you import to a nickname, and COMMITCOUNT must be specified with a valid number (AUTOMATIC is not considered a valid option).
AS ROOT TABLE
  Creates one or more sub-tables as a stand-alone table hierarchy.
COMMITCOUNT n | AUTOMATIC
  When a number n is specified, import performs a COMMIT after every n records are imported. When compound inserts are used, a user-specified commit frequency of n is rounded up to the first integer multiple of the compound count value. When AUTOMATIC is specified, import internally determines when a commit needs to be performed. The utility will commit for either one of two reasons:
  v to avoid running out of active log space
  v to avoid lock escalation from row level to table level
  If the ALLOW WRITE ACCESS option is specified, and the COMMITCOUNT option is not specified, the import utility will perform commits as if COMMITCOUNT AUTOMATIC had been specified.
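As a sketch of how ALLOW WRITE ACCESS and COMMITCOUNT combine, the following command runs an online import that commits every 1000 records; the file and table names are hypothetical:

```shell
# Online import: concurrent readers and writers keep access to the
# SALES table while the utility commits every 1000 inserted records.
# "sales.del" and "sales" are hypothetical names.
db2 import from sales.del of del \
    allow write access commitcount 1000 \
    insert into sales
```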
  If the IMPORT command encounters an SQL0964C (Transaction Log Full) condition while inserting or updating a record, and COMMITCOUNT n is specified, IMPORT will attempt to resolve the issue by performing an unconditional commit and then reattempting the insert or update. If this does not resolve the log full condition (which would be the case when the log full is attributed to other activity on the database), the IMPORT command will fail as expected; however, the number of rows committed may not be a multiple of the COMMITCOUNT n value. The RESTARTCOUNT or SKIPCOUNT option can be used to avoid processing those rows already committed.
CREATE
  Creates the table definition and row contents in the code page of the database. If the data was exported from a DB2 table, sub-table, or hierarchy, indexes are created. If this option operates on a hierarchy, and data was exported from DB2, a type hierarchy will also be created. This option can only be used with IXF files. This parameter is not valid when you import to a nickname.
  Note: If the data was exported from an MVS host database, and it contains LONGVAR fields whose lengths, calculated on the page size, are less
than 254, CREATE might fail because the rows are too long. See Using import to recreate an exported table for a list of restrictions. In this case, the table should be created manually, and IMPORT with INSERT should be invoked, or, alternatively, the LOAD command should be used.
DEFAULT schema-sqlid
  This option can only be used when the USING XDS parameter is specified. The schema specified through the DEFAULT clause identifies a schema to use for validation when the XML Data Specifier (XDS) of an imported XML document does not contain an SCH attribute identifying an XML Schema.
  The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications will be ignored.
FROM filename
  Specifies the file that contains the data to be imported. If the path is omitted, the current working directory is used.
HIERARCHY
  Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
  This option can only be used when the USING XDS parameter is specified. The IGNORE clause specifies a list of one or more schemas to ignore if they are identified by an SCH attribute. If an SCH attribute exists in the XML Data Specifier for an imported XML document, and the schema identified by the SCH attribute is included in the list of schemas to ignore, then no schema validation will occur for the imported XML document.
  If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause. The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause will not be subsequently ignored if specified by the IGNORE clause.
IN tablespace-name
  Identifies the table space in which the table will be created. The table space must exist, and must be a REGULAR table space. If no other table space is specified, all table parts are stored in this table space.
  If this clause is not specified, the table is created in a table space created by the authorization ID. If none is found, the table is placed into the default table space USERSPACE1. If USERSPACE1 has been dropped, table creation fails.
INDEX IN tablespace-name
  Identifies the table space in which any indexes on the table will be created. This option is allowed only when the primary table space specified in the IN clause is a DMS table space. The specified table space must exist, and must be a REGULAR or LARGE DMS table space.
  Note: Specifying which table space will contain an index can only be done when the table is created.
insert-column
  Specifies the name of a column in the table or the view into which data is to be inserted.
INSERT
  Adds the imported data to the table without changing the existing table data.
INSERT_UPDATE
  Adds rows of imported data to the target table, or updates existing rows (of the target table) with matching primary keys.
INTO table-name
  Specifies the database table into which the data is to be imported. This table cannot be a system table, a declared temporary table or a summary table. One can use an alias for INSERT, INSERT_UPDATE, or REPLACE, except in the case of a down-level server, when the fully qualified or the unqualified table name should be used. A qualified table name is in the form: schema.tablename. The schema is the user name under which the table was created.
LOBS FROM lob-path
  Specifies one or more paths that store LOB files. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the LOB column. The maximum number of paths that can be specified is 999. This will implicitly activate the LOBSINFILE behavior. This parameter is not valid when you import to a nickname.
LONG IN tablespace-name
  Identifies the table space in which the values of any long columns (LONG VARCHAR, LONG VARGRAPHIC, LOB data types, or distinct types with any of these as source types) will be stored. This option is allowed only if the primary table space specified in the IN clause is a DMS table space. The table space must exist, and must be a LARGE DMS table space.
MAP schema-sqlid
  This option can only be used when the USING XDS parameter is specified. Use the MAP clause to specify alternate schemas to use in place of those specified by the SCH attribute of an XML Data Specifier (XDS) for each imported XML document. The MAP clause specifies a list of one or more schema pairs, where each pair represents a mapping of one schema to another. The first schema in the pair represents a schema that is referred to by an SCH attribute in an XDS.
  The second schema in the pair represents the schema that should be used to perform schema validation.
  If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause. Once a schema pair mapping is applied, the result is final. The mapping operation is non-transitive, and therefore the schema chosen will not be subsequently applied to another schema pair mapping. A schema cannot be mapped more than once, meaning that it cannot appear on the left side of more than one pair.
MESSAGES message-file
  Specifies the destination for warning and error messages that occur during an import operation. If the file already exists, the import utility appends the information. If the complete path to the file is not specified, the utility uses the current directory and the default drive as the destination. If message-file is omitted, the messages are written to standard output.
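For example, diagnostics can be captured in a file instead of standard output; the file and table names here are hypothetical:

```shell
# Append warnings and errors to msgs.txt rather than printing them
# to standard output. "staff.ixf" and "staff" are hypothetical names.
db2 import from staff.ixf of ixf messages msgs.txt insert into staff
```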
METHOD
  L  Specifies the start and end column numbers from which to import data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1.
     Note: This method can only be used with ASC files, and is the only valid option for that file type.
  N  Specifies the names of the columns to be imported.
     Note: This method can only be used with IXF files.
  P  Specifies the field numbers of the input data fields to be imported.
     Note: This method can only be used with IXF or DEL files, and is the only valid option for the DEL file type.
MODIFIED BY filetype-mod
  Specifies file type modifier options. See File type modifiers for the import utility.
NOTIMEOUT
  Specifies that the import utility will not time out while waiting for locks. This option supersedes the locktimeout database configuration parameter. Other applications are not affected.
NULL INDICATORS null-indicator-list
  This option can only be used when the METHOD L parameter is specified; that is, the input file is an ASC file. The null indicator list is a comma-separated list of positive integers specifying the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. There must be one entry in the null indicator list for each data field defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data.
  A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL, and that column data specified by the METHOD L option will be imported. The NULL indicator character can be changed using the MODIFIED BY option, with the nullindchar file type modifier.
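A sketch of METHOD L combined with NULL INDICATORS on a fixed-format ASC file; all file, table, and offset values are hypothetical:

```shell
# Import two fixed-position columns from a non-delimited ASCII file:
# bytes 1-20 feed the first column, bytes 21-28 the second.
# Null indicator fields sit at byte offsets 30 and 31; a 'Y' there
# marks the corresponding column NULL. All names are hypothetical.
db2 import from emp.asc of asc \
    method L (1 20, 21 28) null indicators (30, 31) \
    insert into employee
```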
OF filetype
  Specifies the format of the data in the input file:
  v ASC (non-delimited ASCII format)
  v DEL (delimited ASCII format), which is used by a variety of database manager and file manager programs
  v WSF (work sheet format), which is used by programs such as:
    - Lotus 1-2-3
    - Lotus Symphony
  v IXF (integrated exchange format, PC version), which means it was exported from the same or another DB2 table. An IXF file also contains the table definition and definitions of any existing indexes, except when columns are specified in the SELECT statement.
  The WSF file type is not supported when you import to a nickname.
REPLACE
  Deletes all existing data from the table by truncating the data object, and inserts the imported data. The table definition and the index definitions are not changed. This option can only be used if the table exists. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced. This parameter is not valid when you import to a nickname.
  This option does not honor the CREATE TABLE statement's NOT LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement's ACTIVE NOT LOGGED INITIALLY clause. If an import with the REPLACE option is performed within the same transaction as a CREATE TABLE or ALTER TABLE statement where the NLI clause is invoked, the import will not honor the NLI clause. All inserts will be logged.
  Workaround 1: Delete the contents of the table using the DELETE statement, then invoke the import with the INSERT statement.
  Workaround 2: Drop the table and recreate it, then invoke the import with the INSERT statement.
  This limitation applies to DB2 UDB Version 7 and DB2 UDB Version 8.
REPLACE_CREATE
  If the table exists, deletes all existing data from the table by truncating the data object, and inserts the imported data without changing the table definition or the index definitions. If the table does not exist, creates the table and index definitions, as well as the row contents, in the code page of the database. See Using import to recreate an exported table for a list of restrictions.
  This option can only be used with IXF files. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced. This parameter is not valid when you import to a nickname.
RESTARTCOUNT n
  Specifies that an import operation is to be started at record n + 1. The first n records are skipped. This option is functionally equivalent to SKIPCOUNT. RESTARTCOUNT and SKIPCOUNT are mutually exclusive.
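As a sketch of resuming a failed import with RESTARTCOUNT, suppose the message file from a previous run showed 10000 records committed; the file and table names are hypothetical:

```shell
# Skip the 10000 records already committed by the failed run and
# continue with record 10001. "big.del" and "orders" are hypothetical.
db2 import from big.del of del commitcount 500 restartcount 10000 \
    insert into orders
```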
ROWCOUNT n
  Specifies the number n of physical records in the file to be imported (inserted or updated). Allows a user to import only n rows from a file, starting from the record determined by the SKIPCOUNT or RESTARTCOUNT options. If the SKIPCOUNT or RESTARTCOUNT options are not specified, the first n rows are imported. If SKIPCOUNT m or RESTARTCOUNT m is specified, rows m+1 to m+n are imported. When compound inserts are used, a user-specified rowcount n is rounded up to the first integer multiple of the compound count value.
SKIPCOUNT n
  Specifies that an import operation is to be started at record n + 1. The first
n records are skipped. This option is functionally equivalent to RESTARTCOUNT. SKIPCOUNT and RESTARTCOUNT are mutually exclusive.
STARTING sub-table-name
  A keyword for hierarchy only, requesting the default order, starting from sub-table-name. For PC/IXF files, the default order is the order stored in the input file. The default order is the only valid order for the PC/IXF file format.
sub-table-list
  For typed tables with the INSERT or the INSERT_UPDATE option, a list of sub-table names is used to indicate the sub-tables into which data is to be imported.
traversal-order-list
  For typed tables with the INSERT, INSERT_UPDATE, or the REPLACE option, a list of sub-table names is used to indicate the traversal order of the importing sub-tables in the hierarchy.
UNDER sub-table-name
  Specifies a parent table for creating one or more sub-tables.
WARNINGCOUNT n
  Stops the import operation after n warnings. Set this parameter if no warnings are expected, but verification that the correct file and table are being used is desired. If the import file or the target table is specified incorrectly, the import utility will generate a warning for each row that it attempts to import, which will cause the import to fail. If n is zero, or this option is not specified, the import operation will continue regardless of the number of warnings issued.
XML FROM xml-path
  Specifies one or more paths that contain the XML files.
XMLPARSE
  Specifies how XML documents are parsed. If this option is not specified, the parsing behavior for XML documents will be determined by the value of the CURRENT XMLPARSE OPTION special register.
  STRIP WHITESPACE
    Specifies to remove whitespace when the XML document is parsed.
  PRESERVE WHITESPACE
    Specifies not to remove whitespace when the XML document is parsed.
XMLVALIDATE
  Specifies that XML documents are validated against a schema, when applicable.
  USING XDS
    XML documents are validated against the XML schema identified by the XML Data Specifier (XDS) in the main data file. By default, if the XMLVALIDATE option is invoked with the USING XDS clause, the schema used to perform validation will be determined by the SCH attribute of the XDS. If an SCH attribute is not present in the XDS, no schema validation will occur unless a default schema is specified by the DEFAULT clause.
    The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses
    apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it will not be ignored if also specified by the IGNORE clause. Similarly, if a schema is selected because it is specified as the first part of a pair in the MAP clause, it will not be re-mapped if also specified in the second part of another MAP clause pair.
  USING SCHEMA schema-sqlid
    XML documents are validated against the XML schema with the specified SQL identifier. In this case, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
  USING SCHEMALOCATION HINTS
    XML documents are validated against the schemas identified by XML schema location hints in the source XML documents. If a schemaLocation attribute is not found in the XML document, no validation will occur. When the USING SCHEMALOCATION HINTS clause is specified, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
  See examples of the XMLVALIDATE option below.

Examples:

Example 1

The following example shows how to import information from myfile.ixf to the STAFF table:
db2 import from myfile.ixf of ixf messages msg.txt insert into staff

SQL3150N  The H record in the PC/IXF file has product "DB2    01.00", date
"19970220", and time "140848".

SQL3153N  The T record in the PC/IXF file has name "myfile", qualifier " ",
and source " ".

SQL3109N  The utility is beginning to load data from file "myfile".

SQL3110N  The utility has completed processing. "58" rows were read from the
input file.

SQL3221W  ...Begin COMMIT WORK. Input Record Count = "58".

SQL3222W  ...COMMIT of any database changes was successful.

SQL3149N  "58" rows were processed from the input file. "58" rows were
successfully inserted into the table. "0" rows were rejected.
Example 2 (Importing into a Table with an Identity Column)

TABLE1 has 4 columns:
v C1 VARCHAR(30)
v C2 INT GENERATED BY DEFAULT AS IDENTITY
v C3 DECIMAL(7,2)
v C4 CHAR(1)
TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column.
Data records in DATAFILE1 (DEL format):
"Liszt" "Hummel",,187.43, H "Grieg",100, 66.34, G "Satie",101, 818.23, I
The following command generates identity values for rows 1 and 2, since no identity values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however, are assigned the user-supplied identity values of 100 and 101, respectively.
db2 import from datafile1.del of del replace into table1
To import DATAFILE1 into TABLE1 so that identity values are generated for all rows, issue one of the following commands:
db2 import from datafile1.del of del method P(1, 3, 4)
    replace into table1 (c1, c3, c4)
db2 import from datafile1.del of del modified by identityignore
    replace into table1
To import DATAFILE2 into TABLE1 so that identity values are generated for each row, issue one of the following commands:
db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
db2 import from datafile2.del of del modified by identitymissing
    replace into table1
If DATAFILE1 is imported into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 will be inserted, but rows 3 and 4 will be rejected, because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS.

Examples of using the XMLVALIDATE clause:

Example 1 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS IGNORE (S1.SCHEMA_A) MAP ((S1.SCHEMA_A, S2.SCHEMA_B))
The import would fail due to invalid syntax, since the IGNORE of S1.SCHEMA_A would conflict with the MAP of S1.SCHEMA_A to S2.SCHEMA_B.

Example 2 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS DEFAULT S8.SCHEMA_H IGNORE (S9.SCHEMA_I, S10.SCHEMA_J) MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S3.SCHEMA_C, S5.SCHEMA_E), (S6.SCHEMA_F, S3.SCHEMA_C), (S4.SCHEMA_D, S7.SCHEMA_G))
For an XML column that contains the following XDS:

<XDS FIL=xmlfile.001.xml />
The XML schema with SQL identifier S8.SCHEMA_H is used to validate the document in file xmlfile.001.xml, since S8.SCHEMA_H was specified as the default schema to use. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.002.xml OFF=10 LEN=500 SCH=S10.SCHEMA_J />
No schema validation occurs for the document in file xmlfile.002.xml, since although the XDS specifies S10.SCHEMA_J as the schema to use, that schema is part of the IGNORE clause. The document contents can be found at byte offset 10 in the file (meaning the 11th byte), and is 500 bytes long. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.003.xml SCH=S6.SCHEMA_F />
The XML schema with SQL identifier S3.SCHEMA_C is used to validate the document in file xmlfile.003.xml. This is because the MAP clause specifies that schema S6.SCHEMA_F should be mapped to schema S3.SCHEMA_C. Note that further mapping does not take place, therefore the mapping of schema S3.SCHEMA_C to schema S5.SCHEMA_E does not apply in this case. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.004.xml SCH=S11.SCHEMA_K />
The XML schema with SQL identifier S11.SCHEMA_K is used to validate the document in file xmlfile.004.xml. Note that none of the DEFAULT, IGNORE, or MAP specifications apply in this case.

Example 3 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS DEFAULT S1.SCHEMA_A IGNORE (S1.SCHEMA_A)
The XML schema with SQL identifier S1.SCHEMA_A is used to validate the document in file xmlfile.001.xml, since S1.SCHEMA_A was specified as the default schema to use. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.002.xml SCH=S1.SCHEMA_A />
No schema validation occurs for the document in file xmlfile.002.xml, since although the XDS specifies S1.SCHEMA_A as the schema to use, that schema is part of the IGNORE clause.

Example 4 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS DEFAULT S1.SCHEMA_A MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S2.SCHEMA_B, S1.SCHEMA_A))
The XML schema with SQL identifier S1.SCHEMA_A is used to validate the document in file xmlfile.001.xml, since S1.SCHEMA_A was specified as the default schema to use. Note that since the DEFAULT clause was applied, the MAP clause is not subsequently applied. Therefore the mapping of schema S1.SCHEMA_A to schema S2.SCHEMA_B does not apply in this case. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.002.xml SCH=S1.SCHEMA_A />
The XML schema with SQL identifier S2.SCHEMA_B is used to validate the document in file xmlfile.002.xml. This is because the MAP clause specifies that schema S1.SCHEMA_A should be mapped to schema S2.SCHEMA_B. Note that further mapping does not take place, therefore the mapping of schema S2.SCHEMA_B to schema S1.SCHEMA_A does not apply in this case. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.003.xml SCH=S2.SCHEMA_B />
The XML schema with SQL identifier S1.SCHEMA_A is used to validate the document in file xmlfile.003.xml. This is because the MAP clause specifies that schema S2.SCHEMA_B should be mapped to schema S1.SCHEMA_A. Note that further mapping does not take place, therefore the mapping of schema S1.SCHEMA_A to schema S2.SCHEMA_B does not apply in this case.

Example 5 (XMLVALIDATE USING SCHEMA)

For the following XMLVALIDATE clause:
XMLVALIDATE USING SCHEMA S2.SCHEMA_B
For an XML column that contains the following XDS:

<XDS FIL=xmlfile.001.xml />

The document in file xmlfile.001.xml is validated using the XML schema with SQL identifier S2.SCHEMA_B. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.002.xml SCH=S1.SCHEMA_A />
The document in file xmlfile.002.xml is validated using the XML schema with SQL identifier S2.SCHEMA_B. Note that the SCH attribute is ignored, since validation is being performed using a schema specified by the USING SCHEMA clause.

Example 6 (XMLVALIDATE USING SCHEMALOCATION HINTS)

For an XML column that contains the following XDS:
<XDS FIL=xmlfile.001.xml />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present. For an XML column that contains the following XDS:
<XDS FIL=xmlfile.002.xml SCH=S1.SCHEMA_A />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present. Note that the SCH attribute is ignored, since validation is being performed using SCHEMALOCATION HINTS.

Usage notes:

Be sure to complete all table operations and release all locks before starting an import operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.

The import utility adds rows to the target table using the SQL INSERT statement. The utility issues one INSERT statement for each row of data in the input file. If an INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential for database damage, an error message is written to the message file, and processing halts.

The utility performs an automatic COMMIT after the old rows are deleted during a REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the application interrupts the database manager after the table object is truncated, all of the old data is lost. Ensure that the old data is no longer needed before using these options.

If the log becomes full during a CREATE, REPLACE, or REPLACE_CREATE operation, the utility performs an automatic COMMIT on inserted records. If the system fails, or the application interrupts the database manager after an automatic COMMIT, a table with partial data remains in the database. Use the REPLACE or the REPLACE_CREATE option to rerun the whole import operation, or use INSERT with the RESTARTCOUNT parameter set to the number of rows successfully imported.

By default, automatic COMMITs are not performed for the INSERT or the INSERT_UPDATE option. They are, however, performed if the COMMITCOUNT parameter is not zero.
If automatic COMMITs are not performed, a full log results in a ROLLBACK.

Offline import does not perform automatic COMMITs if any of the following conditions is true:
v the target is a view, not a table
v compound inserts are used
v buffered inserts are used
By default, online import performs automatic COMMITs to free both the active log space and the lock list. Automatic COMMITs are not performed only if a COMMITCOUNT value of zero is specified.

Whenever the import utility performs a COMMIT, two messages are written to the message file: one indicates the number of records to be committed, and the other is written after a successful COMMIT. When restarting the import operation after a failure, specify the number of records to skip, as determined from the last successful COMMIT.

The import utility accepts input data with minor incompatibility problems (for example, character data can be imported using padding or truncation, and numeric data can be imported with a different numeric data type), but data with major incompatibility problems is not accepted.

One cannot REPLACE or REPLACE_CREATE an object table if it has any dependents other than itself, or an object view if its base table has any dependents (including itself). To replace such a table or a view, do the following:
1. Drop all foreign keys in which the table is a parent.
2. Run the import utility.
3. Alter the table to recreate the foreign keys.
If an error occurs while recreating the foreign keys, modify the data to maintain referential integrity.

Referential constraints and foreign key definitions are not preserved when creating tables from PC/IXF files. (Primary key definitions are preserved if the data was previously exported using SELECT *.)

Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and potential growth in the size of the database. If an import operation is run against a remote database, and the output message file is very long (more than 60 KB), the message file returned to the user on the client might be missing messages from the middle of the import operation. The first 30 KB of message information and the last 30 KB of message information are always retained.
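The three-step procedure for replacing a parent object table described above can be sketched as follows; the table, column, and constraint names are hypothetical:

```shell
# 1. Drop the foreign key in which the table being replaced is a parent.
db2 alter table child drop constraint fk_child_parent
# 2. Run the import utility against the parent table.
db2 import from parent.ixf of ixf replace into parent
# 3. Alter the child table to recreate the foreign key.
db2 alter table child add constraint fk_child_parent \
    foreign key (parent_id) references parent
```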
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on a hard drive rather than on diskettes.

The database table or hierarchy must exist before data in the ASC, DEL, or WSF file formats can be imported; however, if the table does not already exist, IMPORT CREATE or IMPORT REPLACE_CREATE creates the table when it imports data from a PC/IXF file. For typed tables, IMPORT CREATE can create the type hierarchy and the table hierarchy as well.

PC/IXF import should be used to move data (including hierarchical data) between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program, fields containing the row separators will shrink or expand. The file copying step is not necessary if the source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client application performing the import. PC/IXF files, which allow for different code pages, are recommended when importing data in different code pages. If the PC/IXF file and the import utility are in the same code page, processing occurs as for a regular application. If the two differ, and the FORCEIN option is specified, the import utility assumes that data in the PC/IXF file has the same code page as the application performing the import. This occurs even if there is a conversion table for the two code pages. If the two differ, the FORCEIN option is not specified, and there is a conversion table, all data in the PC/IXF file will be converted from the file code page to the application code page. If the two differ, the FORCEIN option is not specified, and there is no conversion table, the import operation will fail. This applies only to PC/IXF files on DB2 clients on the AIX operating system.

For table objects on an 8 KB page that are close to the limit of 1012 columns, import of PC/IXF data files might cause DB2 to return an error, because the maximum size of an SQL statement was exceeded. This situation can occur only if the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import of DEL or ASC files. If PC/IXF files are being used to create a new table, an alternative is to use db2look to dump the DDL statement that created the table, and then to issue that statement through the CLP.

DB2 Connect can be used to import data to DRDA servers such as DB2 for OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import (INSERT option) is supported. The RESTARTCOUNT parameter, but not the COMMITCOUNT parameter, is also supported.

When using the CREATE option with typed tables, create every sub-table defined in the PC/IXF file; sub-table definitions cannot be altered.
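The db2look alternative mentioned above might look like this; the database, table, and file names are hypothetical:

```shell
# Extract the DDL for table MYTAB from database MYDB into a script,
# create the table through the CLP, then import with INSERT instead
# of CREATE. All names are hypothetical.
db2look -d mydb -t mytab -e -o mytab.ddl
db2 -tvf mytab.ddl
db2 import from mytab.ixf of ixf insert into mytab
```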
When using options other than CREATE with typed tables, the traversal order list enables you to specify the traverse order; therefore, the traversal order list must match the one used during the export operation. For the PC/IXF file format, you need only specify the target sub-table name and use the traverse order stored in the file.

The import utility can be used to recover a table previously exported to a PC/IXF file. The table returns to the state it was in when exported.

Data cannot be imported to a system table, a declared temporary table, or a summary table. Views cannot be created through the import utility.

On the Windows operating system:
v Importing logically split PC/IXF files is not supported.
v Importing bad format PC/IXF or WSF files is not supported.

Security labels in their internal format might contain newline characters. If you import the file using the DEL file format, those newline characters can be mistaken for delimiters. If you have this problem, use the older default priority for delimiters by specifying the delprioritychar file type modifier in the IMPORT command.

Federated considerations: When using the IMPORT command and the INSERT, UPDATE, or INSERT_UPDATE command parameters, you must ensure that you have
Chapter 3. CLP Commands
CONTROL privilege on the participating nickname. You must ensure that the nickname you wish to use when doing an import operation already exists. There are also several restrictions you should be aware of, as shown in the IMPORT command parameters section.

Related concepts:
v Import Overview in Data Movement Utilities Guide and Reference
v Privileges, authorities, and authorization required to use import in Data Movement Utilities Guide and Reference

Related tasks:
v Importing data in Data Movement Utilities Guide and Reference

Related reference:
v XMLPARSE scalar function in SQL Reference, Volume 1
v ADMIN_CMD procedure - Run administrative commands in Administrative SQL Routines and Views
v db2look - DB2 statistics and DDL extraction tool on page 146
v IMPORT command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
v Import sessions - CLP examples in Data Movement Utilities Guide and Reference
v LOB and XML file behavior with regard to import and export in Data Movement Utilities Guide and Reference
INITIALIZE TAPE
Initializes tapes for backup and restore operations to streaming tape devices. This command is only supported on Windows operating systems. Authorization: One of the following: v sysadm v sysctrl v sysmaint Required connection: None. Command syntax:
INITIALIZE TAPE [ON device] [USING blksize]
Command parameters: ON device Specifies a valid tape device name. The default value is \\.\TAPE0. USING blksize Specifies the block size for the device, in bytes. The device is initialized to use the block size specified, if the value is within the supported range of block sizes for the device. The buffer size specified for the BACKUP DATABASE command and for RESTORE DATABASE must be divisible by the block size specified here. If a value for this parameter is not specified, the device is initialized to use its default block size. If a value of zero is specified, the device is initialized to use a variable length block size; if the device does not support variable length block mode, an error is returned. When backing up to tape, use of a variable block size is currently not supported. If you must use this option, ensure that you have well tested procedures in place that enable you to recover successfully, using backup images that were created with a variable block size. When using a variable block size, you must specify a backup buffer size that is less than or equal to the maximum limit for the tape devices that you are using. For optimal performance, the buffer size must be equal to the maximum block size limit of the device being used. Related reference: v BACKUP DATABASE on page 349 v RESTORE DATABASE on page 675 v REWIND TAPE on page 690 v SET TAPE POSITION on page 723 v INITIALIZE TAPE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
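The divisibility constraint between the INITIALIZE TAPE block size and the BACKUP/RESTORE buffer size can be checked mechanically. A hypothetical Python helper (not part of DB2) sketching the documented rules:

```python
def check_tape_settings(buffer_size, block_size):
    """Validate a backup buffer size against a tape block size, per the rules above."""
    if block_size == 0:
        # Zero requests variable-length block mode; the device must support it,
        # and backing up to tape with a variable block size is not supported.
        return "variable-block-mode"
    if buffer_size % block_size != 0:
        # BACKUP/RESTORE buffer sizes must be divisible by the block size.
        return "error: buffer size not divisible by block size"
    return "ok"

print(check_tape_settings(4096, 1024))  # ok
```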
INSPECT
Inspects the database for architectural integrity, checking the pages of the database for page consistency. The inspection checks that the structures of table objects and table spaces are valid.

Scope: In a single partition database environment, the scope is that single partition only. In a partitioned database system, it is the collection of all logical partitions defined in db2nodes.cfg. For partitioned tables, the CHECK DATABASE and CHECK TABLESPACE options include individual data partitions and non-partitioned indexes. The CHECK TABLE option is also available for a partitioned table; however, it will check all data partitions and indexes in a table, rather than checking a single data partition or index.

Authorization: For INSPECT CHECK, one of the following:
v sysadm
v dbadm
v sysctrl
v sysmaint
v CONTROL privilege if single table.

Required Connection: Database

Command Syntax:
INSPECT { <Check Clause> | <Row Compression Estimate Clause> }
        [FOR ERROR STATE ALL]
        [LIMIT ERROR TO {DEFAULT | n | ALL}]
        [<Level Clause>]
        RESULTS [KEEP] file-name
        [<On Database Partition Clause>]

Check Clause:

CHECK { DATABASE [BEGIN TBSPACEID n [OBJECTID n]]
      | TABLESPACE {NAME tablespace-name | TBSPACEID n} [BEGIN OBJECTID n]
      | TABLE {NAME table-name [SCHEMA schema-name] | TBSPACEID n OBJECTID n} }
Row Compression Estimate Clause:
ROWCOMPESTIMATE TABLE {NAME table-name [SCHEMA schema-name] | TBSPACEID n OBJECTID n}
Level Clause:
[EXTENTMAP {NORMAL | NONE | LOW}]
[DATA {NORMAL | NONE | LOW}]
[BLOCKMAP {NORMAL | NONE | LOW}]
[INDEX {NORMAL | NONE | LOW}]
[LONG {NORMAL | NONE | LOW}]
[LOB {NORMAL | NONE | LOW}]
[XML {NORMAL | NONE | LOW}]

On Database Partition Clause:

ON {DBPARTITIONNUM | DBPARTITIONNUMS} (db-partition-number1 [TO db-partition-number2], ...)
ALL DBPARTITIONNUMS [EXCEPT (db-partition-number1 [TO db-partition-number2], ...)]
Command Parameters: CHECK Specifies check processing. DATABASE Specifies whole database. BEGIN TBSPACEID n Specifies processing to begin from table space with given table space ID number. BEGIN TBSPACEID n OBJECTID n Specifies processing to begin from table with given table space ID number and object ID number. TABLESPACE
NAME tablespace-name Specifies single table space with given table space name.
TBSPACEID n Specifies single table space with given table space ID number.
BEGIN OBJECTID n Specifies processing to begin from table with given object ID number.
TABLE
NAME table-name Specifies table with given table name.
SCHEMA schema-name Specifies schema name for specified table name for single table operation.
TBSPACEID n OBJECTID n Specifies table with given table space ID number and object ID number.
ROWCOMPESTIMATE Estimates the effectiveness of row compression for a table. You can also specify which database partition(s) this operation is to be done on. This tool is capable of taking a sample of the table data and building a dictionary from it. This dictionary can then be used to test compression against the records contained in the sample. From this test compression, data is gathered from which the following estimates are made:
v Percentage of bytes saved from compression
v Percentage of pages saved from compression
v Percentage of rows ineligible for compression due to small data size
v Compression dictionary size
v Expansion dictionary size
INSPECT will insert the dictionary built for gathering these compression estimates if the COMPRESS YES attribute is set for this table and a dictionary does not already exist for this table. INSPECT will attempt to insert the dictionary concurrently with other applications accessing the table. Dictionary insert requires an Exclusive Table Alter lock and an Intent on Exclusive Table lock. INSPECT will only insert a dictionary into tables that support Row Compression. For partitioned tables, a separate dictionary is built and inserted on each partition.
RESULTS Specifies the result output file. The file will be written out to the diagnostic data directory path. If there is no error found by the check processing, this result output file will be erased at the end of the INSPECT operation.
If there are errors found by the check processing, this result output file will not be erased at the end of the INSPECT operation. KEEP Specifies to always keep the result output file. file-name Specifies the name for the result output file. ALL DBPARTITIONNUMS Specifies that operation is to be done on all database partitions specified in the db2nodes.cfg file. This is the default if a node clause is not specified.
EXCEPT Specifies that operation is to be done on all database partitions specified in the db2nodes.cfg file, except those specified in the node list.
ON DBPARTITIONNUM / ON DBPARTITIONNUMS Perform operation on a set of database partitions.
db-partition-number1 Specifies a database partition number in the database partition list.
db-partition-number2 Specifies the second database partition number, so that all database partitions from db-partition-number1 up to and including db-partition-number2 are included in the database partition list.
FOR ERROR STATE ALL For a table object with an internal state already indicating error state, the check will just report this status and not scan through the object. Specifying this option will have the processing scan through the object even if the internal state already lists error state.
LIMIT ERROR TO n Number of pages in error for an object at which to limit reporting. When this limit on the number of pages in error for an object is reached, the processing will discontinue the check on the rest of the object.
LIMIT ERROR TO DEFAULT Default number of pages in error for an object at which to limit reporting. This value is the extent size of the object. This parameter is the default.
LIMIT ERROR TO ALL No limit on number of pages in error reported.
EXTENTMAP
NORMAL Specifies processing level is normal for extent map. Default.
NONE Specifies processing level is none for extent map.
LOW Specifies processing level is low for extent map.
DATA
NORMAL Specifies processing level is normal for data object. Default.
NONE Specifies processing level is none for data object.
LOW Specifies processing level is low for data object.
BLOCKMAP
NORMAL Specifies processing level is normal for block map object. Default.
NONE Specifies processing level is none for block map object.
LOW Specifies processing level is low for block map object.
INDEX
NORMAL Specifies processing level is normal for index object. Default.
NONE Specifies processing level is none for index object.
LOW Specifies processing level is low for index object.
LONG
NORMAL Specifies processing level is normal for long object. Default.
NONE Specifies processing level is none for long object.
LOW Specifies processing level is low for long object.
LOB
NORMAL Specifies processing level is normal for LOB object. Default.
NONE Specifies processing level is none for LOB object.
LOW Specifies processing level is low for LOB object.
XML
NORMAL Specifies processing level is normal for XML column object. Default. Pages of XML object will be checked for most inconsistencies. Actual XML data will not be inspected.
NONE Specifies processing level is none for XML column object. XML object will not be inspected at all.
LOW Specifies processing level is low for XML column object. Pages of XML object will be checked for some inconsistencies. Actual XML data will not be inspected.
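The ROWCOMPESTIMATE parameter reports percentage savings for bytes and pages. The arithmetic behind such percentages can be sketched as follows; this is illustrative only — the sampling and dictionary build are internal to INSPECT, and the function below is hypothetical.

```python
def compression_estimates(bytes_before, bytes_after, pages_before, pages_after):
    """Percentage-saved figures of the kind ROWCOMPESTIMATE reports."""
    pct_bytes_saved = 100.0 * (bytes_before - bytes_after) / bytes_before
    pct_pages_saved = 100.0 * (pages_before - pages_after) / pages_before
    return round(pct_bytes_saved), round(pct_pages_saved)

# A sample that compresses from 1000 to 400 bytes and 100 to 45 pages.
print(compression_estimates(1000, 400, 100, 45))  # (60, 55)
```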
Usage Notes:
1. For check operations on table objects, the level of processing can be specified for the objects. The default is NORMAL level; specifying NONE for an object excludes it. Specifying LOW will do a subset of the checks that are done for NORMAL.
2. The check database can be specified to start from a specific table space or from a specific table by specifying the ID value to identify the table space or the table.
3. The check table space can be specified to start from a specific table by specifying the ID value to identify the table.
4. The processing of table spaces will affect only the objects that reside in the table space.
5. The online inspect processing will access database objects using isolation level uncommitted read. COMMIT processing will be done during INSPECT processing. It is advisable to end the unit of work by issuing a COMMIT or ROLLBACK before invoking INSPECT.
6. The online inspect check processing will write out unformatted inspection data results to the results file specified. The file will be written out to the diagnostic
data directory path. If there is no error found by the check processing, this result output file will be erased at the end of the INSPECT operation. If there are errors found by the check processing, this result output file will not be erased at the end of the INSPECT operation. After check processing completes, the inspection result data must be formatted with the db2inspf utility before the inspection details can be viewed. The results file will have a file extension of the database partition number.
7. In a partitioned database environment, each database partition will generate its own results output file with an extension corresponding to its database partition number. The output location for the results output file will be the database manager diagnostic data directory path. If the name of a file that already exists is specified, the operation will not be processed; the file will have to be removed before that file name can be specified.
8. Normal online inspect processing will access database objects using isolation level uncommitted read. Inserting a compression dictionary into the table will attempt to acquire write locks. Refer to the ROWCOMPESTIMATE option for details on dictionary insert locking. Commit processing will be done during the inspect processing. It is advisable to end the unit of work by issuing a COMMIT or ROLLBACK before starting the inspect operation.

Related reference:
v db2Inspect API - Inspect database for architectural integrity in Administrative API Reference
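Usage notes 6 and 7 state that each database partition writes its own results file whose extension is its partition number. A hypothetical helper that derives those names for a script driving db2inspf; the zero-padded, three-digit extension format is an assumption for illustration, not documented above.

```python
def results_files(file_name, partitions):
    """Per-partition result file names, assuming a three-digit numeric extension."""
    return ["%s.%03d" % (file_name, p) for p in partitions]

print(results_files("inspect.out", [0, 2]))
```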
Command parameters:
AT DBPARTITIONNUM db-partition-number Specifies the database partition for which the list of active databases is to be displayed.
GLOBAL Returns an aggregate result for all nodes in a partitioned database system.

Examples: Following is sample output from the LIST ACTIVE DATABASES command:
                            Active Databases

Database name                              = TEST
Applications connected currently           = 0
Database path                              = /home/smith/smith/NODE0000/SQL00002/

Database name                              = SAMPLE
Applications connected currently           = 1
Database path                              = /home/smith/smith/NODE0000/SQL00001/
Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM.
LIST APPLICATIONS
Displays to standard output the application program name, authorization ID (user name), application handle, application ID, and database name of all active database applications. This command can also optionally display an application's sequence number, status, status change time, and database path.

Scope: This command only returns information for the database partition on which it is issued.

Authorization: One of the following:
v sysadm
v sysctrl
v sysmaint
v sysmon
Required connection: Instance. To list applications for a remote instance, it is necessary to first attach to that instance. Command syntax:
LIST APPLICATIONS [FOR {DATABASE | DB} database-alias]
                  [AT DBPARTITIONNUM db-partition-number | GLOBAL]
                  [SHOW DETAIL]
Command parameters:
FOR DATABASE database-alias Information for each application that is connected to the specified database is to be displayed. Database name information is not displayed. If this option is not specified, the command displays the information for each application that is currently connected to any database at the database partition to which the user is currently attached. The default application information comprises the following:
v Authorization ID
v Application program name
v Application handle
v Application ID
v Database name.
AT DBPARTITIONNUM db-partition-number Specifies the database partition for which the list of applications is to be displayed.
GLOBAL Returns an aggregate result for all database partitions in a partitioned database system. SHOW DETAIL Output will include the following additional information: v Sequence # v Application status v Status change time v Database path. If this option is specified, it is recommended that the output be redirected to a file, and that the report be viewed with the help of an editor. The output lines might wrap around when displayed on the screen. Examples: To list detailed information about the applications connected to the SAMPLE database, issue:
list applications for database sample show detail
Usage notes: The database administrator can use the output from this command as an aid to problem determination. In addition, this information is required if the database administrator wants to use the GET SNAPSHOT command or the FORCE APPLICATION command in an application.

To list applications at a remote instance (or a different local instance), it is necessary to first attach to that instance. If FOR DATABASE is specified when an attachment exists, and the database resides at an instance which differs from the current attachment, the command will fail.

Compatibilities: For compatibility with versions earlier than Version 8:
v The keyword NODE can be substituted for DBPARTITIONNUM.

Related reference:
v GET SNAPSHOT on page 487
v FORCE APPLICATION on page 439
v APPLICATIONS administrative view - Retrieve connected database application information in Administrative SQL Routines and Views
v FORCE APPLICATION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
Command parameters: None Examples: The following is sample output from LIST COMMAND OPTIONS:
            Command Line Processor Option Settings

 Backend process wait time (seconds)       (DB2BQTIME) = 1
 No. of retries to connect to backend      (DB2BQTRY)  = 60
 Request queue wait time (seconds)         (DB2RQTIME) = 5
 Input queue wait time (seconds)           (DB2IQTIME) = 5
 Command options                           (DB2OPTIONS) =

 Option  Description                                  Current Setting
 ------  ------------------------------------------   ---------------
   -a    Display SQLCA                                OFF
   -c    Auto-Commit                                  ON
   -d    XML declarations                             OFF
   -e    Display SQLCODE/SQLSTATE                     OFF
   -f    Read from input file                         OFF
   -l    Log commands in history file                 OFF
   -n    Remove new line character                    OFF
   -o    Display output                               ON
   -p    Display interactive input prompt             ON
   -r    Save output to report file                   OFF
   -s    Stop execution on command error              OFF
   -t    Set statement termination character          OFF
   -v    Echo current command                         OFF
   -w    Display FETCH/SELECT warning messages        ON
   -z    Save all output to output file               OFF
Command parameters: ON path/drive Specifies the local database directory from which to list information. If not specified, the contents of the system database directory are listed. Note that the instance name is implied in the path. Please do not specify the instance name as part of the path. Examples: The following shows sample output for a system database directory:
 System Database Directory

 Number of entries in the directory = 2

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/smith
 Database release level               = 8.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            = montero
 Alternate server port number         = 29384

Database 2 entry:

 Database alias                       = TC004000
 Database name                        = TC004000
 Node name                            = PRINODE
 Database release level               = a.00
 Comment                              =
These fields are identified as follows: Database alias The value of the alias parameter when the database was created or cataloged. If an alias was not entered when the database was cataloged, the database manager uses the value of the database-name parameter when the database was cataloged. Database name The value of the database-name parameter when the database was cataloged. This name is usually the name under which the database was created. Local database directory The path on which the database resides. This field is filled in only if the system database directory has been scanned. Database directory The name of the directory where the database resides. This field is filled in only if the local database directory has been scanned. Node name The name of the remote node. This name corresponds to the value entered for the nodename parameter when the database and the node were cataloged. Database release level The release level of the database manager that can operate on the database. Comment Any comments associated with the database that were entered when it was cataloged. Directory entry type The location of the database: v A remote entry describes a database that resides on another node. v An indirect entry describes a database that is local. Databases that reside on the same node as the system database directory are thought to indirectly reference the home entry (to a local database directory), and are considered indirect entries.
Command parameters: SHOW DETAIL Specifies that the output should include the following information: v Distribution map ID v Database partition number v In-use flag Examples: Following is sample output from the LIST DATABASE PARTITION GROUPS command:
DATABASE PARTITION GROUP NAME ----------------------------IBMCATGROUP IBMDEFAULTGROUP 2 record(s) selected.
Following is sample output from the LIST DATABASE PARTITION GROUPS SHOW DETAIL command:
DATABASE PARTITION GROUP NAME PMAP_ID DATABASE PARTITION NUMBER IN_USE ------------------------------ ------- ------------------------- -----IBMCATGROUP 0 0 Y IBMDEFAULTGROUP 1 0 Y 2 record(s) selected.
Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODEGROUPS can be substituted for DATABASE PARTITION GROUPS. Related reference: v REDISTRIBUTE DATABASE PARTITION GROUP on page 633
LIST DBPARTITIONNUMS
Lists all database partitions associated with the current database. Scope: This command can be issued from any database partition that is listed in $HOME/sqllib/db2nodes.cfg. It returns the same information from any of these database partitions. Authorization: None Required connection: Database Command syntax:
LIST DBPARTITIONNUMS
Command parameters: None Examples: Following is sample output from the LIST DBPARTITIONNUMS command:
DATABASE PARTITION NUMBER ------------------------0 2 5 7 9 5 record(s) selected.
Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODES can be substituted for DBPARTITIONNUMS. Related reference: v REDISTRIBUTE DATABASE PARTITION GROUP on page 633
Command parameters: LIST DCS APPLICATIONS The default application information includes: v Host authorization ID (username) v Application program name v Application handle v Outbound application ID (luwid). SHOW DETAIL Specifies that output include the following additional information: v Client application ID v Client sequence number v Client database alias v Client node name (nname) v Client release level v Client code page v Outbound sequence number v Host database name v Host release level. EXTENDED Generates an extended report. This report includes all of the fields that are listed when the SHOW DETAIL option is specified, plus the following additional fields: v DCS application status v Status change time v Client platform
Notes: 1. The application status field contains one of the following values: connect pending - outbound Denotes that the request to connect to a host database has been issued, and that DB2 Connect is waiting for the connection to be established. waiting for request Denotes that the connection to the host database has been established, and that DB2 Connect is waiting for an SQL statement from the client application. waiting for reply Denotes that the SQL statement has been sent to the host database. 2. The status change time is shown only if the System Monitor UOW switch was turned on during processing. Otherwise, Not Collected is shown. Usage notes: The database administrator can use this command to match client application connections to the gateway with corresponding host connections from the gateway. The database administrator can also use agent ID information to force specified applications off a DB2 Connect server. Related reference: v FORCE APPLICATION on page 439
Command parameters: None Examples: The following is sample output from LIST DCS DIRECTORY:
 Database Connection Services (DCS) Directory

 Number of entries in the directory = 1

DCS 1 entry:

 Local database name                = DB2
 Target database name               = DSN_DB_1
 Application requestor name         =
 DCS parameters                     =
 Comment                            = DB2/MVS Location name DSN_DB_1
 DCS directory release level        = 0x0100
These fields are identified as follows: Local database name Specifies the local alias of the target host database. This corresponds to the database-name parameter entered when the host database was cataloged in the DCS directory. Target database name Specifies the name of the host database that can be accessed. This corresponds to the target-database-name parameter entered when the host database was cataloged in the DCS directory. Application requester name Specifies the name of the program residing on the application requester or server. DCS parameters String that contains the connection and operating environment parameters to use with the application requester. Corresponds to the parameter string entered when the host database was cataloged. The string must be enclosed by double quotation marks, and the parameters must be separated by commas. Comment Describes the database entry.
Command parameters: WITH PROMPTING Indicates that indoubt transactions are to be processed. If this parameter is specified, an interactive dialog mode is initiated, permitting the user to commit or roll back indoubt transactions. If this parameter is not specified, indoubt transactions are written to the standard output device, and the interactive dialog mode is not initiated. A forget option is not supported. Once the indoubt transaction is committed or rolled back, the transaction is automatically forgotten. Interactive dialog mode permits the user to: v List all indoubt transactions (enter l) v List indoubt transaction number x (enter l, followed by a valid transaction number) v Quit (enter q) v Commit transaction number x (enter c, followed by a valid transaction number) v Roll back transaction number x (enter r, followed by a valid transaction number). A blank space must separate the command letter from its argument. Before a transaction is committed or rolled back, the transaction data is displayed, and the user is asked to confirm the action. Usage notes: DRDA indoubt transactions occur when communication is lost between coordinators and participants in distributed units of work. A distributed unit of work lets a user or application read and update data at multiple locations within a single unit of work. Such work requires a two-phase commit. The first phase requests all the participants to prepare for a commit. The second phase commits or rolls back the transactions. If a coordinator or participant becomes unavailable after the first phase, the distributed transactions are indoubt.
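The interactive dialog described above accepts a command letter optionally followed by a transaction number, with a blank separating the two. A minimal Python parser sketching that grammar for illustration; the CLP's actual parser is not shown in this manual.

```python
def parse_indoubt_command(line):
    """Parse 'l', 'l n', 'q', 'c n', or 'r n'; return a tuple or None."""
    parts = line.strip().split()
    if not parts:
        return None
    cmd, args = parts[0], parts[1:]
    if cmd == "q" and not args:
        return ("quit",)
    if cmd == "l":
        # 'l' alone lists all indoubt transactions; 'l n' lists one.
        return ("list", int(args[0])) if args else ("list-all",)
    if cmd in ("c", "r") and len(args) == 1:
        return ("commit" if cmd == "c" else "rollback", int(args[0]))
    return None

print(parse_indoubt_command("c 2"))  # ('commit', 2)
```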
LIST HISTORY
Lists entries in the history file. The history file contains a record of recovery and administrative events. Recovery events include full database and table space level backup, incremental backup, restore, and rollforward operations. Additional logged events include create, alter, drop, or rename table space, reorganize table, drop table, and load. Authorization: None Required connection: Instance. You must attach to any remote database in order to run this command against it. For a local database, an explicit attachment is not required. Command syntax:
LIST HISTORY [BACKUP | ROLLFORWARD | DROPPED TABLE | LOAD | CREATE TABLESPACE |
              ALTER TABLESPACE | RENAME TABLESPACE | REORG | ARCHIVE LOG]
             [ALL | SINCE timestamp | CONTAINING {schema.object_name | object_name}]
             FOR {DATABASE | DB} database-alias
Command parameters: HISTORY Lists all events that are currently logged in the history file. BACKUP Lists backup and restore operations. ROLLFORWARD Lists rollforward operations. DROPPED TABLE Lists dropped table records. A dropped table record is created only when the table is dropped and the table space containing it has the DROPPED TABLE RECOVERY option enabled. Returns the CREATE TABLE syntax for partitioned tables and indicates which table spaces contained data for the table that was dropped. LOAD Lists load operations. CREATE TABLESPACE Lists table space create and drop operations.
RENAME TABLESPACE Lists table space renaming operations. REORG Lists reorganization operations. Includes information for each reorganized data partition of a partitioned table. ALTER TABLESPACE Lists alter table space operations. ARCHIVE LOG Lists archive log operations and the archived logs. ALL Lists all entries of the specified type in the history file.
SINCE timestamp A complete time stamp (format yyyymmddhhmmss), or an initial prefix (minimum yyyy) can be specified. All entries with time stamps equal to or greater than the time stamp provided are listed. CONTAINING schema.object_name This qualified name uniquely identifies a table. CONTAINING object_name This unqualified name uniquely identifies a table space. FOR DATABASE database-alias Used to identify the database whose recovery history file is to be listed. Examples:
db2 list history since 19980201 for sample db2 list history backup containing userspace1 for sample db2 list history dropped table all for db sample
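The SINCE parameter accepts a full yyyymmddhhmmss time stamp or any prefix down to yyyy, and lists entries with time stamps equal to or greater than the value given. One way to express that comparison in Python; this is an illustrative sketch, and zero-padding the prefix is an implementation assumption.

```python
def matches_since(entry_timestamp, since):
    """True if a history entry passes a LIST HISTORY ... SINCE filter."""
    if not 4 <= len(since) <= 14:
        raise ValueError("SINCE value must be yyyy through yyyymmddhhmmss")
    # Pad a prefix such as '1998' out to '19980000000000' so that plain
    # string comparison implements 'equal to or greater than'.
    return entry_timestamp >= since.ljust(14, "0")

print(matches_since("19980201123000", "1998"))  # True
```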
Usage notes: The SYSIBMADM.DB_HISTORY administrative view can be used to retrieve data from all database partitions. The report generated by this command contains the following symbols:
Operation
    A - Create table space
    B - Backup
    C - Load copy
    D - Dropped table
    F - Roll forward
    G - Reorganize table
    L - Load
    N - Rename table space
    O - Drop table space
    Q - Quiesce
    R - Restore
    T - Alter table space
    U - Unload
    X - Archive log

Type
  Archive log types:
    P - Primary log path
    M - Secondary (mirror) log path
    N - Archive log command
    F - Failover archive path
    1 - Primary log archive method
    2 - Secondary log archive method

  Backup types:
    F - Offline
    N - Online
    I - Incremental offline
    O - Incremental online
    D - Delta offline
    E - Delta online
    R - Rebuild

  Rollforward types:
    E - End of logs
    P - Point in time

  Load types:
    I - Insert
    R - Replace

  Alter table space types:
    C - Add containers
    R - Rebalance

  Quiesce types:
    S - Quiesce share
    U - Quiesce update
    X - Quiesce exclusive
    Z - Quiesce reset
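For scripts that post-process LIST HISTORY output, the legend above can be transcribed into lookup tables. A Python sketch; the dictionaries are transcriptions of the symbols listed in this section, not a DB2 API.

```python
# Operation symbols from the LIST HISTORY report legend.
OPERATIONS = {
    "A": "Create table space", "B": "Backup",            "C": "Load copy",
    "D": "Dropped table",      "F": "Roll forward",      "G": "Reorganize table",
    "L": "Load",               "N": "Rename table space",
    "O": "Drop table space",   "Q": "Quiesce",           "R": "Restore",
    "T": "Alter table space",  "U": "Unload",            "X": "Archive log",
}

# Backup type symbols from the same legend.
BACKUP_TYPES = {
    "F": "Offline",             "N": "Online",
    "I": "Incremental offline", "O": "Incremental online",
    "D": "Delta offline",       "E": "Delta online",     "R": "Rebuild",
}

print(OPERATIONS["B"], "-", BACKUP_TYPES["N"])  # Backup - Online
```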
Related concepts:
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference

Related reference:
v DB_HISTORY administrative view - Retrieve history file information in Administrative SQL Routines and Views
Command parameters: WITH PROMPTING Indicates that indoubt transactions are to be processed. If this parameter is specified, an interactive dialog mode is initiated, permitting the user to commit, roll back, or forget indoubt transactions. If this parameter is not specified, indoubt transactions are written to the standard output device, and the interactive dialog mode is not initiated. Interactive dialog mode permits the user to: v List all indoubt transactions (enter l) v List indoubt transaction number x (enter l, followed by a valid transaction number) v Quit (enter q) v Commit transaction number x (enter c, followed by a valid transaction number) v Roll back transaction number x (enter r, followed by a valid transaction number) v Forget transaction number x (enter f, followed by a valid transaction number).
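The usage notes that follow constrain which of these dialog actions are valid for a given transaction status: commit for i, m, and d; roll back for i, b, and e; forget for c, r, d, and b. A hypothetical Python restatement of that table:

```python
def allowed_actions(status):
    """Actions permitted for an indoubt-transaction status code."""
    actions = set()
    if status in ("i", "m", "d"):       # indoubt, missing commit ack,
        actions.add("commit")           # missing federated commit ack
    if status in ("i", "b", "e"):       # indoubt, missing federated rollback ack, ended
        actions.add("rollback")
    if status in ("c", "r", "d", "b"):  # committed, rolled back, federated acks
        actions.add("forget")
    return actions

print(sorted(allowed_actions("i")))  # ['commit', 'rollback']
```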
Usage notes: An indoubt transaction is a global transaction that was left in an indoubt state. This occurs when either the Transaction Manager (TM) or at least one Resource Manager (RM) becomes unavailable after successfully completing the first phase (that is, the PREPARE phase) of the two-phase commit protocol. The RMs do not know whether to commit or to roll back their branch of the transaction until the TM can consolidate its own log with the indoubt status information from the RMs when they again become available. An indoubt transaction can also exist in an MPP environment. If LIST INDOUBT TRANSACTIONS is issued against the currently connected database, the command returns the information on indoubt transactions in that database. Only transactions whose status is indoubt (i), or missing commit acknowledgment (m), or missing federated commit acknowledgment (d) can be committed. Only transactions whose status is indoubt (i), missing federated rollback acknowledgment (b), or ended (e) can be rolled back. Only transactions whose status is committed (c), rolled back (r), missing federated commit acknowledgment (d), or missing federated rollback acknowledgment (b) can be forgotten. In the commit phase of a two-phase commit, the coordinator node waits for commit acknowledgments. If one or more nodes do not reply (for example, because of node failure), the transaction is placed in missing commit acknowledgment state. Indoubt transaction information is valid only at the time that the command is issued. Once in interactive dialog mode, transaction status might change because of external activities. If this happens, and an attempt is made to process an indoubt transaction which is no longer in an appropriate state, an error message is displayed. After this type of error occurs, the user should quit (q) the interactive dialog and reissue the LIST INDOUBT TRANSACTIONS WITH PROMPTING command to refresh the information shown. Related concepts:
Command parameters:

ADMIN
   Specifies administration server nodes.
SHOW DETAIL
   Specifies that the output should include the following information:
   v Remote instance name
   v System
   v Operating system type

Examples:

The following is sample output from LIST NODE DIRECTORY:
Node Directory

 Number of entries in the directory = 2

Node 1 entry:

 Node name                      = TLBA10ME
 Comment                        =
 Directory entry type           = LOCAL
 Protocol                       = TCPIP
 Hostname                       = tlba10me
 Service name                   = 447

Node 2 entry:

 Node name                      = LANNODE
 Comment                        =
 Directory entry type           = LDAP
 Protocol                       = TCPIP
 Hostname                       = LAN.db2ntd3.torolab.ibm.com
 Service name                   = 50000
Node 1 entry:

 Node name                      = MYDB2DAS
 Comment                        =
 Directory entry type           = LDAP
 Protocol                       = TCPIP
 Hostname                       = peng.torolab.ibm.com
 Service name                   = 523

Node 2 entry:

 Node name                      = LOCALADM
 Comment                        =
 Directory entry type           = LOCAL
 Protocol                       = TCPIP
 Hostname                       = jaguar
 Service name                   = 523
The common fields are identified as follows:

Node name
   The name of the remote node. This corresponds to the name entered for the nodename parameter when the node was cataloged.
Comment
   A comment associated with the node, entered when the node was cataloged. To change a comment in the node directory, uncatalog the node, and then catalog it again with the new comment.
Directory entry type
   LOCAL means the entry is found in the local node directory file. LDAP means the entry is found on the LDAP server or LDAP cache.
Protocol
   The communications protocol cataloged for the node.

For information about fields associated with a specific node type, see the applicable CATALOG...NODE command.

Usage notes:

A node directory is created and maintained on each database client. It contains an entry for each remote workstation having databases that the client can access. The DB2 client uses the communication end point information in the node directory whenever a database connection or instance attachment is requested.

The database manager creates a node entry and adds it to the node directory each time it processes a CATALOG...NODE command. The entries can vary, depending on the communications protocol being used by the node. The node directory can contain entries for the following types of nodes:
v LDAP
v Local
v Named pipe
v TCPIP
v TCPIP4
v TCPIP6
Command parameters:

USER
   List only user ODBC data sources. This is the default if no keyword is specified.
SYSTEM
   List only system ODBC data sources.

Examples:

The following is sample output from the LIST ODBC DATA SOURCES command:
User ODBC Data Sources

Data source name                 Description
-------------------------------- ----------------------------------------
SAMPLE                           IBM DB2 ODBC DRIVER
Related reference:
v CATALOG ODBC DATA SOURCE on page 386
v UNCATALOG ODBC DATA SOURCE on page 753
LIST PACKAGES/TABLES
Lists packages or tables associated with the current database.

Authorization:

For the system catalog SYSCAT.PACKAGES (LIST PACKAGES) and SYSCAT.TABLES (LIST TABLES), one of the following is required:
v sysadm or dbadm authority
v CONTROL privilege
v SELECT privilege

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established.

Command syntax:
LIST {PACKAGES | TABLES} [FOR {USER | ALL | SCHEMA schema-name | SYSTEM}] [SHOW DETAIL]
Command parameters:

FOR
   If the FOR clause is not specified, the packages or tables for USER are listed.
ALL
   Lists all packages or tables in the database.
SCHEMA schema-name
   Lists all packages or tables in the database for the specified schema only.
SYSTEM
   Lists all system packages or tables in the database.
USER
   Lists all user packages or tables in the database for the current user.
SHOW DETAIL
   If this option is chosen with the LIST TABLES command, the full table name and schema name are displayed. If this option is not specified, the table name is truncated to 30 characters, with the ">" symbol in the 31st column representing the truncated portion of the table name, and the schema name is truncated to 14 characters, with the ">" symbol in the 15th column representing the truncated portion of the schema name.
   If this option is chosen with the LIST PACKAGES command, the full package schema (creator), version, and bound-by authorization ID are displayed, as well as the package unique_id (the consistency token, shown in hexadecimal form). If this option is not specified, the schema name and bound-by ID are truncated to 8 characters, with the ">" symbol in the 9th column representing the truncated portion of the schema or bound-by ID, and the version is truncated to 10 characters, with the ">" symbol in the 11th column representing the truncated portion of the version.
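The truncation rule described for SHOW DETAIL can be sketched in a few lines. This is illustrative Python, not DB2 code; the function name is invented for this example.

```python
# Illustrative sketch (not DB2 code): how LIST TABLES shortens names when
# SHOW DETAIL is not specified -- table names to 30 characters and schema
# names to 14, with ">" in the next column marking the truncated portion.
def truncate(name: str, width: int) -> str:
    """Truncate `name` to `width` characters, flagging any cut with '>'."""
    if len(name) <= width:
        return name
    return name[:width] + ">"

print(truncate("A_VERY_LONG_TABLE_NAME_OVER_30_CHARACTERS", 30))
# → A_VERY_LONG_TABLE_NAME_OVER_30>
```

The same rule with width 14 (schema names), 8 (package schema and bound-by ID), or 10 (version) reproduces the other truncations described above.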
Examples:

The following is sample output from LIST PACKAGES:
Package    Schema     Version    Bound      Total      Valid  Format  Isolation  Blocking
                                 by         sections                  level
---------- ---------- ---------- ---------- ---------- ------ ------- ---------- --------
F4INS      USERA      VER1       SNOWBELL   221        Y      0       CS         U
F4INS      USERA      VER2.0     SNOWBELL   201        Y      0       RS         U
F4INS      USERA      VER2.3     SNOWBELL   201        N      3       CS         U
F4INS      USERA      VER2.5     SNOWBELL   201        Y      0       CS         U
PKG12      USERA                 USERA      12         Y      3       RR         B
PKG15      USERA                 USERA      42         Y      3       RR         B
SALARY     USERT      YEAR2000   USERT      15         Y      3       CS         N
9 record(s) selected.
Usage notes:

The LIST PACKAGES and LIST TABLES commands are available to provide a quick interface to the system tables. The following SELECT statements return information found in the system tables. They can be expanded to select the additional information that the system tables provide.
select tabname, tabschema, type, create_time from syscat.tables
   order by tabschema, tabname;

select pkgname, pkgschema, pkgversion, unique_id, boundby, total_sect,
   valid, format, isolation, blocking from syscat.packages
   order by pkgschema, pkgname, pkgversion;

select tabname, tabschema, type, create_time from syscat.tables
   where tabschema = 'SYSCAT' order by tabschema, tabname;

select pkgname, pkgschema, pkgversion, unique_id, boundby, total_sect,
   valid, format, isolation, blocking from syscat.packages
   where pkgschema = 'NULLID' order by pkgschema, pkgname, pkgversion;

select tabname, tabschema, type, create_time from syscat.tables
   where tabschema = USER order by tabschema, tabname;
select pkgname, pkgschema, pkgversion, unique_id, boundby, total_sect,
   valid, format, isolation, blocking from syscat.packages
   where pkgschema = USER order by pkgschema, pkgname, pkgversion;
Related concepts:
v Catalog views in SQL Reference, Volume 1
v Efficient SELECT statements in Performance Guide
Command parameters:

FOR tablespace-id
   An integer that uniquely represents a table space used by the current database. To get a list of all the table spaces used by the current database, use the LIST TABLESPACES command.
SHOW DETAIL
   If this option is not specified, only the following basic information about each container is provided:
   v Container ID
   v Name
   v Type (file, disk, or path)
   If this option is specified, the following additional information about each container is provided:
   v Total number of pages
   v Number of useable pages
   v Accessible (yes or no)

Examples:

The following is sample output from LIST TABLESPACE CONTAINERS:
Tablespace Containers for Tablespace 0

 Container ID                   = 0
The following is sample output from LIST TABLESPACE CONTAINERS with SHOW DETAIL specified:
Tablespace Containers for Tablespace 0

 Container ID                   = 0
 Name                           = /home/smith/smith/NODE0000/SQL00001/SQLT0000.0
 Type                           = Path
 Total pages                    = 895
 Useable pages                  = 895
 Accessible                     = Yes
Related concepts:
v Snapshot monitor in System Monitor Guide and Reference

Related reference:
v sqlbtcq API - Get the query data for all table space containers in Administrative API Reference
v LIST TABLESPACES on page 550
v CONTAINER_UTILIZATION administrative view - Retrieve table space container and utilization information in Administrative SQL Routines and Views
v TBSP_UTILIZATION administrative view - Retrieve table space configuration and utilization information in Administrative SQL Routines and Views
LIST TABLESPACES
Lists table spaces for the current database. Information displayed by this command is also available in the table space snapshot.

Scope:

This command returns information only for the node on which it is executed.

Authorization:

One of the following:
v sysadm
v sysctrl
v sysmaint
v dbadm
v load

Required connection:

Database

Command syntax:
LIST TABLESPACES SHOW DETAIL
Command parameters:

SHOW DETAIL
   If this option is not specified, only the following basic information about each table space is provided:
   v Table space ID
   v Name
   v Type (system managed space or database managed space)
   v Contents (any data, long or index data, or temporary data)
   v State, a hexadecimal value indicating the current table space state. The externally visible state of a table space is composed of the hexadecimal sum of certain state values. For example, if the state is quiesced: EXCLUSIVE and load pending, the value is 0x0004 + 0x0008, which is 0x000c. The db2tbst (Get Tablespace State) command can be used to obtain the table space state associated with a given hexadecimal value. Following are the bit definitions listed in sqlutil.h:

   0x0         Normal
   0x1         Quiesced: SHARE
   0x2         Quiesced: UPDATE
   0x4         Quiesced: EXCLUSIVE
   0x8         Load pending
   0x10        Delete pending
   0x20        Backup pending
   0x40        Roll forward in progress
   0x80        Roll forward pending
   0x100       Restore pending
   0x100       Recovery pending (not used)
   0x200       Disable pending
   0x400       Reorg in progress
   0x800       Backup in progress
   0x1000      Storage must be defined
   0x2000      Restore in progress
   0x4000      Offline and not accessible
   0x8000      Drop pending
   0x2000000   Storage may be defined
   0x4000000   StorDef is in final state
   0x8000000   StorDef was changed prior to rollforward
   0x10000000  DMS rebalancer is active
   0x20000000  TBS deletion in progress
   0x40000000  TBS creation in progress
   0x8         For service use only
   If this option is specified, the following additional information about each table space is provided:
   v Total number of pages
   v Number of usable pages
   v Number of used pages
   v Number of free pages
   v High water mark (in pages)
   v Page size (in bytes)
   v Extent size (in pages)
   v Prefetch size (in pages)
   v Number of containers
   v Minimum recovery time (displayed only if not zero)
   v State change table space ID (displayed only if the table space state is load pending or delete pending)
   v State change object ID (displayed only if the table space state is load pending or delete pending)
   v Number of quiescers (displayed only if the table space state is quiesced: SHARE, quiesced: UPDATE, or quiesced: EXCLUSIVE)
   v Table space ID and object ID for each quiescer (displayed only if the number of quiescers is greater than zero)

Examples:

The following are two sample outputs from LIST TABLESPACES SHOW DETAIL.
Tablespaces for Current Database

 Tablespace ID                        = 0
 Name                                 = SYSCATSPACE
 Type                                 = Database managed space
 Contents                             = Any data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 895
 Useable pages                        = 895
 Used pages                           = 895
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1
 Tablespace ID                        = 1
 Name                                 = TEMPSPACE1
 Type                                 = System managed space
 Contents                             = Temporary data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 1
 Useable pages                        = 1
 Used pages                           = 1
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1

 Tablespace ID                        = 2
 Name                                 = USERSPACE1
 Type                                 = Database managed space
 Contents                             = Any data
 State                                = 0x000c
   Detailed explanation:
     Quiesced: EXCLUSIVE
     Load pending
 Total pages                          = 337
 Useable pages                        = 337
 Used pages                           = 337
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1
 State change tablespace ID           = 2
 State change object ID               = 3
 Number of quiescers                  = 1
 Quiescer 1:
   Tablespace ID                      = 2
   Object ID                          = 3

DB21011I  In a partitioned database server environment, only the table spaces
on the current node are listed.

Tablespaces for Current Database

 Tablespace ID                        = 0
 Name                                 = SYSCATSPACE
 Type                                 = System managed space
 Contents                             = Any data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 1200
 Useable pages                        = 1200
 Used pages                           = 1200
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1

 Tablespace ID                        = 1
 Name                                 = TEMPSPACE1
 Type                                 = System managed space
 Contents                             = Temporary data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 1
 Useable pages                        = 1
 Used pages                           = 1
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1

 Tablespace ID                        = 2
 Name                                 = USERSPACE1
 Type                                 = System managed space
 Contents                             = Any data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 1
 Useable pages                        = 1
 Used pages                           = 1
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 4096
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1

 Tablespace ID                        = 3
 Name                                 = DMS8K
 Type                                 = Database managed space
 Contents                             = Any data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 2000
 Useable pages                        = 1952
 Used pages                           = 96
 Free pages                           = 1856
 High water mark (pages)              = 96
 Page size (bytes)                    = 8192
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 2

 Tablespace ID                        = 4
 Name                                 = TEMP8K
 Type                                 = System managed space
 Contents                             = Temporary data
 State                                = 0x0000
   Detailed explanation:
     Normal
 Total pages                          = 1
 Useable pages                        = 1
 Used pages                           = 1
 Free pages                           = Not applicable
 High water mark (pages)              = Not applicable
 Page size (bytes)                    = 8192
 Extent size (pages)                  = 32
 Prefetch size (pages)                = 32
 Number of containers                 = 1

DB21011I  In a partitioned database server environment, only the table spaces
on the current node are listed.
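The hexadecimal State values shown in these listings are bit sums of the sqlutil.h definitions given earlier, which is how db2tbst interprets them. A small sketch of that decoding (illustrative Python, not DB2 code; only the first group of bit definitions is included for brevity, and all names are invented for this example):

```python
# Illustrative sketch (not DB2 code): decoding a table space state value
# into state names using the sqlutil.h bit definitions listed above.
STATE_BITS = {
    0x1: "Quiesced: SHARE",
    0x2: "Quiesced: UPDATE",
    0x4: "Quiesced: EXCLUSIVE",
    0x8: "Load pending",
    0x10: "Delete pending",
    0x20: "Backup pending",
    0x40: "Roll forward in progress",
    0x80: "Roll forward pending",
    0x100: "Restore pending",
}

def decode_state(state: int) -> list:
    """Return the state names whose bits are set; 0x0 means Normal."""
    if state == 0:
        return ["Normal"]
    return [name for bit, name in STATE_BITS.items() if state & bit]

# 0x000c = 0x0004 + 0x0008, the USERSPACE1 example above
print(decode_state(0x000C))  # → ['Quiesced: EXCLUSIVE', 'Load pending']
```

This matches the "Detailed explanation" lines shown for state 0x000c in the first sample output.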
Usage notes:

In a partitioned database environment, this command does not return all the table spaces in the database. To obtain a list of all the table spaces, query SYSCAT.TABLESPACES.
During a table space rebalance, the number of usable pages includes pages for the newly added container, but these new pages are not reflected in the number of free pages until the rebalance is complete. When a table space rebalance is not in progress, the number of used pages plus the number of free pages equals the number of usable pages.

Related reference:
v sqlbmtsq API - Get the query data for all table spaces in Administrative API Reference
v LIST TABLESPACE CONTAINERS on page 548
v db2tbst - Get table space state on page 285
v CONTAINER_UTILIZATION administrative view - Retrieve table space container and utilization information in Administrative SQL Routines and Views
v TBSP_UTILIZATION administrative view - Retrieve table space configuration and utilization information in Administrative SQL Routines and Views
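The page-count invariant noted in the usage notes (used + free = usable when no rebalance is in progress) can be checked against LIST TABLESPACES output. Illustrative Python, not DB2 code; the function name is invented for this example.

```python
# Illustrative sketch (not DB2 code): the page-count invariant for a DMS
# table space. When no rebalance is in progress, used + free == usable.
def pages_consistent(used: int, free: int, usable: int, rebalancing: bool) -> bool:
    """Check the LIST TABLESPACES page counts; during a rebalance, pages
    from newly added containers are counted as usable but not yet free."""
    if rebalancing:
        return True  # invariant does not hold until the rebalance completes
    return used + free == usable

# Values from the DMS8K sample output above: 96 used + 1856 free = 1952 usable
print(pages_consistent(96, 1856, 1952, rebalancing=False))  # → True
```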
LIST UTILITIES
Displays to standard output the list of active utilities on the instance. The description of each utility can include attributes such as start time, description, throttling priority (if applicable), as well as progress monitoring information (if applicable).

Scope:

This command only returns information for the database partition on which it is issued.

Authorization:

One of the following:
v sysadm
v sysctrl
v sysmaint

Required connection:

Instance

Command syntax:
LIST UTILITIES SHOW DETAIL
Command parameters:

SHOW DETAIL
   Displays detailed progress information for utilities that support progress monitoring.

Examples:

A RUNSTATS invocation on table some_table:
LIST UTILITIES

ID                               = 1
Type                             = RUNSTATS
Database Name                    = PROD
Description                      = krrose.some_table
Start Time                       = 12/19/2003 11:54:45.773215
Priority                         = 10
Work Metric                      = BYTES
Total Work Units                 = 20232453
Completed Work Units             = 230637
Start Time                       = 10/30/2003 12:55:31.786115
Usage notes:

Use this command to monitor the status of running utilities. For example, you might use this command to monitor the progress of an online backup. In another example, you might investigate a performance problem by using this command to determine which utilities are running. If a utility is suspected of degrading performance, you might elect to throttle it (if the utility supports throttling). The ID from the LIST UTILITIES command is the same ID used in the SET UTIL_IMPACT_PRIORITY command.

Related reference:
v SET UTIL_IMPACT_PRIORITY on page 724
v SNAPUTIL administrative view and SNAP_GET_UTIL table function - Retrieve utility_info logical data group snapshot information in Administrative SQL Routines and Views
v SNAPUTIL_PROGRESS administrative view and SNAP_GET_UTIL_PROGRESS table function - Retrieve progress logical data group snapshot information in Administrative SQL Routines and Views
LOAD
Loads data into a DB2 table. Data residing on the server can be in the form of a file, tape, or named pipe. Data residing on a remotely connected client can be in the form of a fully qualified file or named pipe. Data can also be loaded from a user-defined cursor or by using a user-written script or application. If the COMPRESS attribute for the table is set to YES, the data loaded will be subject to compression on every data and database partition for which a dictionary already exists in the table.

Restrictions:

The load utility does not support loading data at the hierarchy level. The load utility is not compatible with range-clustered tables.

Scope:

This command can be issued against multiple database partitions in a single request.

Authorization:

One of the following:
v sysadm
v dbadm
v load authority on the database and:
   - INSERT privilege on the table when the load utility is invoked in INSERT mode, TERMINATE mode (to terminate a previous load insert operation), or RESTART mode (to restart a previous load insert operation)
   - INSERT and DELETE privilege on the table when the load utility is invoked in REPLACE mode, TERMINATE mode (to terminate a previous load replace operation), or RESTART mode (to restart a previous load replace operation)
   - INSERT privilege on the exception table, if such a table is used as part of the load operation
v To load data into a table that has protected columns, the session authorization ID must have LBAC credentials that allow write access to all protected columns in the table. Otherwise the load fails and an error (SQLSTATE 5U014) is returned.
v To load data into a table that has protected rows, the session authorization ID must hold a security label that meets these criteria:
   - It is part of the security policy protecting the table
   - It was granted to the session authorization ID for write access or for all access
   If the session authorization ID does not hold such a security label, the load fails and an error (SQLSTATE 5U014) is returned. This security label is used to protect a loaded row if the session authorization ID's LBAC credentials do not allow it to write to the security label that protects that row in the data. This does not happen, however, when the security policy protecting the table was created with the RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL option of the CREATE SECURITY POLICY statement. In this case the load fails and an error (SQLSTATE 42519) is returned.
v If the REPLACE option is specified, the session authorization ID must have the authority to drop the table.
Since all load processes (and all DB2 server processes, in general) are owned by the instance owner, and all of these processes use the identification of the instance owner to access needed files, the instance owner must have read access to input data files. These input data files must be readable by the instance owner, regardless of who invokes the command.

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established.

Instance. An explicit attachment is not required. If a connection to the database has been established, an implicit attachment to the local instance is attempted.

Command syntax:
, LOAD CLIENT FROM filename pipename device cursorname OF filetype , LOBS FROM lob-path MODIFIED BY filetype-mod
SAVECOUNT n
ROWCOUNT n
column-position
WARNINGCOUNT n
MESSAGES message-file
STATISTICS
USE PROFILE NO
COPY
NO YES
WITHOUT PROMPTING USE TSM OPEN num-sess SESSIONS , TO device/directory LOAD lib-name OPEN num-sess SESSIONS
NONRECOVERABLE
CPU_PARALLELISM n
DISK_PARALLELISM n FETCH_PARALLELISM
YES NO
INDEXING MODE
ALLOW NO ACCESS ALLOW READ ACCESS USE tablespace-name SET INTEGRITY PENDING CASCADE IMMEDIATE DEFERRED LOCK WITH FORCE
SOURCEUSEREXIT executable REDIRECT INPUT FROM BUFFER input-buffer FILE input-file OUTPUT TO FILE output-file PARALLELIZE OUTPUT TO FILE output-file
Notes:
1. These keywords can appear in any order.
2. Each of these keywords can only appear once.
Command parameters:

CLIENT
   Specifies that the data to be loaded resides on a remotely connected client. This option is ignored if the load operation is not being invoked from a remote client. This option is ignored if specified in conjunction with the CURSOR filetype.
   Notes:
   1. The dumpfile and lobsinfile modifiers refer to files on the server even when the CLIENT keyword is specified.
   2. Code page conversion is not performed during a remote load operation. If the code page of the data is different from that of the server, the data code page should be specified using the codepage modifier.
   In the following example, a data file (/u/user/data.del) residing on a remotely connected client is to be loaded into MYTABLE on the server database:
db2 load client from /u/user/data.del of del modified by codepage=850 insert into mytable
FROM filename/pipename/device/cursorname
   Specifies the file, pipe, device, or cursor referring to an SQL statement that contains the data being loaded. If the input source is a file, pipe, or device, it must reside on the database partition where the database resides, unless the CLIENT option is specified. If several names are specified, they will be processed in sequence. If the last item specified is a tape device, the user is prompted for another tape. Valid response options are:
   c    Continue. Continue using the device that generated the warning message (for example, when a new tape has been mounted).
   d    Device terminate. Stop using the device that generated the warning message (for example, when there are no more tapes).
   t    Terminate. Terminate all devices.
   Notes:
   1. It is recommended that the fully qualified file name be used. If the server is remote, the fully qualified file name must be used. If the database resides on the same database partition as the caller, relative paths can be used.
   2. If data is exported into a file using the EXPORT command using the ADMIN_CMD procedure, the data file is owned by the fenced user ID. This file is not usually accessible by the instance owner. To run the
LOAD from CLP or the ADMIN_CMD procedure, the data file must be accessible by the instance owner ID, so read access to the data file must be granted to the instance owner.
   3. Loading data from multiple IXF files is supported if the files are physically separate, but logically one file. It is not supported if the files are both logically and physically separate. (Multiple physical files would be considered logically one if they were all created with one invocation of the EXPORT command.)
   4. If loading data that resides on a client machine, the data must be in the form of either a fully qualified file or a named pipe.

OF filetype
   Specifies the format of the data:
   v ASC (non-delimited ASCII format)
   v DEL (delimited ASCII format)
   v IXF (integrated exchange format, PC version), exported from the same or from another DB2 table
   v CURSOR (a cursor declared against a SELECT or VALUES statement)

LOBS FROM lob-path
   The path to the data files containing LOB values to be loaded. The path must end with a slash (/). If the CLIENT option is specified, the path must be fully qualified. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the LOB column. The maximum number of paths that can be specified is 999. This will implicitly activate the LOBSINFILE behaviour. This option is ignored when specified in conjunction with the CURSOR filetype.

MODIFIED BY filetype-mod
   Specifies file type modifier options. See File type modifiers for the load utility.

METHOD
   L    Specifies the start and end column numbers from which to load data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1. This method can only be used with ASC files, and is the only valid method for that file type.
        NULL INDICATORS null-indicator-list
           This option can only be used when the METHOD L parameter is specified (that is, the input file is an ASC file). The null indicator list is a comma-separated list of positive integers specifying the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. There must be one entry in the null indicator list for each data field defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data. A value of Y in the NULL indicator column specifies that the column data
is not NULL, and that column data specified by the METHOD L option will be loaded. The NULL indicator character can be changed using the MODIFIED BY option.
   N    Specifies the names of the columns in the data file to be loaded. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable should have a corresponding entry in the METHOD N list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid request, while method N (F2, F1) is not valid. This method can only be used with file types IXF or CURSOR.
   P    Specifies the field numbers (numbered from 1) of the input data fields to be loaded. Each table column that is not nullable should have a corresponding entry in the METHOD P list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not valid. This method can only be used with file types IXF, DEL, or CURSOR, and is the only valid method for the DEL file type.
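The interaction of METHOD L positions and NULL INDICATORS described above can be sketched as follows. This is illustrative Python, not DB2 code; the function, field layout, and sample row are all invented for this example.

```python
# Illustrative sketch (not DB2 code): reading an ASC row using METHOD L
# column positions and a NULL INDICATORS list. Positions are 1-based byte
# offsets; an indicator of 0 means the field always contains data, and a
# "Y" at the indicator position marks the field as NULL.
def parse_asc_row(row: str, fields, null_indicators):
    """fields: list of (start, end) 1-based positions, as in METHOD L;
    null_indicators: one 1-based offset per field, or 0."""
    values = []
    for (start, end), ind in zip(fields, null_indicators):
        if ind != 0 and row[ind - 1] == "Y":
            values.append(None)  # null indicator column contains "Y"
        else:
            values.append(row[start - 1:end].strip())
    return values

# Two fields at positions 1-10 and 11-16; field 2's null indicator is byte 17.
row_data = "Smith     42    Y"
print(parse_asc_row(row_data, [(1, 10), (11, 16)], [0, 17]))
# → ['Smith', None]
```

A corresponding CLP invocation would use METHOD L (1 10, 11 16) NULL INDICATORS (0, 17); any character other than Y at the indicator position causes the field data to be loaded.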
SAVECOUNT n
   Specifies that the load utility is to establish consistency points after every n rows. This value is converted to a page count, and rounded up to intervals of the extent size. Since a message is issued at each consistency point, this option should be selected if the load operation will be monitored using LOAD QUERY. If the value of n is not sufficiently high, the synchronization of activities performed at each consistency point will impact performance. The default value is zero, meaning that no consistency points will be established, unless necessary. This option is ignored when specified in conjunction with the CURSOR filetype.

ROWCOUNT n
   Specifies the number n of physical records in the file to be loaded. Allows a user to load only the first n rows in a file.

WARNINGCOUNT n
   Stops the load operation after n warnings. Set this parameter if no warnings are expected, but verification that the correct file and table are being used is desired. If the load file or the target table is specified incorrectly, the load utility will generate a warning for each row that it attempts to load, which will cause the load to fail. If n is zero, or this option is not specified, the load operation will continue regardless of the number of warnings issued. If the load operation is stopped because the threshold of warnings was encountered, another load operation can be started in RESTART mode. The load operation will automatically continue from the last consistency point. Alternatively, another load operation can be initiated in REPLACE mode, starting at the beginning of the input file.

MESSAGES message-file
   Specifies the destination for warning and error messages that occur during the load operation. If a message file is not specified, messages are written
to standard output. If the complete path to the file is not specified, the load utility uses the current directory and the default drive as the destination. If the name of a file that already exists is specified, the utility appends the information. The message file is usually populated with messages at the end of the load operation and, as such, is not suitable for monitoring the progress of the operation.

TEMPFILES PATH temp-pathname
   Specifies the name of the path to be used when creating temporary files during a load operation, and should be fully qualified according to the server database partition. Temporary files take up file system space. Sometimes, this space requirement is quite substantial. Following is an estimate of how much file system space should be allocated for all temporary files:
   v 136 bytes for each message that the load utility generates
   v 15KB overhead if the data file contains long field data or LOBs. This quantity can grow significantly if the INSERT option is specified, and there is a large amount of long field or LOB data already in the table.

INSERT
   One of four modes under which the load utility can execute. Adds the loaded data to the table without changing the existing table data.

REPLACE
   One of four modes under which the load utility can execute. Deletes all existing data from the table, and inserts the loaded data. The table definition and index definitions are not changed. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.

RESTART
   One of four modes under which the load utility can execute. Restarts a previously interrupted load operation. The load operation will automatically continue from the last consistency point in the load, build, or delete phase.

TERMINATE
   One of four modes under which the load utility can execute. Terminates a previously interrupted load operation, and rolls back the operation to the point in time at which it started, even if consistency points were passed. The states of any table spaces involved in the operation return to normal, and all table objects are made consistent (index objects might be marked as invalid, in which case index rebuild will automatically take place at next access). If the load operation being terminated is a load REPLACE, the table will be truncated to an empty table after the load TERMINATE operation. If the load operation being terminated is a load INSERT, the table will retain all of its original records after the load TERMINATE operation. The load terminate option will not remove a backup pending state from table spaces.

INTO table-name
   Specifies the database table into which the data is to be loaded. This table cannot be a system table or a declared temporary table. An alias, or the fully qualified or unqualified table name can be specified. A qualified table
name is in the form schema.tablename. If an unqualified table name is specified, the table will be qualified with the CURRENT SCHEMA.

insert-column
   Specifies the table column into which the data is to be inserted. The load utility cannot parse columns whose names contain one or more spaces. For example,
db2 load from delfile1 of del noheader method P (1, 2, 3, 4, 5, 6, 7, 8, 9) insert into table1 (BLOB1, S2, I3, Int 4, I5, I6, DT7, I8, TM9)
will fail because of the Int 4 column. The solution is to enclose such column names with double quotation marks:
db2 load from delfile1 of del noheader method P (1, 2, 3, 4, 5, 6, 7, 8, 9) insert into table1 (BLOB1, S2, I3, "Int 4", I5, I6, DT7, I8, TM9)
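Returning to the TEMPFILES PATH sizing guidance above, the rule of thumb (136 bytes per generated message, plus a 15KB overhead when the data contains long field or LOB data) can be turned into a quick back-of-the-envelope calculation. The message count below is a hypothetical input:

```shell
# Rough temporary-file space estimate for a load, following the rule of
# thumb above: 136 bytes for each message the load utility generates,
# plus 15 KB overhead if the data file contains long field or LOB data.
num_messages=10000                 # hypothetical message count
lob_overhead=$((15 * 1024))        # applies only when long/LOB data is present
estimate=$((136 * num_messages + lob_overhead))
echo "Allow roughly $estimate bytes of temporary file space"
```

For INSERT into a table that already holds a large amount of long field or LOB data, the text above warns that this figure can grow significantly, so treat the estimate as a floor rather than a ceiling.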
FOR EXCEPTION table-name Specifies the exception table into which rows in error will be copied. Any row that is in violation of a unique index or a primary key index is copied. If an unqualified table name is specified, the table will be qualified with the CURRENT SCHEMA. Information that is written to the exception table is not written to the dump file. In a partitioned database environment, an exception table must be defined for those database partitions on which the loading table is defined. The dump file, on the other hand, contains rows that cannot be loaded because they are invalid or have syntax errors. NORANGEEXC Indicates that if a row is rejected because of a range violation it will not be inserted into the exception table. NOUNIQUEEXC Indicates that if a row is rejected because it violates a unique constraint it will not be inserted into the exception table. STATISTICS USE PROFILE Instructs load to collect statistics during the load according to the profile defined for this table. This profile must be created before load is executed. The profile is created by the RUNSTATS command. If the profile does not exist and load is instructed to collect statistics according to the profile, a warning is returned and no statistics are collected. STATISTICS NO Specifies that no statistics are to be collected, and that the statistics in the catalogs are not to be altered. This is the default. COPY NO Specifies that the table space in which the table resides will be placed in backup pending state if forward recovery is enabled (that is, logretain or userexit is on). The COPY NO option will also put the table space state into the Load in Progress table space state. This is a transient state that will disappear when the load completes or aborts. The data in any table in the table space cannot be updated or deleted until a table space backup or a full database backup is made. However, it is possible to access the data in any table by using the SELECT statement. 
LOAD with COPY NO on a recoverable database leaves the table spaces in a backup pending state. For example, performing a LOAD with COPY NO
and INDEXING MODE DEFERRED will leave indexes needing a refresh. Certain queries on the table might require an index scan and will not succeed until the indexes are refreshed. The index cannot be refreshed if it resides in a table space which is in the backup pending state. In that case, access to the table will not be allowed until a backup is taken. Index refresh is done automatically by the database when the index is accessed by a query. COPY YES Specifies that a copy of the loaded data will be saved. This option is invalid if forward recovery is disabled (both logretain and userexit are off). USE TSM Specifies that the copy will be stored using Tivoli Storage Manager (TSM). OPEN num-sess SESSIONS The number of I/O sessions to be used with TSM or the vendor product. The default value is 1. TO device/directory Specifies the device or directory on which the copy image will be created. LOAD lib-name The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. It can contain the full path. If the full path is not given, it will default to the path where the user exit programs reside. NONRECOVERABLE Specifies that the load transaction is to be marked as non-recoverable and that it will not be possible to recover it by a subsequent roll forward action. The roll forward utility will skip the transaction and will mark the table into which data was being loaded as "invalid". The utility will also ignore any subsequent transactions against that table. After the roll forward operation is completed, such a table can only be dropped or restored from a backup (full or table space) taken after a commit point following the completion of the non-recoverable load operation. With this option, table spaces are not put in backup pending state following the load operation, and a copy of the loaded data does not have to be made during the load operation. 
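Taken together, the COPY and NONRECOVERABLE options above give three typical patterns on a recoverable database. The following commands are a sketch; the table, file, and path names are hypothetical:

```shell
# COPY NO (the default): the table space is placed in backup pending state
db2 "LOAD FROM trans.del OF DEL INSERT INTO trans COPY NO"

# COPY YES: a copy of the loaded data is saved, so no backup is forced
db2 "LOAD FROM trans.del OF DEL INSERT INTO trans COPY YES TO /db2/loadcopies"

# NONRECOVERABLE: no copy is kept and no backup pending state is set,
# but the load transaction cannot be rolled forward through later
db2 "LOAD FROM trans.del OF DEL INSERT INTO trans NONRECOVERABLE"
```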
WITHOUT PROMPTING Specifies that the list of data files contains all the files that are to be loaded, and that the devices or directories listed are sufficient for the entire load operation. If a continuation input file is not found, or the copy targets are filled before the load operation finishes, the load operation will fail, and the table will remain in load pending state. If this option is not specified, and the tape device encounters an end of tape for the copy image, or the last item listed is a tape device, the user is prompted for a new tape on that device. DATA BUFFER buffer-size Specifies the number of 4KB pages (regardless of the degree of parallelism) to use as buffered space for transferring data within the utility. If the value specified is less than the algorithmic minimum, the minimum required resource is used, and no warning is returned.
This memory is allocated directly from the utility heap, whose size can be modified through the util_heap_sz database configuration parameter. If a value is not specified, an intelligent default is calculated by the utility at run time. The default is based on a percentage of the free space available in the utility heap at the instantiation time of the loader, as well as some characteristics of the table. SORT BUFFER buffer-size This option specifies a value that overrides the SORTHEAP database configuration parameter during a load operation. It is relevant only when loading tables with indexes and only when the INDEXING MODE parameter is not specified as DEFERRED. The value that is specified cannot exceed the value of SORTHEAP. This parameter is useful for throttling the sort memory that is used when loading tables with many indexes without changing the value of SORTHEAP, which would also affect general query processing. CPU_PARALLELISM n Specifies the number of processes or threads that the load utility will spawn for parsing, converting, and formatting records when building table objects. This parameter is designed to exploit intra-partition parallelism. It is particularly useful when loading presorted data, because record order in the source data is preserved. If the value of this parameter is zero, or has not been specified, the load utility uses an intelligent default value (usually based on the number of CPUs available) at run time. Notes: 1. If this parameter is used with tables containing either LOB or LONG VARCHAR fields, its value becomes one, regardless of the number of system CPUs or the value specified by the user. 2. Specifying a small value for the SAVECOUNT parameter causes the loader to perform many more I/O operations to flush both data and table metadata. When CPU_PARALLELISM is greater than one, the flushing operations are asynchronous, permitting the loader to exploit the CPU. 
When CPU_PARALLELISM is set to one, the loader waits on I/O during consistency points. A load operation with CPU_PARALLELISM set to two, and SAVECOUNT set to 10 000, completes faster than the same operation with CPU_PARALLELISM set to one, even though there is only one CPU. DISK_PARALLELISM n Specifies the number of processes or threads that the load utility will spawn for writing data to the table space containers. If a value is not specified, the utility selects an intelligent default based on the number of table space containers and the characteristics of the table. FETCH_PARALLELISM YES/NO When performing a load from a cursor where the cursor is declared using the DATABASE keyword, or when using the API sqlu_remotefetch_entry media entry, and this option is set to YES, the load utility attempts to parallelize fetching from the remote data source if possible. If set to NO, no parallel fetching is performed. The default value is YES. For more information, see Moving data using the CURSOR file type. INDEXING MODE Specifies whether the load utility is to rebuild indexes or to extend them incrementally. Valid values are:
AUTOSELECT The load utility will automatically decide between REBUILD or INCREMENTAL mode. The decision is based on the amount of data being loaded and the depth of the index tree. Information relating to the depth of the index tree is stored in the index object. RUNSTATS is not required to populate this information. AUTOSELECT is the default indexing mode. REBUILD All indexes will be rebuilt. The utility must have sufficient resources to sort all index key parts for both old and appended table data. INCREMENTAL Indexes will be extended with new data. This approach consumes index free space. It only requires enough sort space to append index keys for the inserted records. This method is only supported in cases where the index object is valid and accessible at the start of a load operation (it is, for example, not valid immediately following a load operation in which the DEFERRED mode was specified). If this mode is specified, but not supported due to the state of the index, a warning is returned, and the load operation continues in REBUILD mode. Similarly, if a load restart operation is begun in the load build phase, INCREMENTAL mode is not supported. Incremental indexing is not supported when all of the following conditions are true: v The LOAD COPY option is specified (logarchmeth1 with the USEREXIT or LOGRETAIN option). v The table resides in a DMS table space. v The index object resides in a table space that is shared by other table objects belonging to the table being loaded. To bypass this restriction, it is recommended that indexes be placed in a separate table space. DEFERRED The load utility will not attempt index creation if this mode is specified. Indexes will be marked as needing a refresh. The first access to such indexes that is unrelated to a load operation might force a rebuild, or indexes might be rebuilt when the database is restarted. This approach requires enough sort space for all key parts for the largest index. 
The total time subsequently taken for index construction is longer than that required in REBUILD mode. Therefore, when performing multiple load operations with deferred indexing, it is advisable (from a performance viewpoint) to let the last load operation in the sequence perform an index rebuild, rather than allow indexes to be rebuilt at first non-load access. Deferred indexing is only supported for tables with non-unique indexes, so that duplicate keys inserted during the load phase are not persistent after the load operation. ALLOW NO ACCESS Load will lock the target table for exclusive access during the load. The table state will be set to Load In Progress during the load. ALLOW NO ACCESS is the default behavior. It is the only valid option for LOAD REPLACE.
When there are constraints on the table, the table state will be set to Set Integrity Pending as well as Load In Progress. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending state. ALLOW READ ACCESS Load will lock the target table in share mode. The table state will be set to both Load In Progress and Read Access. Readers can access the non-delta portion of the data while the table is being loaded. In other words, data that existed before the start of the load will be accessible to readers of the table; data that is being loaded is not available until the load is complete. LOAD TERMINATE or LOAD RESTART of an ALLOW READ ACCESS load can use this option; LOAD TERMINATE or LOAD RESTART of an ALLOW NO ACCESS load cannot use this option. Furthermore, this option is not valid if the indexes on the target table are marked as requiring a rebuild. When there are constraints on the table, the table state will be set to Set Integrity Pending as well as Load In Progress, and Read Access. At the end of the load, the table state Load In Progress will be removed but the table states Set Integrity Pending and Read Access will remain. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending. While the table is in Set Integrity Pending and Read Access states, the non-delta portion of the data is still accessible to readers; the new (delta) portion of the data will remain inaccessible until the SET INTEGRITY statement has completed. A user can perform multiple loads on the same table without issuing a SET INTEGRITY statement. Only the original (checked) data will remain visible, however, until the SET INTEGRITY statement is issued. ALLOW READ ACCESS also supports the following modifiers: USE tablespace-name If the indexes are being rebuilt, a shadow copy of the index is built in table space tablespace-name and copied over to the original table space at the end of the load during an INDEX COPY PHASE.
Only system temporary table spaces can be used with this option. If not specified then the shadow index will be created in the same table space as the index object. If the shadow copy is created in the same table space as the index object, the copy of the shadow index object over the old index object is instantaneous. If the shadow copy is in a different table space from the index object a physical copy is performed. This could involve considerable I/O and time. The copy happens while the table is offline at the end of a load during the INDEX COPY PHASE. Without this option the shadow index is built in the same table space as the original. Since both the original index and shadow index by default reside in the same table space simultaneously, there might be insufficient space to hold both indexes within one table space. Using this option ensures that you retain enough table space for the indexes. This option is ignored if the user does not specify INDEXING MODE REBUILD or INDEXING MODE AUTOSELECT. This option will also be ignored if INDEXING MODE AUTOSELECT is chosen and load chooses to incrementally update the index. SET INTEGRITY PENDING CASCADE If LOAD puts the table into Set Integrity Pending state, the SET
INTEGRITY PENDING CASCADE option allows the user to specify whether or not the Set Integrity Pending state of the loaded table is immediately cascaded to all descendents (including descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables). IMMEDIATE Indicates that Set Integrity Pending state is immediately extended to all descendent foreign key tables, descendent immediate materialized query tables and descendent staging tables. For a LOAD INSERT operation, Set Integrity Pending state is not extended to descendent foreign key tables even if the IMMEDIATE option is specified. When the loaded table is later checked for constraint violations (using the IMMEDIATE CHECKED option of the SET INTEGRITY statement), descendent foreign key tables that were placed in Set Integrity Pending Read Access state will be put into Set Integrity Pending No Access state. DEFERRED Indicates that only the loaded table will be placed in the Set Integrity Pending state. The states of the descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables will remain unchanged. Descendent foreign key tables might later be implicitly placed in Set Integrity Pending state when their parent tables are checked for constraint violations (using the IMMEDIATE CHECKED option of the SET INTEGRITY statement). Descendent immediate materialized query tables and descendent immediate staging tables will be implicitly placed in Set Integrity Pending state when one of their underlying tables is checked for integrity violations. A warning (SQLSTATE 01586) will be issued to indicate that dependent tables have been placed in Set Integrity Pending state. See the Notes section of the SET INTEGRITY statement in the SQL Reference for when these descendent tables will be put into Set Integrity Pending state.
If the SET INTEGRITY PENDING CASCADE option is not specified: v Only the loaded table will be placed in Set Integrity Pending state. The state of descendent foreign key tables, descendent immediate materialized query tables and descendent immediate staging tables will remain unchanged, and can later be implicitly put into Set Integrity Pending state when the loaded table is checked for constraint violations. If LOAD does not put the target table into Set Integrity Pending state, the SET INTEGRITY PENDING CASCADE option is ignored. LOCK WITH FORCE The utility acquires various locks including table locks in the process of loading. Rather than wait, and possibly timeout, when acquiring a lock, this option allows load to force off other applications that hold conflicting locks on the target table. Applications holding conflicting locks on the system catalog tables will not be forced off by the load utility. Forced applications will roll back and release the locks the load utility needs. The load utility can then proceed. This option requires the same authority as the FORCE APPLICATIONS command (SYSADM or SYSCTRL).
ALLOW NO ACCESS loads might force applications holding conflicting locks at the start of the load operation. At the start of the load the utility can force applications that are attempting to either query or modify the table. ALLOW READ ACCESS loads can force applications holding conflicting locks at the start or end of the load operation. At the start of the load the load utility can force applications that are attempting to modify the table. At the end of the load operation, the load utility can force applications that are attempting to either query or modify the table. SOURCEUSEREXIT executable Specifies an executable filename which will be called to feed data into the utility. REDIRECT INPUT FROM BUFFER input-buffer The stream of bytes specified in input-buffer is passed into the STDIN file descriptor of the process executing the given executable. FILE input-file The contents of this client-side file are passed into the STDIN file descriptor of the process executing the given executable. OUTPUT TO FILE output-file The STDOUT and STDERR file descriptors are captured to the fully qualified server-side file specified. PARALLELIZE Increases the throughput of data coming into the load utility by invoking multiple user exit processes simultaneously. This option is only applicable in multi-partition database environments and is ignored in single-partition database environments. For more information, see Moving data using a customized application (user exit). PARTITIONED DB CONFIG Allows you to execute a load into a table distributed across multiple database partitions. The PARTITIONED DB CONFIG parameter allows you to specify partitioned database-specific configuration options. The partitioned-db-option values can be any of the following:
PART_FILE_LOCATION x
OUTPUT_DBPARTNUMS x
PARTITIONING_DBPARTNUMS x
MODE x
MAX_NUM_PART_AGENTS x
ISOLATE_PART_ERRS x
STATUS_INTERVAL x
PORT_RANGE x
CHECK_TRUNCATION
MAP_FILE_INPUT x
MAP_FILE_OUTPUT x
TRACE x
NEWLINE
DISTFILE x
OMIT_HEADER
RUN_STAT_DBPARTNUM x
Detailed descriptions of these options are provided in Load configuration options for partitioned database environments. RESTARTCOUNT Reserved. USING directory Reserved. Examples: Example 1 TABLE1 has 5 columns: v COL1 VARCHAR 20 NOT NULL WITH DEFAULT v COL2 SMALLINT v COL3 CHAR 4 v COL4 CHAR 2 NOT NULL WITH DEFAULT v COL5 CHAR 2 NOT NULL ASCFILE1 has 7 elements: v ELE1 positions 01 to 20 v ELE2 positions 21 to 22 v ELE5 positions 23 to 23 v ELE3 positions 24 to 27 v ELE4 positions 28 to 31 v ELE6 positions 32 to 32 v ELE7 positions 33 to 40 Data Records:
1...5....10...15...20...25...30...35...40
Test data 1         XXN 123abcdN
Test data 2 and 3   QQY    wxyzN
Test data 4,5 and 6 WWN6789    Y
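The notes below assume a load command along these lines (a sketch; the METHOD L positions and null indicator positions follow from the element list and notes in this example):

```shell
# Positional ASC load: COL1 from positions 1-20, COL5 from 21-22,
# COL2 from 24-27, COL3 from 28-31; null indicators at positions
# 23 (COL2) and 32 (COL3), with 0 meaning "not nullable".
db2 "LOAD FROM ascfile1 OF ASC MODIFIED BY striptblanks reclen=40
     METHOD L (1 20, 21 22, 24 27, 28 31)
     NULL INDICATORS (0, 0, 23, 32)
     INSERT INTO table1 (col1, col5, col2, col3)"
```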
Notes: 1. The specification of striptblanks in the MODIFIED BY parameter forces the truncation of blanks in VARCHAR columns (COL1, for example, which is 11, 17 and 19 bytes long, in rows 1, 2 and 3, respectively). 2. The specification of reclen=40 in the MODIFIED BY parameter indicates that there is no new-line character at the end of each input record, and that each record is 40 bytes long. The last 8 bytes are not used to load the table. 3. Since COL4 is not provided in the input file, it will be inserted into TABLE1 with its default value (it is defined NOT NULL WITH DEFAULT).
4. Positions 23 and 32 are used to indicate whether COL2 and COL3 of TABLE1 will be loaded NULL for a given row. If there is a Y in the column's null indicator position for a given record, the column will be NULL. If there is an N, the data values in the column's data positions of the input record (as defined in L(........)) are used as the source of column data for the row. In this example, neither column in row 1 is NULL; COL2 in row 2 is NULL; and COL3 in row 3 is NULL. 5. In this example, the NULL INDICATORS for COL1 and COL5 are specified as 0 (zero), indicating that the data is not nullable. 6. The NULL INDICATOR for a given column can be anywhere in the input record, but the position must be specified, and the Y or N values must be supplied. Example 2 (Loading LOBs from Files) TABLE1 has 3 columns: v COL1 CHAR 4 NOT NULL WITH DEFAULT v LOB1 LOB v LOB2 LOB ASCFILE1 has 3 elements: v ELE1 positions 01 to 04 v ELE2 positions 06 to 13 v ELE3 positions 15 to 22 The following files reside in either /u/user1 or /u/user1/bin: v ASCFILE2 has LOB data v ASCFILE3 has LOB data v ASCFILE4 has LOB data v ASCFILE5 has LOB data v ASCFILE6 has LOB data v ASCFILE7 has LOB data
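The notes below assume a command along these lines (a sketch consistent with the element positions and LOB search paths listed above):

```shell
# LOB values are loaded from the files named in ASCFILE1; the two
# LOBS FROM paths are searched for those files as they are needed.
db2 "LOAD FROM ascfile1 OF ASC LOBS FROM /u/user1, /u/user1/bin
     MODIFIED BY lobsinfile reclen=22
     METHOD L (1 4, 6 13, 15 22)
     INSERT INTO table1"
```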
Notes: 1. The specification of lobsinfile in the MODIFIED BY parameter tells the loader that all LOB data is to be loaded from files. 2. The specification of reclen=22 in the MODIFIED BY parameter indicates that there is no new-line character at the end of each input record, and that each record is 22 bytes long.
3. LOB data is contained in 6 files, ASCFILE2 through ASCFILE7. Each file contains the data that will be used to load a LOB column for a specific row. The relationship between LOBs and other data is specified in ASCFILE1. The first record of this file tells the loader to place REC1 in COL1 of row 1. The contents of ASCFILE2 will be used to load LOB1 of row 1, and the contents of ASCFILE3 will be used to load LOB2 of row 1. Similarly, ASCFILE4 and ASCFILE5 will be used to load LOB1 and LOB2 of row 2, and ASCFILE6 and ASCFILE7 will be used to load the LOBs of row 3. 4. The LOBS FROM parameter contains 2 paths that will be searched for the named LOB files when those files are required by the loader. 5. To load LOBs directly from ASCFILE1 (a non-delimited ASCII file), without the lobsinfile modifier, the following rules must be observed: v The total length of any record, including LOBs, cannot exceed 32KB. v LOB fields in the input records must be of fixed length, and LOB data padded with blanks as necessary. v The striptblanks modifier must be specified, so that the trailing blanks used to pad LOBs can be removed as the LOBs are inserted into the database. Example 3 (Using Dump Files) Table FRIENDS is defined as:
table friends "( c1 INT NOT NULL, c2 INT, c3 CHAR(8) )"
If an attempt is made to load the following data records into this table,
23, 24, bobby
 , 45, john
4,, mary
the second row is rejected because the first INT is NULL, and the column definition specifies NOT NULL. Columns which contain initial characters that are not consistent with the DEL format will generate an error, and the record will be rejected. Such records can be written to a dump file. DEL data appearing in a column outside of character delimiters is ignored, but does generate a warning. For example:
22,34,"bob"
24,55,"sam" sdf
The utility will load sam in the third column of the table, and the characters sdf will be flagged in a warning. The record is not rejected. Another example:
22 3, 34,"bob"
The utility will load 22,34,"bob", and generate a warning that some data in column one following the 22 was ignored. The record is not rejected. Example 4 (Loading a Table with an Identity Column) TABLE1 has 4 columns: v C1 VARCHAR(30) v C2 INT GENERATED BY DEFAULT AS IDENTITY v C3 DECIMAL(7,2) v C4 CHAR(1)
TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column. Data records in DATAFILE1 (DEL format):
"Liszt"
"Hummel",,187.43, H
"Grieg",100, 66.34, G
"Satie",101, 818.23, I
Notes: 1. The following command generates identity values for rows 1 and 2, since no identity values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however, are assigned the user-supplied identity values of 100 and 101, respectively.
db2 load from datafile1.del of del replace into table1
2. To load DATAFILE1 into TABLE1 so that identity values are generated for all rows, issue one of the following commands:
db2 load from datafile1.del of del method P(1, 3, 4) replace into table1 (c1, c3, c4)
db2 load from datafile1.del of del modified by identityignore replace into table1
3. To load DATAFILE2 into TABLE1 so that identity values are generated for each row, issue one of the following commands:
db2 load from datafile2.del of del replace into table1 (c1, c3, c4)
db2 load from datafile2.del of del modified by identitymissing replace into table1
4. To load DATAFILE1 into TABLE2 so that the identity values of 100 and 101 are assigned to rows 3 and 4, issue the following command:
db2 load from datafile1.del of del modified by identityoverride replace into table2
In this case, rows 1 and 2 will be rejected, because the utility has been instructed to override system-generated identity values in favor of user-supplied values. If user-supplied values are not present, however, the row must be rejected, because identity columns are implicitly not NULL. 5. If DATAFILE1 is loaded into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 will be loaded, but rows 3 and 4 will be rejected, because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS. Example 5 (Loading using the CURSOR filetype) Table ABC.TABLE1 has 3 columns:
ONE INT
TWO CHAR(10)
THREE DATE
Executing the following commands will load all the data from ABC.TABLE1 into ABC.TABLE2:
db2 declare mycurs cursor for select two,one,three from abc.table1
db2 load from mycurs of cursor insert into abc.table2
If ABC.TABLE1 resides in a database different from the database ABC.TABLE2 is in, the DATABASE, USER, and USING options of the DECLARE CURSOR command can be used to perform the load. For example, if ABC.TABLE1 resides in database DB1, and the user ID and password for DB1 are user1 and pwd1 respectively, executing the following commands will load all the data from ABC.TABLE1 into ABC.TABLE2:
db2 declare mycurs cursor database DB1 user user1 using pwd1 for select two,one,three from abc.table1
db2 load from mycurs of cursor insert into abc.table2
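The access, indexing, and locking parameters described earlier combine in a single statement. The sequence below is a sketch of an online load followed by constraint checking; the table, file, and table space names are hypothetical:

```shell
# Online load: readers keep access to pre-existing rows; indexes are
# rebuilt as a shadow copy in a system temporary table space; and
# applications holding conflicting locks are forced off if necessary.
db2 "LOAD FROM new_rows.del OF DEL INSERT INTO appl.orders
     INDEXING MODE REBUILD
     ALLOW READ ACCESS USE TEMPSPACE1
     LOCK WITH FORCE"

# If constraints put the table into Set Integrity Pending state,
# SET INTEGRITY makes the newly loaded (delta) rows visible again.
db2 "SET INTEGRITY FOR appl.orders IMMEDIATE CHECKED"
```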
Usage notes: v Data is loaded in the sequence that appears in the input file. If a particular sequence is desired, the data should be sorted before a load is attempted. v The load utility builds indexes based on existing definitions. The exception tables are used to handle duplicates on unique keys. The utility does not enforce referential integrity, perform constraints checking, or update materialized query tables that are dependent on the tables being loaded. Tables that include referential or check constraints are placed in Set Integrity Pending state. Summary tables that are defined with REFRESH IMMEDIATE, and that are dependent on tables being loaded, are also placed in Set Integrity Pending state. Issue the SET INTEGRITY statement to take the tables out of Set Integrity Pending state. Load operations cannot be carried out on replicated materialized query tables. v If a clustering index exists on the table, the data should be sorted on the clustering index prior to loading. Data does not need to be sorted prior to loading into a multidimensional clustering (MDC) table, however. v If you specify an exception table when loading into a protected table, any rows that are protected by invalid security labels will be sent to that table. This might allow users that have access to the exception table to access data that they would not normally be authorized to access. For better security, be careful about who is granted access to the exception table; delete each row as soon as it is repaired and copied to the table being loaded, and drop the exception table as soon as you are done with it. v Security labels in their internal format might contain newline characters. If you load the file using the DEL file format, those newline characters can be mistaken for delimiters. If you have this problem, use the older default priority for delimiters by specifying the delprioritychar file type modifier in the LOAD command.
v When performing a load using the CURSOR file type where the DATABASE keyword was specified during the DECLARE CURSOR command, the user ID and password used to authenticate against the database currently connected to (for the load) will be used to authenticate against the source database (specified by the DATABASE option of the DECLARE CURSOR command). If no user ID or password was specified for the connection to the loading database, a user ID and password for the source database must be specified during the DECLARE CURSOR command. Related concepts: v Load overview in Data Movement Utilities Guide and Reference
v Privileges, authorities, and authorizations required to use Load in Data Movement Utilities Guide and Reference Related tasks: v Loading data in Data Movement Utilities Guide and Reference Related reference: v QUIESCE TABLESPACES FOR TABLE on page 615 v LOAD command using the ADMIN_CMD procedure in Administrative SQL Routines and Views v Load - CLP examples in Data Movement Utilities Guide and Reference v Load configuration options for partitioned database environments in Data Movement Utilities Guide and Reference
LOAD QUERY
Checks the status of a load operation during processing and returns the table state. If a load is not processing, the table state alone is returned. A connection to the same database and a separate CLP session are also required to successfully invoke this command. It can be used by either local or remote users. Authorization: None Required connection: Database Command syntax:
LOAD QUERY TABLE table-name TO local-message-file NOSUMMARY SUMMARYONLY
SHOWDELTA
Command parameters: NOSUMMARY Specifies that no load summary information (rows read, rows skipped, rows loaded, rows rejected, rows deleted, rows committed, and number of warnings) is to be reported. SHOWDELTA Specifies that only new information (pertaining to load events that have occurred since the last invocation of the LOAD QUERY command) is to be reported. SUMMARYONLY Specifies that only load summary information is to be reported. TABLE table-name Specifies the name of the table into which data is currently being loaded. If an unqualified table name is specified, the table will be qualified with the CURRENT SCHEMA. TO local-message-file Specifies the destination for warning and error messages that occur during the load operation. This file cannot be the message-file specified for the LOAD command. If the file already exists, all messages that the load utility has generated are appended to it. Examples: A user loading a large amount of data into the STAFF table wants to check the status of the load operation. The user can specify:
db2 connect to <database> db2 load query table staff to /u/mydir/staff.tempmsg
SQL3501W  The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N  The utility is beginning to load data from file
"/u/mydir/data/staffbig.del"

SQL3500W  The utility is beginning the "LOAD" phase at time
"03-21-2002 11:31:16.597045".

SQL3519W  Begin Load Consistency Point. Input record count = "0".

SQL3520W  Load Consistency Point was successful.

SQL3519W  Begin Load Consistency Point. Input record count = "104416".

SQL3520W  Load Consistency Point was successful.

SQL3519W  Begin Load Consistency Point. Input record count = "205757".

SQL3520W  Load Consistency Point was successful.

SQL3519W  Begin Load Consistency Point. Input record count = "307098".

SQL3520W  Load Consistency Point was successful.

SQL3519W  Begin Load Consistency Point. Input record count = "408439".

SQL3520W  Load Consistency Point was successful.

SQL3532I  The Load utility is currently in the "LOAD" phase.

Number of rows read       = 453376
Number of rows skipped    = 0
Number of rows loaded     = 453376
Number of rows rejected   = 0
Number of rows deleted    = 0
Number of rows committed  = 408439
Number of warnings        = 0
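When automating load monitoring, the summary counters at the end of this message output can be scraped from the local message file. A minimal sketch, assuming the counters appear as "Number of ... = N" lines exactly as shown above; the helper name is mine and not part of DB2:

```python
import re

def parse_load_summary(message_text):
    """Extract the 'Number of ...' counters that LOAD QUERY appends
    to the local message file (unless NOSUMMARY is specified)."""
    counters = {}
    # Lines look like: "Number of rows read       = 453376"
    for label, value in re.findall(
            r"Number of (rows \w+|warnings)\s*=\s*(\d+)", message_text):
        counters[label] = int(value)
    return counters

sample = """\
Number of rows read      = 453376
Number of rows skipped   = 0
Number of rows committed = 408439
Number of warnings       = 0
"""
print(parse_load_summary(sample)["rows read"])  # 453376
```

Comparing "rows committed" against "rows read" between successive LOAD QUERY invocations (or using SHOWDELTA) gives a simple progress indicator.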
Usage notes:

In addition to locks, the load utility uses table states to control access to the table. The LOAD QUERY command can be used to determine the table state; it can also be used on tables that are not currently being loaded. For a partitioned table, the state reported is the most restrictive of the corresponding visible data partition states. For example, if a single data partition is in the READ ACCESS state and all other data partitions are in NORMAL state, the load query operation returns the READ ACCESS state. A load operation will not leave a subset of data partitions in a state different from the rest of the table.

The table states described by LOAD QUERY are as follows:

Normal
   No table states affect the table.

Set Integrity Pending
   The table has constraints which have not yet been verified. Use the SET INTEGRITY statement to take the table out of Set Integrity Pending state. The load utility places a table in Set Integrity Pending state when it begins a load operation on a table with constraints.
Load in Progress
   There is a load operation in progress on this table.

Load Pending
   A load operation has been active on this table but was aborted before the data could be committed. Issue a LOAD TERMINATE, LOAD RESTART, or LOAD REPLACE command to bring the table out of this state.

Read Access Only
   The table data is available for read access queries. Load operations using the ALLOW READ ACCESS option place the table in read access only state.

Reorg Pending
   A reorg recommended ALTER TABLE statement has been executed on the table. A classic reorg must be performed before the table is accessible again.

Unavailable
   The table is unavailable. The table can only be dropped or restored from a backup. Rolling forward through a non-recoverable load operation will place a table in the unavailable state.

Not Load Restartable
   The table is in a partially loaded state that will not allow a load restart operation. The table will also be in load pending state. Issue a LOAD TERMINATE or a LOAD REPLACE command to bring the table out of the not load restartable state. A table is placed in not load restartable state when a rollforward operation is performed after a failed load operation that has not been successfully restarted or terminated, or when a restore operation is performed from an online backup that was taken while the table was in load in progress or load pending state. In either case, the information required for a load restart operation is unreliable, and the not load restartable state prevents a load restart operation from taking place.

Unknown
   The LOAD QUERY command is unable to determine the table state.

The progress of a load operation can also be monitored with the LIST UTILITIES command.
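For tooling that scripts around LOAD QUERY, the corrective actions above can be condensed into a lookup table. This dictionary is only a summary of the states documented here; it is illustrative and not a DB2 interface:

```python
# Corrective action per table state, summarized from the LOAD QUERY
# documentation above. Purely illustrative; not a DB2 API.
TABLE_STATE_ACTIONS = {
    "Normal": "No action required.",
    "Set Integrity Pending": "Issue SET INTEGRITY to verify constraints.",
    "Load in Progress": "Wait; a load operation is running.",
    "Load Pending": "Issue LOAD TERMINATE, LOAD RESTART, or LOAD REPLACE.",
    "Read Access Only": "Read queries are allowed until the load completes.",
    "Reorg Pending": "Perform a classic (offline) reorg.",
    "Unavailable": "Drop the table or restore it from a backup.",
    "Not Load Restartable": "Issue LOAD TERMINATE or LOAD REPLACE.",
    "Unknown": "State could not be determined.",
}

print(TABLE_STATE_ACTIONS["Load Pending"])
# Issue LOAD TERMINATE, LOAD RESTART, or LOAD REPLACE.
```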
Related concepts:
v Load overview in Data Movement Utilities Guide and Reference
v Monitoring a load operation in a partitioned database environment using the LOAD QUERY command in Data Movement Utilities Guide and Reference
v Table locking, table states and table space states in Data Movement Utilities Guide and Reference

Related reference:
v LIST UTILITIES on page 555
MIGRATE DATABASE
Converts previous versions of DB2 databases to the formats corresponding to the release run by the instance. The db2ckmig command must be issued prior to migrating the instance; the db2imigr command calls db2ckmig implicitly. Back up all databases prior to migration, and, on Windows operating systems, prior to the installation of the current version of the DB2 database product.

Authorization:
sysadm

Required connection:
This command establishes a database connection.

Command syntax:
MIGRATE {DATABASE | DB} database-alias [USER username [USING password]]
Command parameters:

DATABASE database-alias
   Specifies the alias of the database to be migrated to the currently installed version of the database manager.

USER username
   Identifies the user name under which the database is to be migrated.

USING password
   The password used to authenticate the user name. If the password is omitted, but a user name was specified, the user is prompted to enter it.

Examples:

The following example migrates the database cataloged under the database alias sales:
db2 migrate database sales
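Because USER and USING are optional clauses, and USING is only meaningful together with USER, a wrapper that assembles this CLP command has to append them conditionally. A hedged sketch; the helper name and the sample credentials in the test are illustrative, not part of DB2:

```python
def build_migrate_command(alias, user=None, password=None):
    """Assemble a 'db2 migrate database' CLP command string.

    Per the command syntax, USER and USING are optional, and USING
    only applies together with USER (the CLP prompts for the password
    if USER is given without USING)."""
    if password is not None and user is None:
        raise ValueError("USING requires USER")
    parts = ["db2", "migrate", "database", alias]
    if user is not None:
        parts += ["user", user]
        if password is not None:
            parts += ["using", password]
    return " ".join(parts)

print(build_migrate_command("sales"))  # db2 migrate database sales
```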
Usage notes:

This command will only migrate a database to a newer version, and cannot be used to convert a migrated database to its previous version. The database must be cataloged before migration. If an error occurs during migration, it might be necessary to issue the TERMINATE command before attempting the suggested user response. For example, if a log full error occurs during migration (SQL1704: Database migration failed. Reason code 3.), it will be necessary to issue the TERMINATE command before increasing the
values of the database configuration parameters LOGPRIMARY and LOGFILSIZ. The CLP must refresh its database directory cache if the migration failure occurs after the database has already been relocated (which is likely to be the case when a log full error returns).

Related tasks:
v Migrating databases in Migration Guide

Related reference:
v TERMINATE on page 744
v sqlemgdb API - Migrate previous version of DB2 database to current version in Administrative API Reference
PING
Tests the network response time of the underlying connectivity between a client and a connected database server.

Authorization:
None

Required connection:
Database

Command syntax:
PING db_alias [REQUEST packet_size] [RESPONSE packet_size]
   [number_of_times [TIME | TIMES]]
Command parameters:

db_alias
   Specifies the database alias for the database on a DRDA server that the ping is being sent to. This parameter, although mandatory, is not currently used. It is reserved for future use. Any valid database alias name can be specified.

REQUEST packet_size
   Specifies the size, in bytes, of the packet to be sent to the server. The size must be between 0 and 32767 inclusive. The default is 10 bytes. This option is only valid on servers running DB2 Database for Linux, UNIX, and Windows Version 8 or later, or DB2 UDB for z/OS Version 8 or later.

RESPONSE packet_size
   Specifies the size, in bytes, of the packet to be returned back to the client. The size must be between 0 and 32767 inclusive. The default is 10 bytes. This option is only valid on servers running DB2 Database for Linux, UNIX, and Windows Version 8 or later, or DB2 UDB for z/OS Version 8 or later.

number_of_times
   Specifies the number of iterations for this test. The value must be between 1 and 32767 inclusive. The default is 1. One timing will be returned for each iteration.

Examples:

Example 1
To test the network response time for the connection to the host database hostdb once:
Example 2
To test the network response time for the connection to the host database hostdb 5 times:
db2 ping hostdb 5
   or
db2 ping hostdb 5 times
Example 3
To test the network response time for a connection to the host database hostdb, with a 100-byte request packet and a 200-byte response packet:
db2 ping hostdb request 100 response 200
   or
db2 ping hostdb request 100 response 200 1 time
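The documented ranges (packet sizes 0 to 32767 bytes with a default of 10; iteration counts 1 to 32767 with a default of 1) can be validated before shelling out to the CLP. A sketch; the function is illustrative and not a DB2 API:

```python
def build_ping_command(db_alias, request=10, response=10, times=1):
    """Assemble a 'db2 ping' command, enforcing the documented ranges:
    packet sizes 0-32767 bytes (default 10), iteration count 1-32767
    (default 1, one timing returned per iteration)."""
    if not 0 <= request <= 32767:
        raise ValueError("REQUEST packet size must be 0-32767")
    if not 0 <= response <= 32767:
        raise ValueError("RESPONSE packet size must be 0-32767")
    if not 1 <= times <= 32767:
        raise ValueError("iteration count must be 1-32767")
    return (f"db2 ping {db_alias} request {request} "
            f"response {response} {times} times")

print(build_ping_command("hostdb", 100, 200, 5))
# db2 ping hostdb request 100 response 200 5 times
```

The CLP also accepts the singular keyword TIME, as shown in Example 3 above; this sketch always emits TIMES for simplicity.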
Usage notes:

A database connection must exist before invoking this command; otherwise an error will result. The elapsed time returned is for the connection between the DB2 client and the DB2 server. This command will not work when it is used from a DB2 Universal Database Version 7 client through DB2 Connect Version 8 to a connected DB2 host database server.

Related reference:
v db2DatabasePing API - Ping the database to test network response time in Administrative API Reference
PRECOMPILE
Processes an application program source file containing embedded SQL statements. A modified source file is produced, containing host language calls for the SQL statements and, by default, a package is created in the database.

Scope:

In a partitioned database environment, this command can be issued from any database partition server defined in the db2nodes.cfg file. It updates the database catalogs on the catalog database partition. Its effects are visible to all database partitions.

Authorization:

One of the following:
v sysadm or dbadm authority
v BINDADD privilege if a package does not exist, and one of:
   IMPLICIT_SCHEMA authority on the database if the schema name of the package does not exist
   CREATEIN privilege on the schema if the schema name of the package exists
v ALTERIN privilege on the schema if the package exists
v BIND privilege on the package if it exists.

The user also needs all privileges required to compile any static SQL statements in the application. Privileges granted to groups are not used for authorization checking of static statements. If the user has sysadm authority, but not explicit privileges to complete the bind, the database manager grants explicit dbadm authority automatically.

Required connection:

Database. If implicit connect is enabled, a connection to the default database is established.

Command syntax:

For DB2 for Windows and UNIX
PRECOMPILE | PREP filename
   [ACTION {ADD | REPLACE [RETAIN {NO | YES}] [REPLVER version-id]}]
   [BLOCKING {UNAMBIG | ALL | NO}]
   [CALL_RESOLUTION {IMMEDIATE | DEFERRED}]
   [COLLECTION schema-name]
   [CONNECT {1 | 2}]
   [DATETIME {DEF | EUR | ISO | JIS | LOC | USA}]
   [DEFERRED_PREPARE {NO | ALL | YES}]
   [DEGREE {1 | degree-of-parallelism | ANY}]
   [DISCONNECT {AUTOMATIC | CONDITIONAL | EXPLICIT}]
   [DYNAMICRULES {RUN | BIND | DEFINERUN | DEFINEBIND | INVOKERUN | INVOKEBIND}]
   [EXPLAIN {NO | YES | REOPT | ALL}]
   [EXPLSNAP {NO | YES | REOPT | ALL}]
   [FEDERATED {NO | YES}]
   [FUNCPATH schema-name[, schema-name]...]
   [GENERIC "string"]
   [INSERT {DEF | BUF}]
   [ISOLATION {CS | RR | RS | UR}]
   [LANGLEVEL ...]
   [LEVEL consistency-token]
   [MESSAGES message-file]
   [NOLINEMACRO]
   [OPTLEVEL {0 | 1}]
   [OUTPUT filename]
   [OWNER authorization-id]
   [PREPROCESSOR ...]
   [QUALIFIER qualifier-name]
   [QUERYOPT optimization-level]
   [SQLCA ...]
   [SQLRULES {DB2 | STD}]
   [SQLWARN {NO | YES}]
   [STATICREADONLY {NO | YES}]
   [SYNCPOINT ...]
   [SYNTAX]
   [TARGET ...]
   [VALIDATE {BIND | RUN}]
   [WCHARTYPE {NOCONVERT | CONVERT}]
   [VERSION {version-id | AUTO}]

Notes:
1. NO is the default for 32-bit systems and for 64-bit NT systems where long host variables can be used as declarations for INTEGER columns. YES is the default for 64-bit UNIX systems.
2. SYNTAX is a synonym for SQLERROR(CHECK).
For DB2 Database on host servers

PRECOMPILE | PREP filename
   [ACTION {ADD | REPLACE [RETAIN {NO | YES}] [REPLVER version-id]}]
   [BLOCKING {UNAMBIG | ALL | NO}]
   [CALL_RESOLUTION {IMMEDIATE | DEFERRED}]
   [CCSIDG double-ccsid]
   [CCSIDM mixed-ccsid]
   [CCSIDS sbcs-ccsid]
   [CHARSUB {DEFAULT | BIT | MIXED | SBCS}]
   [CNULREQD {YES | NO}]
   [COLLECTION schema-name]
   [COMPILE | PRECOMPILE]
   [CONNECT {1 | 2}]
   [DATETIME {DEF | EUR | ISO | JIS | LOC | USA}]  (see note 1)
   [DBPROTOCOL {DRDA | PRIVATE}]
   [DEC {15 | 31}]
   [DECDEL {PERIOD | COMMA}]
   [DEFERRED_PREPARE {NO | ALL | YES}]
   [DEGREE {1 | degree-of-parallelism | ANY}]  (see note 2)
   [DISCONNECT {AUTOMATIC | CONDITIONAL | EXPLICIT}]
   [DYNAMICRULES {RUN | BIND | DEFINERUN | DEFINEBIND | INVOKERUN | INVOKEBIND}]
   [ENCODING ...]
   [EXPLAIN {NO | YES}]
   [GENERIC "string"]
   [IMMEDWRITE {NO | YES | PH1}]
   [ISOLATION {CS | NC | RR | RS | UR}]
   [KEEPDYNAMIC {YES | NO}]
   [LEVEL consistency-token]
   [LONGERROR {NO | YES}]  (see note 3)
   [MESSAGES message-file]
   [NOLINEMACRO]
   [OPTHINT hint-id]
   [OPTLEVEL {0 | 1}]
   [OS400NAMING {SYSTEM | SQL}]
   [OWNER authorization-id]
   [PREPROCESSOR ...]
   [QUALIFIER qualifier-name]
   [RELEASE ...]
   [SQLFLAG ...]
   [SQLRULES {DB2 | STD}]
   [SQLERROR ...]
   [STRDEL {APOSTROPHE | QUOTE}]
   [SYNCPOINT ...]
   [SYNTAX]
   [TARGET ...]
   [TEXT label]
   [VERSION {version-id | AUTO}]
   [VALIDATE {BIND | RUN}]
   [WCHARTYPE {NOCONVERT | CONVERT}]

Notes:
1. If the server does not support the DATETIME DEF option, it is mapped to DATETIME ISO.
2. The DEGREE option is only supported by DRDA Level 2 Application Servers.
3. NO is the default for 32-bit systems and for 64-bit NT systems where long host variables can be used as declarations for INTEGER columns. YES is the default for 64-bit UNIX systems.
Command parameters:

filename
   Specifies the source file to be precompiled. An extension of:
   v .sqc must be specified for C applications (generates a .c file)
   v .sqx (Windows operating systems) or .sqC (UNIX based systems) must be specified for C++ applications (generates a .cxx file on Windows operating systems, or a .C file on UNIX based systems)
   v .sqb must be specified for COBOL applications (generates a .cbl file)
   v .sqf must be specified for FORTRAN applications (generates a .for file on Windows operating systems, or a .f file on UNIX based systems).

   The preferred extension for C++ applications containing embedded SQL on UNIX based systems is sqC; however, the sqx convention, which was invented for systems that are not case sensitive, is tolerated by UNIX based systems.

ACTION
   Indicates whether the package can be added or replaced.

   ADD
      Indicates that the named package does not exist, and that a new package is to be created. If the package already exists, execution stops, and a diagnostic error message is returned.
   REPLACE
      Indicates that the existing package is to be replaced by a new one with the same package name and creator. This is the default value for the ACTION option.

      RETAIN
         Indicates whether EXECUTE authorities are to be preserved when a package is replaced. If ownership of the package changes, the new owner grants the BIND and EXECUTE authority to the previous package owner.

         NO
            Does not preserve EXECUTE authorities when a package is replaced. This value is not supported by DB2.

         YES
            Preserves EXECUTE authorities when a package is replaced. This is the default value.

      REPLVER version-id
         Replaces a specific version of a package. The version identifier specifies which version of the package is to be replaced. If the specified version does not exist, an error is returned. If the REPLVER option of REPLACE is not specified, and a package already exists that matches the package name and version of the package being precompiled, that package will be replaced; if not, a new package will be added.

BINDFILE
   Results in the creation of a bind file. A package is not created unless the package option is also specified. If a bind file is requested, but no package is to be created, as in the following example:
db2 prep sample.sqc bindfile
   object existence and authentication SQLCODEs will be treated as warnings instead of errors. This will allow a bind file to be successfully created, even if the database being used for precompilation does not have all of the objects referred to in static SQL statements within the application. The bind file can be successfully bound, creating a package, once the required objects have been created.

   USING bind-file
      The name of the bind file that is to be generated by the precompiler. The file name must have an extension of .bnd. If a file name is not entered, the precompiler uses the name of the program (entered as the filename parameter), and adds the .bnd extension. If a path is not provided, the bind file is created in the current directory.

BLOCKING
   Specifies the type of row blocking for cursors.

   ALL
      Specifies to block for:
      v Read-only cursors
      v Cursors not specified as FOR UPDATE OF
      Ambiguous cursors are treated as read-only.

   NO
      Specifies not to block any cursors. Ambiguous cursors are treated as updatable.

   UNAMBIG
      Specifies to block for:
      v Read-only cursors
      v Cursors not specified as FOR UPDATE OF
      Ambiguous cursors are treated as updatable.

CALL_RESOLUTION
   If set, the CALL_RESOLUTION DEFERRED option indicates that the CALL statement will be executed as an invocation of the deprecated sqleproc() API. If not set, or if IMMEDIATE is set, the CALL statement will be executed as a normal SQL statement. SQL0204 will be issued if the precompiler fails to resolve the procedure on a CALL statement with CALL_RESOLUTION IMMEDIATE.

CCSIDG double-ccsid
   An integer specifying the coded character set identifier (CCSID) to be used for double byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements. This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified.

CCSIDM mixed-ccsid
   An integer specifying the coded character set identifier (CCSID) to be used for mixed byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements. This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified.
CCSIDS sbcs-ccsid
   An integer specifying the coded character set identifier (CCSID) to be used for single byte characters in character column definitions (without a specific CCSID clause) in CREATE and ALTER TABLE SQL statements. This option is not supported by DB2 Database for Linux, UNIX, and Windows. The DRDA server will use a system defined default value if this option is not specified.

CHARSUB
   Designates the default character sub-type that is to be used for column definitions in CREATE and ALTER TABLE SQL statements. This DRDA precompile/bind option is not supported by DB2.

   BIT
      Use the FOR BIT DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.

   DEFAULT
      Use the target system defined default in all new character columns for which an explicit sub-type is not specified.

   MIXED
      Use the FOR MIXED DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.

   SBCS
      Use the FOR SBCS DATA SQL character sub-type in all new character columns for which an explicit sub-type is not specified.

CNULREQD
   This option is related to the langlevel precompile option, which is not supported by DRDA. It is valid only if the bind file is created from a C or a C++ application. This DRDA bind option is not supported by DB2.

   NO
      The application was coded on the basis of the langlevel SAA1 precompile option with respect to the null terminator in C string host variables.

   YES
      The application was coded on the basis of the langlevel MIA precompile option with respect to the null terminator in C string host variables.
COLLECTION schema-name
   Specifies a 30-character collection identifier for the package. If not specified, the authorization identifier for the user processing the package is used.

CONNECT
   1
      Specifies that a CONNECT statement is to be processed as a type 1 CONNECT.
   2
      Specifies that a CONNECT statement is to be processed as a type 2 CONNECT.

DATETIME
   Specifies the date and time format to be used.

   DEF
      Use a date and time format associated with the territory code of the database.
   EUR
      Use the IBM standard for Europe date and time format.
   ISO
      Use the date and time format of the International Standards Organization.
   JIS
      Use the date and time format of the Japanese Industrial Standard.
   LOC
      Use the date and time format in local form associated with the territory code of the database.
   USA
      Use the IBM standard for U.S. date and time format.
DBPROTOCOL
   Specifies what protocol to use when connecting to a remote site that is identified by a three-part name statement. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.

DEC
   Specifies the maximum precision to be used in decimal arithmetic operations. This DRDA precompile/bind option is not supported by DB2. The DRDA server will use a system defined default value if this option is not specified.

   15
      15-digit precision is used in decimal arithmetic operations.
   31
      31-digit precision is used in decimal arithmetic operations.

DECDEL
   Designates whether a period (.) or a comma (,) will be used as the decimal point indicator in decimal and floating point literals. This DRDA precompile/bind option is not supported by DB2. The DRDA server will use a system defined default value if this option is not specified.

   COMMA
      Use a comma (,) as the decimal point indicator.
   PERIOD
      Use a period (.) as the decimal point indicator.

DEFERRED_PREPARE
   Provides a performance enhancement when accessing DB2 common server databases or DRDA databases. This option combines the SQL PREPARE statement flow with the associated OPEN, DESCRIBE, or EXECUTE statement flow to minimize inter-process or network flow.

   NO
      The PREPARE statement will be executed at the time it is issued.

   YES
      Execution of the PREPARE statement will be deferred until the corresponding OPEN, DESCRIBE, or EXECUTE statement is issued. The PREPARE statement will not be deferred if it uses the INTO clause, which requires an SQLDA to be returned immediately. However, if the PREPARE INTO statement is issued for a cursor that does not use any parameter markers, the processing will be optimized by pre-OPENing the cursor when the PREPARE is executed.

   ALL
      Same as YES, except that a PREPARE INTO statement is also deferred. If the PREPARE statement uses the INTO clause to return an SQLDA, the application must not reference the content of this SQLDA until the OPEN, DESCRIBE, or EXECUTE statement is issued and returned.
DEGREE
   Specifies the degree of parallelism for the execution of static SQL statements in an SMP system. This option does not affect CREATE INDEX parallelism.
   1
      The execution of the statement will not use parallelism.
   degree-of-parallelism
      Specifies the degree of parallelism with which the statement can be executed, a value between 2 and 32767 (inclusive).
   ANY
      Specifies that the execution of the statement can involve parallelism using a degree determined by the database manager.

DISCONNECT
   AUTOMATIC
      Specifies that all database connections are to be disconnected at commit.
   CONDITIONAL
      Specifies that the database connections that have been marked RELEASE or have no open WITH HOLD cursors are to be disconnected at commit.
   EXPLICIT
      Specifies that only database connections that have been explicitly marked for release by the RELEASE statement are to be disconnected at commit.

DYNAMICRULES
   Defines which rules apply to dynamic SQL at run time for the initial setting of the values used for authorization ID and for the implicit qualification of unqualified object references.

   RUN
      Specifies that the authorization ID of the user executing the package is to be used for authorization checking of dynamic SQL statements. The authorization ID will also be used as the default package qualifier for implicit qualification of unqualified object references within dynamic SQL statements. This is the default value.

   BIND
      Specifies that all of the rules that apply to static SQL for authorization and qualification are to be used at run time. That is, the authorization ID of the package owner is to be used for authorization checking of dynamic SQL statements, and the default package qualifier is to be used for implicit qualification of unqualified object references within dynamic SQL statements.

   DEFINERUN
      If the package is used within a routine context, the authorization ID of the routine definer is to be used for authorization checking and for implicit qualification of unqualified object references within dynamic SQL statements within the routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES RUN.

   DEFINEBIND
      If the package is used within a routine context, the authorization ID of the routine definer is to be used for authorization checking and for implicit qualification of unqualified object references within dynamic SQL statements within the routine.
      If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES BIND.

   INVOKERUN
      If the package is used within a routine context, the current statement authorization ID in effect when the routine is invoked is to be used for authorization checking of dynamic SQL statements and for implicit qualification of unqualified object references within dynamic SQL statements within that routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES RUN.

   INVOKEBIND
      If the package is used within a routine context, the current statement authorization ID in effect when the routine is invoked is to be used for authorization checking of dynamic SQL statements and for implicit qualification of unqualified object references within dynamic SQL statements within that routine. If the package is used as a standalone application, dynamic SQL statements are processed as if the package were bound with DYNAMICRULES BIND.

   Because dynamic SQL statements will be using the authorization ID of the package owner in a package exhibiting bind behavior, the binder of the package should not have any authorities granted to them that the user of the package should not receive. Similarly, when defining a routine that will exhibit define behavior, the definer of the routine should not have any authorities granted to them that the user of the package should not receive, since a dynamic statement will be using the authorization ID of the routine's definer.

   The following dynamically prepared SQL statements cannot be used within a package that was not bound with DYNAMICRULES RUN: GRANT, REVOKE, ALTER, CREATE, DROP, COMMENT ON, RENAME, SET INTEGRITY, and SET EVENT MONITOR STATE.

ENCODING
   Specifies the encoding for all host variables in static statements in the plan or package. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.

EXPLAIN
   Stores information in the Explain tables about the access plans chosen for each SQL statement in the package. DRDA does not support the ALL value for this option.

   NO
      Explain information will not be captured.

   YES
      Explain tables will be populated with information about the chosen access plan at prep/bind time for static statements and at run time for incremental bind statements. If the package is to be used for a routine and the package contains incremental bind statements, then the routine must be defined as MODIFIES SQL DATA. If this is not done, incremental bind statements in the package will cause a run time error (SQLSTATE 42985).
   REOPT
      Explain information for each reoptimizable incremental bind SQL statement will be placed in the Explain tables at run time. In addition, Explain information will be gathered for reoptimizable dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE special register is set to NO. If the package is to be used for a routine, then the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).

   ALL
      Explain information for each eligible static SQL statement will be placed in the Explain tables at prep/bind time. Explain information for each eligible incremental bind SQL statement will be placed in the Explain tables at run time. In addition, Explain information will be gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE special register is set to NO. If the package is to be used for a routine, then the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).

EXPLSNAP
   Stores Explain Snapshot information in the Explain tables. This DB2 precompile/bind option is not supported by DRDA.

   NO
      An Explain Snapshot will not be captured.

   YES
      An Explain Snapshot for each eligible static SQL statement will be placed in the Explain tables at prep/bind time for static statements and at run time for incremental bind statements. If the package is to be used for a routine and the package contains incremental bind statements, then the routine must be defined as MODIFIES SQL DATA, or incremental bind statements in the package will cause a run time error (SQLSTATE 42985).

   REOPT
      Explain Snapshot information for each reoptimizable incremental bind SQL statement will be placed in the Explain tables at run time. In addition, Explain Snapshot information will be gathered for reoptimizable dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT special register is set to NO. If the package is to be used for a routine, then the routine must be defined as MODIFIES SQL DATA, otherwise incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).

   ALL
      An Explain Snapshot for each eligible static SQL statement will be placed in the Explain tables at prep/bind time. Explain Snapshot information for each eligible incremental bind SQL statement will be placed in the Explain tables at run time. In addition, Explain Snapshot information will be gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT special register is set to NO.
      If the package is to be used for a routine, then the routine must be defined as MODIFIES SQL DATA, or incremental bind and dynamic statements in the package will cause a run time error (SQLSTATE 42985).

FEDERATED
   Specifies whether a static SQL statement in a package references a nickname or a federated view. If this option is not specified and a static SQL statement in the package references a nickname or a federated view, a warning is returned and the package is created. This option is not supported by DRDA servers.

   NO
      A nickname or federated view is not referenced in the static SQL statements of the package. If a nickname or federated view is encountered in a static SQL statement during the prepare or bind phase of this package, an error is returned and the package is not created.

   YES
      A nickname or federated view can be referenced in the static SQL statements of the package. If no nicknames or federated views are encountered in static SQL statements during the prepare or bind of the package, no errors or warnings are returned and the package is created.
FUNCPATH
   Specifies the function path to be used in resolving user-defined distinct types and functions in static SQL. If this option is not specified, the default function path is SYSIBM,SYSFUN,USER, where USER is the value of the USER special register. This DB2 precompile/bind option is not supported by DRDA.

   schema-name
      An SQL identifier, either ordinary or delimited, which identifies a schema that exists at the application server. No validation that the schema exists is made at precompile or at bind time. The same schema cannot appear more than once in the function path. The number of schemas that can be specified is limited by the length of the resulting function path, which cannot exceed 254 bytes. The schema SYSIBM does not need to be explicitly specified; it is implicitly assumed to be the first schema if it is not included in the function path.

INSERT
   Allows a program being precompiled or bound against a DB2 Enterprise Server Edition server to request that data inserts be buffered to increase performance.

   BUF
      Specifies that inserts from an application should be buffered.
   DEF
      Specifies that inserts from an application should not be buffered.
GENERIC string
   Supports the binding of new options that are defined in the target database, but are not supported by DRDA. Do not use this option to pass bind options that are defined in BIND or PRECOMPILE. This option can substantially improve dynamic SQL performance. The syntax is as follows:
generic "option1 value1 option2 value2 ..."
PRECOMPILE
Each option and value must be separated by one or more blank spaces. For example, if the target DRDA database is DB2 Universal Database, Version 8, one could use:
generic "explsnap all queryopt 3 federated yes"
to bind each of the EXPLSNAP, QUERYOPT, and FEDERATED options. The maximum length of the string is 1023 bytes.
IMMEDWRITE Indicates whether immediate writes will be done for updates made to group buffer pool dependent pagesets or database partitions. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
ISOLATION Determines how far a program bound to this package can be isolated from the effect of other executing programs.
CS Specifies Cursor Stability as the isolation level.
NC No Commit. Specifies that commitment control is not to be used. This isolation level is not supported by DB2.
RR Specifies Repeatable Read as the isolation level.
RS Specifies Read Stability as the isolation level. Read Stability ensures that the execution of SQL statements in the package is isolated from other application processes for rows read and changed by the application.
UR Specifies Uncommitted Read as the isolation level.
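For example, a hedged sketch (file name hypothetical) of selecting an isolation level at precompile time:

```shell
# Hypothetical sketch: bind the package with Repeatable Read isolation.
# Assumes a CLP session with an active database connection.
db2 prep app.sqc isolation rr
```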
LANGLEVEL Specifies the SQL rules that apply for both the syntax and the semantics for both static and dynamic SQL in the application. This option is not supported by DRDA servers. MIA Select the ISO/ANS SQL92 rules as follows: v To support error SQLCODE or SQLSTATE checking, an SQLCA must be declared in the application code. v C null-terminated strings are padded with blanks and always include a null-terminating character, even if truncation occurs. v The FOR UPDATE clause is optional for all columns to be updated in a positioned UPDATE. v A searched UPDATE or DELETE requires SELECT privilege on the object table of the UPDATE or DELETE statement if a column of the object table is referenced in the search condition or on the right hand side of the assignment clause. v A column function that can be resolved using an index (for example MIN or MAX) will also check for nulls and return warning SQLSTATE 01003 if there were any nulls. v An error is returned when a duplicate unique constraint is included in a CREATE or ALTER TABLE statement. v An error is returned when no privilege is granted and the grantor has no privileges on the object (otherwise a warning is returned).
Command Reference
SAA1 Select the common IBM DB2 rules as follows:
v To support error SQLCODE or SQLSTATE checking, an SQLCA must be declared in the application code.
v C null-terminated strings are not terminated with a null character if truncation occurs.
v The FOR UPDATE clause is required for all columns to be updated in a positioned UPDATE.
v A searched UPDATE or DELETE will not require SELECT privilege on the object table of the UPDATE or DELETE statement unless a fullselect in the statement references the object table.
v A column function that can be resolved using an index (for example MIN or MAX) will not check for nulls and warning SQLSTATE 01003 is not returned.
v A warning is returned and the duplicate unique constraint is ignored.
v An error is returned when no privilege is granted.
SQL92E Defines the ISO/ANS SQL92 rules as follows:
v To support checking of SQLCODE or SQLSTATE values, variables by these names can be declared in the host variable declare section (if neither is declared, SQLCODE is assumed during precompilation).
v C null-terminated strings are padded with blanks and always include a null-terminating character, even if truncation occurs.
v The FOR UPDATE clause is optional for all columns to be updated in a positioned UPDATE.
v A searched UPDATE or DELETE requires SELECT privilege on the object table of the UPDATE or DELETE statement if a column of the object table is referenced in the search condition or on the right hand side of the assignment clause.
v A column function that can be resolved using an index (for example MIN or MAX) will also check for nulls and return warning SQLSTATE 01003 if there were any nulls.
v An error is returned when a duplicate unique constraint is included in a CREATE or ALTER TABLE statement.
v An error is returned when no privilege is granted and the grantor has no privileges on the object (otherwise a warning is returned).
KEEPDYNAMIC Specifies whether dynamic SQL statements are to be kept after commit points. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
LEVEL consistency-token Defines the level of a module using the consistency token. The consistency token is any alphanumeric value up to 8 characters in length. The RDB package consistency token verifies that the requester's application and the relational database package are synchronized. This option is not recommended for general use.
LONGERROR Indicates whether long host variable declarations will be treated as an
Chapter 3. CLP Commands
error. For portability, sqlint32 can be used as a declaration for an INTEGER column in precompiled C and C++ code.
NO Does not generate errors for the use of long host variable declarations. This is the default for 32-bit systems and for 64-bit NT systems, where long host variables can be used as declarations for INTEGER columns. The use of this option on 64-bit UNIX platforms will allow long host variables to be used as declarations for BIGINT columns.
YES Generates errors for the use of long host variable declarations. This is the default for 64-bit UNIX systems.
MESSAGES message-file Specifies the destination for warning, error, and completion status messages. A message file is created whether the bind is successful or not. If a message file name is not specified, the messages are written to standard output. If the complete path to the file is not specified, the current directory is used. If the name of an existing file is specified, the contents of the file are overwritten.
NOLINEMACRO Suppresses the generation of the #line macros in the output .c file. Useful when the file is used with development tools which require source line information, such as profilers, cross-reference utilities, and debuggers. This precompile option is used for the C/C++ programming languages only.
OPTHINT Controls whether query optimization hints are used for static SQL. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
OPTLEVEL Indicates whether the C/C++ precompiler is to optimize initialization of internal SQLDAs when host variables are used in SQL statements. Such optimization can increase performance when a single SQL statement (such as FETCH) is used inside a tight loop.
0 Instructs the precompiler not to optimize SQLDA initialization.
1 Instructs the precompiler to optimize SQLDA initialization. This value should not be specified if the application uses:
v pointer host variables, as in the following example:
exec sql begin declare section;
char (*name)[20];
short *id;
exec sql end declare section;
v C++ data members directly in SQL statements. OUTPUT filename Overrides the default name of the modified source file produced by the compiler. It can include a path. OS400NAMING Specifies which naming option is to be used when accessing DB2 UDB for iSeries data. Supported by DB2 UDB for iSeries only. For a list of supported option values, refer to the documentation for DB2 for iSeries. Because of the slashes used as separators, a DB2 utility can still report a syntax error at execution time on certain SQL statements which use the iSeries system naming convention, even though the utility might have been
precompiled or bound with the OS400NAMING SYSTEM option. For example, the Command Line Processor will report a syntax error on an SQL CALL statement if the iSeries system naming convention is used, whether or not it has been precompiled or bound using the OS400NAMING SYSTEM option. OWNER authorization-id Designates a 30-character authorization identifier for the package owner. The owner must have the privileges required to execute the SQL statements contained in the package. Only a user with SYSADM or DBADM authority can specify an authorization identifier other than the user ID. The default value is the primary authorization ID of the precompile/bind process. SYSIBM, SYSCAT, and SYSSTAT are not valid values for this option. PACKAGE Creates a package. If neither package, bindfile, nor syntax is specified, a package is created in the database by default. USING package-name The name of the package that is to be generated by the precompiler. If a name is not entered, the name of the application program source file (minus extension and folded to uppercase) is used. Maximum length is 8 characters. PREPROCESSOR preprocessor-command Specifies the preprocessor command that can be executed by the precompiler before it processes embedded SQL statements. The preprocessor command string (maximum length 1024 bytes) must be enclosed either by double or by single quotation marks. This option enables the use of macros within the declare section. A valid preprocessor command is one that can be issued from the command line to invoke the preprocessor without specifying a source file. For example,
xlc -P -DMYMACRO=0
QUALIFIER qualifier-name Provides a 30-character implicit qualifier for unqualified objects contained in the package. The default is the owner's authorization ID, whether or not owner is explicitly specified.
QUERYOPT optimization-level Indicates the desired level of optimization for all static SQL statements contained in the package. The default value is 5. The SET CURRENT QUERY OPTIMIZATION statement describes the complete range of optimization levels available. This DB2 precompile/bind option is not supported by DRDA.
RELEASE Indicates whether resources are released at each COMMIT point, or when the application terminates. This DRDA precompile/bind option is not supported by DB2.
COMMIT Release resources at each COMMIT point. Used for dynamic SQL statements.
DEALLOCATE Release resources only when the application terminates.
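A hedged sketch (file and qualifier names hypothetical) combining QUALIFIER and QUERYOPT on one invocation:

```shell
# Hypothetical sketch: qualify unqualified objects with PAYROLL and
# compile the package's static SQL at optimization level 7.
db2 prep app.sqc qualifier payroll queryopt 7
```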
REOPT Specifies whether to have DB2 optimize an access path using values for host variables, parameter markers, and special registers. Valid values are:
NONE The access path for a given SQL statement containing host variables, parameter markers, or special registers will not be optimized using real values for these variables. The default estimates for these variables will be used instead, and this plan is cached and used subsequently. This is the default behavior.
ONCE The access path for a given SQL statement will be optimized using the real values of the host variables, parameter markers, or special registers when the query is first executed. This plan is cached and used subsequently.
ALWAYS The access path for a given SQL statement will always be compiled and reoptimized using the values of the host variables, parameter markers, or special registers known at each execution time.
REOPT VARS / NOREOPT VARS These options have been replaced by REOPT ALWAYS and REOPT NONE; however, they are still supported for compatibility with previous releases. They specify whether to have DB2 determine an access path at run time using values for host variables, parameter markers, and special registers. Supported by DB2 for OS/390 only. For a list of supported option values, refer to the documentation for DB2 for OS/390.
SQLCA For FORTRAN applications only. This option is ignored if it is used with other languages.
NONE Specifies that the modified source code is not consistent with the SAA definition.
SAA Specifies that the modified source code is consistent with the SAA definition.
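Options such as REOPT are supplied on the same PREP invocation as the others above; a hedged sketch (file name hypothetical):

```shell
# Hypothetical sketch: reoptimize the access path at every execution,
# using the actual host variable, parameter marker, and special
# register values known at run time.
db2 prep app.sqc reopt always
```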
SQLERROR Indicates whether to create a package or a bind file if an error is encountered. CHECK Specifies that the target system performs all syntax and semantic checks on the SQL statements being bound. A package will not be created as part of this process. If, while binding, an existing package with the same name and version is encountered, the existing package is neither dropped nor replaced even if action replace was specified. CONTINUE Creates a package, even if errors occur when binding SQL statements. Those statements that failed to bind for authorization or existence reasons can be incrementally bound at execution time if VALIDATE RUN is also specified. Any attempt to execute them at run time generates an error (SQLCODE -525, SQLSTATE 51015). NOPACKAGE A package or a bind file is not created if an error is encountered.
SQLFLAG Identifies and reports on deviations from the SQL language syntax specified in this option. A bind file or a package is created only if the bindfile or the package option is specified, in addition to the sqlflag option. Local syntax checking is performed only if one of the following options is specified: v bindfile v package v sqlerror check v syntax If sqlflag is not specified, the flagger function is not invoked, and the bind file or the package is not affected. SQL92E SYNTAX The SQL statements will be checked against ANSI or ISO SQL92 Entry level SQL language format and syntax with the exception of syntax rules that would require access to the database catalog. Any deviation is reported in the precompiler listing. MVSDB2V23 SYNTAX The SQL statements will be checked against MVS DB2 Version 2.3 SQL language syntax. Any deviation from the syntax is reported in the precompiler listing. MVSDB2V31 SYNTAX The SQL statements will be checked against MVS DB2 Version 3.1 SQL language syntax. Any deviation from the syntax is reported in the precompiler listing. MVSDB2V41 SYNTAX The SQL statements will be checked against MVS DB2 Version 4.1 SQL language syntax. Any deviation from the syntax is reported in the precompiler listing. SORTSEQ Specifies which sort sequence table to use on the iSeries system. Supported by DB2 UDB for iSeries only. For a list of supported option values, refer to the documentation for DB2 for iSeries. SQLRULES Specifies: v Whether type 2 CONNECTs are to be processed according to the DB2 rules or the Standard (STD) rules based on ISO/ANS SQL92. v How a user or application can specify the format of LOB answer set columns. DB2 v Permits the SQL CONNECT statement to switch the current connection to another established (dormant) connection. v The user or application can specify the format of a LOB column only during the first fetch request. STD
v Permits the SQL CONNECT statement to establish a new connection only. The SQL SET CONNECTION statement must be used to switch to a dormant connection.
v The user or application can change the format of a LOB column with each fetch request.
SQLWARN Indicates whether warnings will be returned from the compilation of dynamic SQL statements (via PREPARE or EXECUTE IMMEDIATE), or from describe processing (via PREPARE...INTO or DESCRIBE).
NO Warnings will not be returned from the SQL compiler.
YES Warnings will be returned from the SQL compiler.
SQLCODE +238 is an exception. It is returned regardless of the sqlwarn option value.
STATICREADONLY Determines whether static cursors will be treated as being READ ONLY. This DB2 precompile/bind option is not supported by DRDA.
NO All static cursors will take on the attributes as would normally be generated given the statement text and the setting of the LANGLEVEL precompile option. This is the default value.
YES Any static cursor that does not contain the FOR UPDATE or FOR READ ONLY clause will be considered READ ONLY.
STRDEL Designates whether an apostrophe (') or double quotation marks (") will be used as the string delimiter within SQL statements. This DRDA precompile/bind option is not supported by DB2. The DRDA server will use a system-defined default value if this option is not specified.
APOSTROPHE Use an apostrophe (') as the string delimiter.
QUOTE Use double quotation marks (") as the string delimiter.
SYNCPOINT Specifies how commits or rollbacks are to be coordinated among multiple database connections. This command parameter is ignored and is only included here for backward compatibility.
NONE Specifies that no Transaction Manager (TM) is to be used to perform a two-phase commit, and does not enforce single updater, multiple reader. A COMMIT is sent to each participating database. The application is responsible for recovery if any of the commits fail.
ONEPHASE Specifies that no TM is to be used to perform a two-phase commit. A one-phase commit is to be used to commit the work done by each database in multiple database transactions.
TWOPHASE Specifies that the TM is required to coordinate two-phase commits among those databases that support this protocol.
SYNTAX Suppresses the creation of a package or a bind file during precompilation. This option can be used to check the validity of the source file without modifying or altering existing packages or bind files. Syntax is a synonym for sqlerror check. If syntax is used together with the package option, package is ignored. TARGET Instructs the precompiler to produce modified code tailored to one of the supported compilers on the current platform. IBMCOB On AIX, code is generated for the IBM COBOL Set for AIX compiler. MFCOB Code is generated for the Micro Focus COBOL compiler. This is the default if a target value is not specified with the COBOL precompiler on all UNIX operating systems and Windows. ANSI_COBOL Code compatible with the ANS X3.23-1985 standard is generated. C Code compatible with the C compilers supported by DB2 on the current platform is generated.
CPLUSPLUS Code compatible with the C++ compilers supported by DB2 on the current platform is generated. FORTRAN Code compatible with the FORTRAN compilers supported by DB2 on the current platform is generated. TEXT label The description of a package. Maximum length is 255 characters. The default value is blanks. This DRDA precompile/bind option is not supported by DB2. TRANSFORM GROUP Specifies the transform group name to be used by static SQL statements for exchanging user-defined structured type values with host programs. This transform group is not used for dynamic SQL statements or for the exchange of parameters and results with external functions or methods. This option is not supported by DRDA servers. groupname An SQL identifier of up to 18 characters in length. A group name cannot include a qualifier prefix and cannot begin with the prefix SYS since this is reserved for database use. In a static SQL statement that interacts with host variables, the name of the transform group to be used for exchanging values of a structured type is as follows: v The group name in the TRANSFORM GROUP bind option, if any v The group name in the TRANSFORM GROUP prep option as specified at the original precompilation time, if any v The DB2_PROGRAM group, if a transform exists for the given type whose group name is DB2_PROGRAM
v No transform group is used if none of the above conditions exist.
The following errors are possible during the bind of a static SQL statement:
v SQLCODE yyyyy, SQLSTATE xxxxx: A transform is needed, but no static transform group has been selected.
v SQLCODE yyyyy, SQLSTATE xxxxx: The selected transform group does not include a necessary transform (TO SQL for input variables, FROM SQL for output variables) for the data type that needs to be exchanged.
v SQLCODE yyyyy, SQLSTATE xxxxx: The result type of the FROM SQL transform is not compatible with the type of the output variable, or the parameter type of the TO SQL transform is not compatible with the type of the input variable.
In these error messages, yyyyy is replaced by the SQL error code, and xxxxx by the SQL state code.
VALIDATE Determines when the database manager checks for authorization errors and object not found errors. The package owner authorization ID is used for validity checking.
BIND Validation is performed at precompile/bind time. If all objects do not exist, or all authority is not held, error messages are produced. If sqlerror continue is specified, a package/bind file is produced despite the error message, but the statements in error are not executable.
RUN Validation is attempted at bind time. If all objects exist, and all authority is held, no further checking is performed at execution time. If all objects do not exist, or all authority is not held at precompile/bind time, warning messages are produced, and the package is successfully bound, regardless of the sqlerror continue option setting. However, authority checking and existence checking for SQL statements that failed these checks during the precompile/bind process can be redone at execution time.
VERSION Defines the version identifier for a package. If this option is not specified, the package version will be "" (the empty string).
version-id Specifies a version identifier that is any alphanumeric value, $, #, @, _, -, or ., up to 64 characters in length. AUTO The version identifier will be generated from the consistency token. If the consistency token is a timestamp (it will be if the LEVEL option is not specified), the timestamp is converted into ISO character format and is used as the version identifier. WCHARTYPE Specifies the format for graphic data. CONVERT Host variables declared using the wchar_t base type will be treated as containing data in wchar_t format. Since this format is not
directly compatible with the format of graphic data stored in the database (DBCS format), input data in wchar_t host variables is implicitly converted to DBCS format on behalf of the application, using the ANSI C function wcstombs(). Similarly, output DBCS data is implicitly converted to wchar_t format, using mbstowcs(), before being stored in host variables. NOCONVERT Host variables declared using the wchar_t base type will be treated as containing data in DBCS format. This is the format used within the database for graphic data; it is, however, different from the native wchar_t format implemented in the C language. Using NOCONVERT means that graphic data will not undergo conversion between the application and the database, which can improve efficiency. The application is, however, responsible for ensuring that data in wchar_t format is not passed to the database manager. When this option is used, wchar_t host variables should not be manipulated with the C wide character string functions, and should not be initialized with wide character literals (L-literals). Usage notes: A modified source file is produced, which contains host language equivalents to the SQL statements. By default, a package is created in the database to which a connection has been established. The name of the package is the same as the file name (minus the extension and folded to uppercase), up to a maximum of 8 characters. Following connection to a database, PREP executes under the transaction that was started. PREP then issues a COMMIT or a ROLLBACK to terminate the current transaction and start another one. Creating a package with a schema name that does not already exist results in the implicit creation of that schema. The schema owner is SYSIBM. The CREATEIN privilege on the schema is granted to PUBLIC. During precompilation, an Explain Snapshot is not taken unless a package is created and explsnap has been specified. 
The snapshot is put into the Explain tables of the user creating the package. Similarly, Explain table information is only captured when explain is specified, and a package is created. Precompiling stops if a fatal error or more than 100 errors occur. If a fatal error occurs, the utility stops precompiling, attempts to close all files, and discards the package. When a package exhibits bind behavior, the following will be true: 1. The implicit or explicit value of the BIND option OWNER will be used for authorization checking of dynamic SQL statements. 2. The implicit or explicit value of the BIND option QUALIFIER will be used as the implicit qualifier for qualification of unqualified objects within dynamic SQL statements. 3. The value of the special register CURRENT SCHEMA has no effect on qualification.
In the event that multiple packages are referenced during a single connection, all dynamic SQL statements prepared by those packages will exhibit the behavior as specified by the DYNAMICRULES option for that specific package and the environment they are used in. If an SQL statement was found to be in error and the PRECOMPILE option SQLERROR CONTINUE was specified, the statement will be marked as invalid and another PRECOMPILE must be issued in order to change the state of the SQL statement. Implicit and explicit rebind will not change the state of an invalid statement in a package bound with VALIDATE RUN. A statement can change from static to incremental bind or incremental bind to static across implicit and explicit rebinds depending on whether or not object existence or authority problems exist during the rebind. Binding a package with REOPT ONCE or REOPT ALWAYS might change static and dynamic statement compilation and performance. Related concepts: v Authorization considerations for dynamic SQL in Developing SQL and External Routines v Effect of DYNAMICRULES bind option on dynamic SQL in Developing Embedded SQL Applications v Performance improvements when using REOPT option of the BIND command in Developing Embedded SQL Applications v WCHARTYPE precompiler option for graphic data in C and C++ embedded SQL applications in Developing Embedded SQL Applications Related tasks: v Specifying row blocking to reduce overhead in Performance Guide Related reference: v BIND on page 355 v Datetime values in SQL Reference, Volume 1 v SET CURRENT QUERY OPTIMIZATION statement in SQL Reference, Volume 2 v sqlaprep API - Precompile application program in Administrative API Reference
PRUNE HISTORY/LOGFILE
Used to delete entries from the recovery history file or to delete log files from the active log file path. Deleting entries from the recovery history file might be necessary if the file becomes excessively large and the retention period is high. Authorization: One of the following: v sysadm v sysctrl v sysmaint v dbadm Required connection: Database Command syntax:
PRUNE HISTORY timestamp [WITH FORCE OPTION] [AND DELETE]
PRUNE LOGFILE PRIOR TO log-file-name
Command parameters: HISTORY timestamp Identifies a range of entries in the recovery history file that will be deleted. A complete time stamp (in the form yyyymmddhhmmss), or an initial prefix (minimum yyyy) can be specified. All entries with time stamps equal to or less than the time stamp provided are deleted from the recovery history file. WITH FORCE OPTION Specifies that the entries will be pruned according to the time stamp specified, even if some entries from the most recent restore set are deleted from the file. A restore set is the most recent full database backup including any restores of that backup image. If this parameter is not specified, all entries from the backup image forward will be maintained in the history. AND DELETE Specifies that the associated log archives will be physically deleted (based on the location information) when the history file entry is removed. This option is especially useful for ensuring that archive storage space is recovered when log archives are no longer needed. If you are archiving logs via a user exit program, the logs cannot be deleted using this option. LOGFILE PRIOR TO log-file-name Specifies a string for a log file name, for example S0000100.LOG. All log files prior to (but not including) the specified log file will be deleted. The LOGRETAIN database configuration parameter must be set to RECOVERY or CAPTURE. Examples:
To remove the entries for all restores, loads, table space backups, and full database backups taken before and including December 1, 1994 from the recovery history file, enter:
db2 prune history 199412
199412 is interpreted as 19941201000000.
Usage notes:
If the FORCE option is used, you might delete entries that are required for automatic restoration of databases. Manual restores will still work correctly. Use of this command can also prevent the db2ckrst utility from being able to correctly analyze the complete chain of required backup images. Using the PRUNE HISTORY command without the FORCE option prevents required entries from being deleted.
Pruning backup entries from the history file causes related file backups on DB2 Data Links Manager servers to be deleted.
Related concepts:
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference
Related reference:
v PRUNE HISTORY/LOGFILE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
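For the LOGFILE form, a sketch using the sample log file name S0000100.LOG from the parameter description above:

```shell
# Sketch: delete all log files in the active log path older than
# S0000100.LOG; the named file itself is not deleted. Requires the
# LOGRETAIN configuration parameter set to RECOVERY or CAPTURE.
db2 prune logfile prior to S0000100.LOG
```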
PUT ROUTINE
Uses the specified routine SQL Archive (SAR) file to define a routine in the database. Authorization: dbadm Required connection: Database. If implicit connect is enabled, a connection to the default database is established. Command syntax:
PUT ROUTINE FROM file-name [OWNER new-owner [USE REGISTERS]]
Command parameters: FROM file-name Names the file where routine SQL archive (SAR) is stored. OWNER new-owner Specifies a new authorization name that will be used for authorization checking of the routine. The new owner must have the necessary privileges for the routine to be defined. If the OWNER clause is not specified, the authorization name that was originally defined for the routine is used. USE REGISTERS Indicates that the CURRENT SCHEMA and CURRENT PATH special registers are used to define the routine. If this clause is not specified, the settings for the default schema and SQL path are the settings used when the routine is defined. CURRENT SCHEMA is used as the schema name for unqualified object names in the routine definition (including the name of the routine) and CURRENT PATH is used to resolve unqualified routines and data types in the routine definition. Examples:
PUT ROUTINE FROM procs/proc1.sar;
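A variant sketch (the new owner name is hypothetical) showing the optional clauses:

```shell
# Hypothetical sketch: define the routine from the archive, performing
# authorization checking as NEWOWNER and resolving unqualified names
# with the CURRENT SCHEMA and CURRENT PATH special registers.
db2 put routine from procs/proc1.sar owner newowner use registers
```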
Usage notes:
No more than one procedure can be concurrently installed under a given schema. If a GET ROUTINE or a PUT ROUTINE operation (or their corresponding procedure) fails to execute successfully, it will always return an error (SQLSTATE 38000), along with diagnostic text providing information about the cause of the failure. For example, if the procedure name provided to GET ROUTINE does not identify an SQL procedure, diagnostic text "-204, 42704" will be returned, where -204 and 42704 are the SQLCODE and SQLSTATE, respectively, that identify the cause of the problem. The SQLCODE and SQLSTATE in this example indicate that the procedure name provided in the GET ROUTINE command is undefined.
Related reference:
v GET ROUTINE on page 485 v PUT_ROUTINE_SAR procedure in Administrative SQL Routines and Views
QUERY CLIENT
Returns current connection settings for an application process. Authorization: None Required connection: None Command syntax:
QUERY CLIENT
Command parameters: None Examples: The following is sample output from QUERY CLIENT:
The current connection settings of the application process are:

CONNECT                 = 1
DISCONNECT              = EXPLICIT
MAX_NETBIOS_CONNECTIONS = 1
SQLRULES                = DB2
SYNCPOINT               = ONEPHASE
CONNECT_DBPARTITIONNUM  = CATALOG_DBPARTITIONNUM
ATTACH_DBPARTITIONNUM   = -1
If CONNECT_DBPARTITIONNUM and ATTACH_DBPARTITIONNUM are not set using the SET CLIENT command, these parameters have values identical to that of the environment variable DB2NODE. If the displayed value of the CONNECT_DBPARTITIONNUM or the ATTACH_DBPARTITIONNUM parameter is -1, the parameter has not been set; that is, either the environment variable DB2NODE has not been set, or the parameter was not specified in a previously issued SET CLIENT command. Usage notes: The connection settings for an application process can be queried at any time during execution. Related reference: v sqleqryc API - Query client connection settings in Administrative API Reference v SET CLIENT on page 716
QUIESCE
Forces all users off the specified instance or database and puts it into quiesced mode. While the instance or database is in quiesced mode, you can perform administrative tasks on it. After administrative tasks are complete, use the UNQUIESCE command to activate the instance or database and allow other users to connect to the database, avoiding the need to shut down and perform another database start.

In this mode, only users with authority in this restricted mode are allowed to attach or connect to the instance or database. Users with sysadm, sysmaint, and sysctrl authority always have access to an instance while it is quiesced, and users with sysadm and dbadm authority always have access to a database while it is quiesced.

Scope: QUIESCE DATABASE places all objects in the database in quiesced mode. Only the allowed user or group and users with sysadm, sysmaint, dbadm, or sysctrl authority will be able to access the database or its objects. QUIESCE INSTANCE instance-name places the instance instance-name and the databases in that instance in quiesced mode. The instance will be accessible only to users with sysadm, sysmaint, and sysctrl authority and to the allowed user or group. If an instance is in quiesced mode, a database in the instance cannot be put in quiesced mode.

Authorization: One of the following:
For database level quiesce:
v sysadm
v dbadm
For instance level quiesce:
v sysadm
v sysctrl

Required connection: Database (a database connection is not required for an instance quiesce.)

Command syntax:
QUIESCE DATABASE {IMMEDIATE | DEFER [WITH TIMEOUT minutes]} [FORCE CONNECTIONS]

QUIESCE INSTANCE instance-name [USER user-name | GROUP group-name] {IMMEDIATE | DEFER [WITH TIMEOUT minutes]} [FORCE CONNECTIONS]
Command parameters:
DEFER
    Wait for applications until they commit the current unit of work.
WITH TIMEOUT minutes
    Specifies a time, in minutes, to wait for applications to commit the current unit of work. If no value is specified, in a single-partition database environment the default value is 10 minutes. In a partitioned database environment, the value specified by the start_stop_timeout database manager configuration parameter will be used.
IMMEDIATE
    Do not wait for the transactions to be committed; immediately roll back the transactions.
FORCE CONNECTIONS
    Force the connections off.
DATABASE
    Quiesce the database. All objects in the database will be placed in quiesced mode. Only specified users in specified groups and users with sysadm, sysmaint, and sysctrl authority will be able to access the database or its objects.
INSTANCE instance-name
    The instance instance-name and the databases in the instance will be placed in quiesced mode. The instance will be accessible only to users with sysadm, sysmaint, and sysctrl authority and specified users in specified groups.
USER user-name
    Specifies the name of a user who will be allowed access to the instance while it is quiesced.
GROUP group-name
    Specifies the name of a group that will be allowed access to the instance while the instance is quiesced.
Examples:
In the following example, the default behavior is to force connections, so the FORCE CONNECTIONS clause does not need to be stated explicitly and can be removed from this example:
db2 quiesce instance crankarm user frank immediate force connections
The following example forces off all users with connections to the database.
db2 quiesce db immediate
v The first example will quiesce the instance crankarm, while allowing user frank to continue using the database. The second example will quiesce the database you are attached to, preventing access by all users except those with one of the following authorities: sysadm, sysmaint, sysctrl, or dbadm.
v This command will force all users off the database or instance if the FORCE CONNECTIONS option is supplied. FORCE CONNECTIONS is the default behavior; the parameter is allowed in the command for compatibility reasons.
v The command will be synchronized with the FORCE and will only complete once the FORCE has completed.
Usage notes:
v After QUIESCE INSTANCE, only users with sysadm, sysmaint, or sysctrl authority, or a user name and group name provided as parameters to the command, can connect to the instance.
v After QUIESCE DATABASE, users with sysadm, sysmaint, sysctrl, or dbadm authority, and users with GRANT/REVOKE privileges, can designate who will be able to connect. This information is stored permanently in the database catalog tables. For example:
grant quiesce_connect on database to <username/groupname>
revoke quiesce_connect on database from <username/groupname>
Related reference:
v UNQUIESCE on page 754
v db2DatabaseQuiesce API - Quiesce the database in Administrative API Reference
v db2InstanceQuiesce API - Quiesce instance in Administrative API Reference
v QUIESCE DATABASE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
Command parameters:
TABLE tablename
    Specifies the unqualified table name. The table cannot be a system catalog table.
schema.tablename
    Specifies the qualified table name. If schema is not provided, the CURRENT SCHEMA will be used. The table cannot be a system catalog table.
Usage notes: This command is not supported for declared temporary tables.

A quiesce is a persistent lock. Its benefit is that it persists across transaction failures, connection failures, and even across system failures (such as a power failure or reboot).

A quiesce is owned by a connection. If the connection is lost, the quiesce remains, but it has no owner, and is called a phantom quiesce. For example, if a power outage caused a load operation to be interrupted during the delete phase, the table spaces for the loaded table would be left in delete pending, quiesce exclusive state. Upon
Once completed, the new connection owns the quiesce, and the load operation can be restarted.

There is a limit of five quiescers on a table space at any given time. A quiescer can upgrade the state of a table space from a less restrictive state to a more restrictive one (for example, S to U, or U to X). If a user requests a state lower than one that is already held, the original state is returned. States are not downgraded.

Related reference:
v sqluvqdp API - Quiesce table spaces for a table in Administrative API Reference
v LIST TABLESPACES on page 550
v LOAD on page 557
v QUIESCE on page 612
v UNQUIESCE on page 754
v QUIESCE TABLESPACES FOR TABLE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
QUIT
Exits the command line processor interactive input mode and returns to the operating system command prompt. If a batch file is being used to input commands to the command line processor, commands are processed until QUIT, TERMINATE, or the end of the file is encountered.
Authorization: None
Required connection: None
Command syntax:
QUIT
Command parameters: None
Usage notes: QUIT does not terminate the command line processor back-end process or break a database connection. CONNECT RESET breaks a connection, but does not terminate the back-end process. The TERMINATE command does both.
Related reference:
v TERMINATE on page 744
REBIND
Allows the user to recreate a package stored in the database without the need for a bind file.
Authorization: One of the following:
v sysadm or dbadm authority
v ALTERIN privilege on the schema
v BIND privilege on the package
The authorization ID logged in the BOUNDBY column of the SYSCAT.PACKAGES system catalog table, which is the ID of the most recent binder of the package, is used as the binder authorization ID for the rebind, and as the default schema for table references in the package. This default qualifier can be different from the authorization ID of the user executing the rebind request. REBIND will use the same bind options that were specified when the package was created.
Required connection: Database. If no database connection exists, and if implicit connect is enabled, a connection to the default database is made.
Command syntax:
REBIND [PACKAGE] package-name [VERSION version-name] [RESOLVE {ANY | CONSERVATIVE}] [REOPT {NONE | ONCE | ALWAYS}]
Command parameters:
PACKAGE package-name
    The qualified or unqualified name that designates the package to be rebound.
VERSION version-name
    The specific version of the package to be rebound. When the version is not specified, it is taken to be "" (the empty string).
RESOLVE
    Specifies whether rebinding of the package is to be performed with or without conservative binding semantics. This affects whether new functions and data types are considered during function resolution and type resolution on static DML statements in the package. This option is not supported by DRDA. Valid values are:
    ANY
        Any of the functions and types in the SQL path are considered for function and type resolution. Conservative binding semantics are not used. This is the default.
    CONSERVATIVE
        Only functions and types in the SQL path that were defined before the last explicit bind time stamp are considered for function and type resolution. Conservative binding semantics are used. This option is not supported for an inoperative package.
REOPT
    Specifies whether to have DB2 optimize an access path using values for host variables, parameter markers, and special registers. Valid values are:
    NONE
        The access path for a given SQL statement containing host variables, parameter markers, or special registers will not be optimized using real values for these variables. The default estimates for these variables are used instead, and this plan is cached and used subsequently. This is the default behavior.
    ONCE
        The access path for a given SQL statement will be optimized using the real values of the host variables, parameter markers, or special registers when the query is first executed. This plan is cached and used subsequently.
    ALWAYS
        The access path for a given SQL statement will always be compiled and reoptimized using the values of the host variables, parameter markers, or special registers known at each execution time.
Usage notes: REBIND does not automatically commit the transaction following a successful rebind. The user must explicitly commit the transaction. This enables "what if" analysis, in which the user updates certain statistics and then tries to rebind the package to see what changes. It also permits multiple rebinds within a unit of work. The REBIND command will commit the transaction if auto-commit is enabled.

This command:
v Provides a quick way to recreate a package. This enables the user to take advantage of a change in the system without a need for the original bind file. For example, if it is likely that a particular SQL statement can take advantage of a newly created index, the REBIND command can be used to recreate the package. REBIND can also be used to recreate packages after RUNSTATS has been executed, thereby taking advantage of the new statistics.
v Provides a method to recreate inoperative packages.
Inoperative packages must be explicitly rebound by invoking either the bind utility or the rebind utility. A package will be marked inoperative (the VALID column of the SYSCAT.PACKAGES system catalog will be set to X) if a function instance on which the package depends is dropped.
v Gives users control over the rebinding of invalid packages. Invalid packages will be automatically (or implicitly) rebound by the database manager when they are executed. This might result in a noticeable delay in the execution of the first SQL request for the invalid package. It might be desirable to explicitly rebind invalid packages, rather than allow the system to automatically rebind them, in order to eliminate the initial delay and to prevent unexpected SQL error messages which might be returned if the implicit rebind fails. For example, following migration, all packages stored in the database will be invalidated by the DB2 Version 8 migration process. Given that this might involve a large number of
packages, it may be desirable to explicitly rebind all of the invalid packages at one time. This explicit rebinding can be accomplished using BIND, REBIND, or the db2rbind tool.

If multiple versions of a package (multiple versions with the same package name and creator) exist, only one version can be rebound at once. If not specified in the VERSION option, the package version defaults to "" (the empty string). Even if only one package exists with a matching name, it will not be rebound unless its version matches the one specified or the default.

The choice of whether to use BIND or REBIND to explicitly rebind a package depends on the circumstances. It is recommended that REBIND be used whenever the situation does not specifically require the use of BIND, since the performance of REBIND is significantly better than that of BIND. BIND must be used, however:
v When there have been modifications to the program (for example, when SQL statements have been added or deleted, or when the package does not match the executable for the program).
v When the user wishes to modify any of the bind options as part of the rebind. REBIND does not support any bind options. For example, if the user wishes to have privileges on the package granted as part of the bind process, BIND must be used, since it has a grant option.
v When the package does not currently exist in the database.
v When detection of all bind errors is desired. REBIND only returns the first error it detects, whereas the BIND command returns the first 100 errors that occur during binding.

REBIND is supported by DB2 Connect.

If REBIND is executed on a package that is in use by another user, the rebind will not occur until the other user's logical unit of work ends, because an exclusive lock is held on the package's record in the SYSCAT.PACKAGES system catalog table during the rebind.

When REBIND is executed, the database manager recreates the package from the SQL statements stored in the SYSCAT.STATEMENTS system catalog table.
If REBIND encounters an error, processing stops, and an error message is returned.

REBIND will re-explain packages that were created with the explsnap bind option set to YES or ALL (indicated in the EXPLAIN_SNAPSHOT column in the SYSCAT.PACKAGES catalog table entry for the package) or with the explain bind option set to YES or ALL (indicated in the EXPLAIN_MODE column in the SYSCAT.PACKAGES catalog table entry for the package). The Explain tables used are those of the REBIND requester, not the original binder.

If an SQL statement was found to be in error and the BIND option SQLERROR CONTINUE was specified, the statement will be marked as invalid even if the problem has been corrected. REBIND will not change the state of an invalid statement. In a package bound with VALIDATE RUN, a statement can change from static to incremental bind, or from incremental bind to static, across a REBIND, depending on whether or not object existence or authority problems exist during the REBIND. Rebinding a package with REOPT ONCE/ALWAYS might change static and dynamic statement compilation and performance.
If REOPT is not specified, REBIND will preserve the existing REOPT value used at precompile or bind time.
Related reference:
v BIND on page 355
v RUNSTATS on page 702
v db2rbind - Rebind all packages on page 230
v sqlarbnd API - Rebind package in Administrative API Reference
RECONCILE
Validates the references to files for the DATALINK data of a table. The rows for which the references to files cannot be established are copied to the exception table (if specified), and modified in the input table. Reconcile produces a message file (reconcil.msg) in the instance path on UNIX-based systems, and in the install path on Windows platforms. This file contains warning and error messages that are generated during validation of the exception table.
Authorization: One of the following:
v sysadm
v sysctrl
v sysmaint
v dbadm
v CONTROL privilege on the table
Required connection: Database
Command syntax:
RECONCILE table-name DLREPORT filename [FOR EXCEPTION table-name]
Command parameters:
RECONCILE table-name
    Specifies the table on which reconciliation is to be performed. An alias, or the fully qualified or unqualified table name, can be specified. A qualified table name is in the form schema.tablename. If an unqualified table name is specified, the table will be qualified with the current authorization ID.
DLREPORT filename
    Specifies the file that will contain information about the files that are unlinked during reconciliation. The name must be fully qualified (for example, /u/johnh/report). The reconcile utility appends a .ulk extension to the specified file name (for example, report.ulk). When no table is provided with the FOR EXCEPTION clause, a .exp file extension is appended to the exception report file.
FOR EXCEPTION table-name
    Specifies the exception table into which rows that encounter link failures for DATALINK values are to be copied. If no table is specified, an exception report file is generated in the directory specified in the DLREPORT option.
Examples:
The following command reconciles the table DEPT, and writes exceptions to the exception table EXCPTAB, which was created by the user. Information about files that were unlinked during reconciliation is written into the file report.ulk, which
is created in the directory /u/johnh. If FOR EXCEPTION excptab had not been specified, the exception information would have been written to the file report.exp, created in the /u/johnh directory.
db2 reconcile dept dlreport /u/johnh/report for exception excptab
Usage notes: During reconciliation, attempts are made to link files which exist according to table data, but which do not exist according to Data Links File Manager metadata, if no other conflict exists. A required DB2 Data Links Manager is one which has a referenced DATALINK value in the table. Reconcile tolerates the unavailability of a required DB2 Data Links Manager, as well as of other DB2 Data Links Managers that are configured to the database but are not part of the table data.

Reconciliation is performed with respect to all DATALINK data in the table. If file references cannot be reestablished, the violating rows are inserted into the exception table (if specified). These rows are not deleted from the input table. To ensure file reference integrity, the offending DATALINK values are nulled. If the column is defined as not nullable, the DATALINK values are replaced by a zero-length URL.

If a file is linked under a DATALINK column defined with WRITE PERMISSION ADMIN and modified but not yet committed (that is, the file is still in the update-in-progress state), the reconciliation process renames the modified file to a file name with .mod as the suffix. It also removes the file from the update-in-progress state. If the DATALINK column is defined with RECOVERY YES, the previous archive version is restored.

If an exception table is not specified, the host name, file name, column ID, and reason code for each of the DATALINK column values for which file references could not be reestablished are copied to an exception report file (<filename>.exp). If the file reference could not be reestablished because the DB2 Data Links Manager is unavailable or was dropped from the database using the DROP DATALINKS MANAGER command, the file name reported in the exception report file is not the full file name; the prefix will be missing.
For example, if the original DATALINK value was http://host.com/dlfs/x/y/a.b, the value reported in the exception table will be http://host.com/x/y/a.b. The prefix name dlfs will not be included. If the DATALINK column is defined with RECOVERY YES, the previous archive version is restored.

At the end of the reconciliation process, the table is taken out of datalink reconcile pending (DRP) state only if reconcile processing is complete on all the required DB2 Data Links Managers. If reconcile processing is pending on any of the required DB2 Data Links Managers (because they were unavailable), the table will remain, or be placed, in DRP state. If for some reason an exception occurred on one of the affected Data Links Managers such that the reconciliation could not be completed successfully, the table might also be placed in a DRNP state, for which further manual intervention will be required before full referential integrity for that table can be restored.

The exception table, if specified, should be created before the reconcile utility is run. The exception table used with the reconcile utility is identical to the exception table used by the load utility.
The exception table mimics the definition of the table being reconciled. It can have one or two optional columns following the data columns. The first optional column is the TIMESTAMP column; it will contain the time stamp for when the reconcile operation was started. The second optional column should be of type CLOB (32KB or larger); it will contain the IDs of columns with link failures, and the reasons for those failures.

The DATALINK columns in the exception table should specify NO LINK CONTROL. This ensures that a file is not linked when a row (with a DATALINK column) is inserted, and that an access token is not generated when rows are selected from the exception table.

Information in the MESSAGE column is organized according to the following structure:
Field number  Content                        Size          Comments
1             Number of violations           5 characters  Right justified, padded with 0
2             Type of violation              1 character   L - DATALINK violation
3             Length of violation            5 characters  Right justified, padded with 0
4             Number of violating            4 characters  Right justified, padded with 0
              DATALINK columns
5             DATALINK column number of      4 characters  Right justified, padded with 0
              the first violating column
6             Reason for violation           5 characters  Right justified, padded with 0

Fields 5 and 6 are repeated for each violating column.
00012 - Specifies that the length of the violation is 12 bytes.
0002 - Specifies that there are 2 columns in the row which encountered link failures.
0004,00002 0005,00001 - Specifies the column ID and the reason for the violation, for each violating column.
If the message column is present, the time stamp column must also be present.
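As an illustration of this layout, the following Python sketch decodes a MESSAGE value into its fields. It is a hypothetical helper based solely on the field widths documented above, not a DB2 utility:

```python
def parse_reconcile_message(msg: str) -> dict:
    """Decode the fixed-width MESSAGE column layout:
    5 chars  number of violations
    1 char   type of violation ('L' = DATALINK violation)
    5 chars  length of the violation
    4 chars  number of violating DATALINK columns
    then, per violating column: 4-char column number + 5-char reason code."""
    record = {
        "violations": int(msg[0:5]),
        "type": msg[5],
        "length": int(msg[6:11]),
        "columns": [],
    }
    n_columns = int(msg[11:15])
    pos = 15
    for _ in range(n_columns):
        column = int(msg[pos:pos + 4])
        reason = int(msg[pos + 4:pos + 9])
        record["columns"].append((column, reason))
        pos += 9
    return record

# Two violating columns: column 4 with reason 2, column 5 with reason 1,
# matching the worked example above.
example = "00001" + "L" + "00012" + "0002" + "000400002" + "000500001"
decoded = parse_reconcile_message(example)
```

Because every field is right justified and zero padded, plain `int()` conversion on each fixed-width slice is sufficient; no delimiter scanning is needed.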
RECOVER DATABASE
Restores and rolls forward a database to a particular point in time or to the end of the logs.

Scope: In a partitioned database environment, this command can only be invoked from the catalog partition. A database recover operation to a specified point in time affects all database partitions that are listed in the db2nodes.cfg file. A database recover operation to the end of logs affects the database partitions that are specified. If no partitions are specified, it affects all database partitions that are listed in the db2nodes.cfg file.

Authorization: To recover an existing database, one of the following:
v sysadm
v sysctrl
v sysmaint
To recover to a new database, one of the following:
v sysadm
v sysctrl

Required connection: To recover an existing database, a database connection is required. This command automatically establishes a connection to the specified database and will release the connection when the recover operation finishes. To recover to a new database, an instance attachment and a database connection are required. The instance attachment is required to create the database.

Command syntax:
RECOVER DATABASE source-database-alias
    [TO {isotime [USING LOCAL TIME | USING UTC TIME] [ON ALL DBPARTITIONNUMS] | END OF LOGS [On Database Partition clause]}]
    [USER username [USING password]]
    [USING HISTORY FILE (history-file [, History File clause])]
    [OVERFLOW LOG PATH (log-directory [, Log Overflow clause])]
    [COMPRLIB lib-name]
    [COMPROPTS options-string]
    [RESTART]
Command parameters:
DATABASE database-alias
    The alias of the database that is to be recovered.
USER username
    The user name under which the database is to be recovered.
USING password
    The password used to authenticate the user name. If the password is omitted, the user is prompted to enter it.
TO isotime
    The point in time to which all committed transactions are to be recovered (including the transaction committed precisely at that time, as well as all transactions committed previously). This value is specified as a time stamp, a 7-part character string that identifies a combined date and time. The format is yyyy-mm-dd-hh.mm.ss.nnnnnn (year, month, day, hour, minutes, seconds, microseconds), expressed in Coordinated Universal Time (UTC, formerly known as GMT). UTC helps to avoid having the same time stamp associated with different logs (because of a change in time associated with daylight savings time, for example). The time stamp in a backup image is based on the local time at which the backup operation started. The CURRENT TIMEZONE special register specifies the difference between UTC and local time at the application server. The difference is represented by a time duration (a decimal number in which the first two digits represent the number of hours, the next two digits represent the number of minutes, and the last two digits represent the number of seconds). Subtracting CURRENT TIMEZONE from a local time converts that local time to UTC.
USING LOCAL TIME
    Specifies the point in time to which to recover. This option allows the user to recover to a point in time that is the server's local time rather than UTC time.
    Notes:
    1. If the user specifies a local time for recovery, all messages returned to the user will also be in local time. All times are converted on the server, and in partitioned database environments, on the catalog database partition.
    2. The timestamp string is converted to UTC on the server, so the time is local to the server's time zone, not the client's. If the client is in one time zone and the server in another, the server's local time should be used. This differs from the local time option in the Control Center, which is local to the client.
    3. If the timestamp string is close to the time change of the clock due to daylight saving time, it is important to know whether the stop time is before or after the clock change, and to specify it correctly.
USING UTC TIME
    Specifies the point in time to which to recover.
END OF LOGS
    Specifies that all committed transactions from all online archive log files listed in the database configuration parameter logpath are to be applied.
ON ALL DBPARTITIONNUMS
    Specifies that transactions are to be rolled forward on all database partitions specified in the db2nodes.cfg file. This is the default if a database partition clause is not specified.
EXCEPT
    Specifies that transactions are to be rolled forward on all database partitions specified in the db2nodes.cfg file, except those specified in the database partition list.
ON DBPARTITIONNUM / ON DBPARTITIONNUMS
    Roll the database forward on a set of database partitions.
db-partition-number1
    Specifies a database partition number in the database partition list.
db-partition-number2
    Specifies the second database partition number, so that all database partitions from db-partition-number1 up to and including db-partition-number2 are included in the database partition list.
USING HISTORY FILE history-file
    Specifies the history file to be used for the recover operation.
    history-file ON DBPARTITIONNUM
        In a partitioned database environment, allows a different history file to be used on a specific database partition.
OVERFLOW LOG PATH log-directory
    Specifies an alternate log path to be searched for archived logs during recovery. Use this parameter if log files were moved to a location other than that specified by the logpath database configuration parameter. In a partitioned database environment, this is the (fully qualified) default overflow log path for all database partitions. A relative overflow log path can be specified for single-partition databases. The OVERFLOW LOG PATH command parameter will overwrite the value (if any) of the database configuration parameter overflowlogpath.
COMPRLIB lib-name
    Indicates the name of the library to be used to perform the decompression. The name must be a fully qualified path referring to a file on the server. If this parameter is not specified, DB2 will attempt to use the library stored in the image. If the backup was not compressed, the value of this parameter will be ignored. If the specified library cannot be loaded, the restore operation will fail.
COMPROPTS options-string
    Describes a block of binary data that is passed to the initialization routine in the decompression library. The DB2 database system passes this string directly from the client to the server, so any issues of byte reversal or code page conversion are handled by the decompression library. If the first character of the data block is @, the remainder of the data is interpreted by the DB2 database system as the name of a file residing on the server. The DB2 database system will then replace the contents of string with the contents of this file and pass the new value to the initialization routine instead. The maximum length for the string is 1024 bytes.
RESTART
    The RESTART keyword can be used if a prior RECOVER operation was interrupted or otherwise did not complete. Starting in Version 9.1, a subsequent RECOVER command will attempt to continue the previous RECOVER, if possible. Using the RESTART keyword forces RECOVER to start with a fresh restore and then roll forward to the point in time specified.
log-directory ON DBPARTITIONNUM
    In a partitioned database environment, allows a different log path to override the default overflow log path for a specific database partition.
Examples:
In a single-partition database environment, where the database being recovered currently exists, and the most recent version of the history file is available in the dftdbpath:
1. To use the latest backup image and roll forward to the end of logs using all default values:
RECOVER DB SAMPLE
2. To recover the database to a point in time (PIT), issue the following command. The most recent image that can be used will be restored, and logs applied until the PIT is reached.
RECOVER DB SAMPLE TO 2001-12-31-04.00.00
3. To recover the database using a saved version of the history file, issue the following command. For example, if the user needs to recover to an extremely old PIT which is no longer contained in the current history file, the user will have to provide a version of the history file from that time period. If the user has saved a history file from that time period, this version can be used to drive the recover.
RECOVER DB SAMPLE TO 1999-12-31-04.00.00 USING HISTORY FILE (/home/user/old1999files/db2rhist.asc)
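The timestamps in these examples are interpreted as UTC unless USING LOCAL TIME is specified. The CURRENT TIMEZONE subtraction described under TO isotime can be sketched in Python; this is an illustrative helper, not part of DB2, and it ignores the microseconds part of the 7-part format:

```python
from datetime import datetime, timedelta

def local_to_utc(local_ts: str, current_timezone: int) -> str:
    """Convert a local 'yyyy-mm-dd-hh.mm.ss' timestamp to UTC by
    subtracting CURRENT TIMEZONE, a signed hhmmss decimal duration
    (for example, -50000 means the server is 5 hours behind UTC)."""
    t = datetime.strptime(local_ts, "%Y-%m-%d-%H.%M.%S")
    sign = -1 if current_timezone < 0 else 1
    hhmmss = abs(current_timezone)
    hours, rest = divmod(hhmmss, 10000)
    minutes, seconds = divmod(rest, 100)
    offset = sign * timedelta(hours=hours, minutes=minutes, seconds=seconds)
    # Subtracting CURRENT TIMEZONE from a local time yields UTC.
    return (t - offset).strftime("%Y-%m-%d-%H.%M.%S")

# A server at UTC-5: local 04.00.00 corresponds to 09.00.00 UTC.
utc = local_to_utc("2001-12-31-04.00.00", -50000)
```

The duration is decomposed with `divmod` because it is a decimal number, not a count of seconds: the first two digits are hours, the next two minutes, and the last two seconds, exactly as described for the CURRENT TIMEZONE special register.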
In a single-partition database environment, where the database being recovered does not exist, you must use the USING HISTORY FILE clause to point to a history file.
1. If you have not made any backups of the history file, so that the only version available is the copy in the backup image, the recommendation is to issue a RESTORE followed by a ROLLFORWARD. However, to use RECOVER, you would first have to extract the history file from the image to some location, for example /home/user/oldfiles/db2rhist.asc, and then issue this command. (This version of the history file does not contain any information about log files that are required for rollforward, so this history file is not useful for RECOVER.)
RECOVER DB SAMPLE TO END OF LOGS USING HISTORY FILE (/home/user/fromimage/db2rhist.asc)
2. If you have been making periodic or frequent backup copies of the history file, the USING HISTORY FILE clause should be used to point to this version of the history file. If the file is /home/user/myfiles/db2rhist.asc, issue the command:
RECOVER DB SAMPLE TO PIT USING HISTORY FILE (/home/user/myfiles/db2rhist.asc)
(In this case, you can use any copy of the history file, not necessarily the latest, as long as it contains a backup taken before the point-in-time (PIT) requested.) In a partitioned database environment, where the database exists on all database partitions, and the latest history file is available in the dftdbpath on all database partitions: 1. To recover the database to a PIT on all nodes, issue the following. DB2 will verify that the PIT is reachable on all nodes before starting any restore operations.
RECOVER DB SAMPLE TO 2001-12-31-04.00.00
2. To recover the database to the end of logs on all nodes, issue the following. The RECOVER operation on each node is identical to a single-partition RECOVER.
RECOVER DB SAMPLE TO END OF LOGS
3. Even though the most recent version of the history file is in the dftdbpath, you might want to use several specific history files. Unless otherwise specified, each database partition will use the history file found locally at /home/user/oldfiles/db2rhist.asc. The exceptions are nodes 2 and 4. Node 2 will use: /home/user/node2files/db2rhist.asc, and node 4 will use: /home/user/node4files/db2rhist.asc.
RECOVER DB SAMPLE TO 1999-12-31-04.00.00 USING HISTORY FILE (/home/user/oldfiles/db2rhist.asc, /home/user/node2files/db2rhist.asc ON DBPARTITIONNUM 2, /home/user/node4files/db2rhist.asc ON DBPARTITIONNUM 4)
4. It is possible to recover a subset of nodes instead of all nodes; however, a PIT RECOVER cannot be done in this case. The recover must be done to the end of logs.
RECOVER DB SAMPLE TO END OF LOGS ON DBPARTITIONNUMS(2 TO 4, 7, 9)
In a partitioned database environment, where the database does not exist: 1. If you have not made any backups of the history file, so that the only version available is the copy in the backup image, the recommendation is to issue a RESTORE followed by a ROLLFORWARD. However, to use RECOVER, you would first have to extract the history file from the image to some location, for example, /home/user/fromimage/db2rhist.asc, and then issue this command.
(This version of the history file does not contain any information about log files that are required for rollforward, so this history file is not useful for the recover.)
RECOVER DB SAMPLE TO PIT USING HISTORY FILE (/home/user/fromimage/db2rhist.asc)
2. If you have been making periodic or frequent backup copies of the history file, the USING HISTORY FILE clause should be used to point to this version of the history file. If the file is /home/user/myfiles/db2rhist.asc, you can issue the following command:
RECOVER DB SAMPLE TO END OF LOGS USING HISTORY FILE (/home/user/myfiles/db2rhist.asc)
Usage notes:
v Recovering a database might require a load recovery using tape devices. If prompted for another tape, the user can respond with one of the following:
  c  Continue. Continue using the device that generated the warning message (for example, when a new tape has been mounted).
  d  Device terminate. Stop using the device that generated the warning message (for example, when there are no more tapes).
  t  Terminate. Terminate all devices.
v If there is a failure during the restore portion of the recover operation, you can reissue the RECOVER DATABASE command. If the restore operation was successful, but there was an error during the rollforward operation, you can issue a ROLLFORWARD DATABASE command, since it is not necessary (and it is time-consuming) to redo the entire recover operation.
v In a partitioned database environment, if there is an error during the restore portion of the recover operation, it is possible that it is only an error on a single database partition. Instead of reissuing the RECOVER DATABASE command, which restores the database on all database partitions, it is more efficient to issue a RESTORE DATABASE command for the database partition that failed, followed by a ROLLFORWARD DATABASE command.
Related concepts:
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference
Related tasks:
v Using recover in Data Recovery and High Availability Guide and Reference
REDISTRIBUTE DATABASE PARTITION GROUP
Command parameters: DATABASE PARTITION GROUP database partition group The name of the database partition group. This one-part name identifies a database partition group described in the SYSCAT.DBPARTITIONGROUPS catalog table. The database partition group cannot currently be undergoing redistribution. Tables in the IBMCATGROUP and the IBMTEMPGROUP database partition groups cannot be redistributed. UNIFORM Specifies that the data is uniformly distributed across hash partitions (that is, every hash partition is assumed to have the same number of rows), but the same number of hash partitions do not currently map to each database partition. After redistribution, all database partitions in the database partition group have approximately the same number of hash partitions. USING DISTFILE distfile If the distribution of distribution key values is skewed, use this option to achieve a uniform redistribution of data across the database partitions of a database partition group. Use the distfile to indicate the current distribution of data across the 4 096 hash partitions.
In the example, hash partition 2 has a weight of 112 000, and partition 3 (with a weight of 0) has no data mapping to it at all. The distfile should contain 4 096 positive integer values in character format. The sum of the values should be less than or equal to 4 294 967 295. If the path for distfile is not specified, the current directory is used. USING TARGETMAP targetmap The file specified in targetmap is used as the target distribution map. Data redistribution is done according to this file. If the path is not specified, the current directory is used. If a database partition included in the target map is not in the database partition group, an error is returned. Issue ALTER DATABASE PARTITION GROUP ADD DBPARTITIONNUM before running REDISTRIBUTE DATABASE PARTITION GROUP. If a database partition excluded from the target map is in the database partition group, that database partition will not be included in the partitioning. Such a database partition can be dropped using ALTER DATABASE PARTITION GROUP DROP DBPARTITIONNUM either before or after REDISTRIBUTE DATABASE PARTITION GROUP. CONTINUE Continues a previously failed REDISTRIBUTE DATABASE PARTITION GROUP operation. If none occurred, an error is returned. ROLLBACK Rolls back a previously failed REDISTRIBUTE DATABASE PARTITION GROUP operation. If none occurred, an error is returned. Usage notes: v When a redistribution operation is done, a message file is written to: The /sqllib/redist directory on UNIX based systems, using the following format for subdirectories and file name: database-name.database-partition-group-name.timestamp. The \sqllib\redist\ directory on Windows operating systems, using the following format for subdirectories and file name: database-name\first-eight-characters-of-the-database-partition-group-name\date\time. v The time stamp value is the time when the command was issued.
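The distfile constraints above (4 096 weights whose sum fits in an unsigned 32-bit integer) can be checked before running REDISTRIBUTE. The following Python sketch is illustrative only and not part of DB2; the function name is mine, and accepting zero-valued weights is an assumption based on the weight-0 example above.

```python
# Hypothetical helper (not part of DB2): sanity-check distfile contents
# before using REDISTRIBUTE ... USING DISTFILE. The file must supply
# 4096 integer weights whose sum does not exceed 4294967295.

MAX_HASH_PARTITIONS = 4096
MAX_WEIGHT_SUM = 4_294_967_295  # largest unsigned 32-bit value


def validate_distfile(lines):
    """Parse and validate distfile lines; return the list of weights."""
    weights = [int(tok) for tok in lines if tok.strip()]
    if len(weights) != MAX_HASH_PARTITIONS:
        raise ValueError(f"expected {MAX_HASH_PARTITIONS} weights, got {len(weights)}")
    if any(w < 0 for w in weights):
        raise ValueError("weights must not be negative")
    if sum(weights) > MAX_WEIGHT_SUM:
        raise ValueError("sum of weights exceeds 4294967295")
    return weights
```

A file that fails this check would also be rejected by the command itself; checking up front simply avoids starting a redistribution that cannot proceed.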
REFRESH LDAP
Refreshes the cache on a local machine with updated information when the information in Lightweight Directory Access Protocol (LDAP) has been changed. Authorization: None Required connection: None Command syntax:
REFRESH LDAP CLI CFG DB DIR NODE DIR
Command parameters: CLI CFG Specifies that the CLI configuration is to be refreshed. This parameter is not supported on AIX or the Solaris operating system. DB DIR Specifies that the database directory is to be refreshed. NODE DIR Specifies that the node directory is to be refreshed. Usage notes: If the object in LDAP is removed during refresh, the corresponding LDAP entry on the local machine is also removed. If the information in LDAP is changed, the corresponding LDAP entry is modified accordingly. If the DB2CLI.INI file is manually updated, the REFRESH LDAP CLI CFG command must be run to update the cache for the current user. The REFRESH LDAP DB DIR and REFRESH LDAP NODE DIR commands remove the LDAP database or node entries found in the local database or node directories. The database or node entries will be added to the local database or node directories again when the user connects to a database or attaches to an instance found in LDAP, and DB2LDAPCACHE is either not set or set to YES. Related tasks: v Refreshing LDAP entries in local database and node directories in Administration Guide: Implementation
REGISTER
Registers the DB2 server in the network directory server. Authorization: None Required connection: None Command syntax:
REGISTER [DB2 SERVER] [IN] [ADMIN] LDAP {NODE | AS} nodename
   PROTOCOL {TCPIP  HOSTNAME hostname SVCENAME svcename [SECURITY SOCKS]
           | TCPIP4 HOSTNAME hostname SVCENAME svcename [SECURITY SOCKS]
           | TCPIP6 HOSTNAME hostname SVCENAME svcename
           | NPIPE}
   [REMOTE computer INSTANCE instance]
   [NODETYPE {SERVER | MPP | DCS}]
   [OSTYPE ostype]
   [WITH "comments"]
Command parameters: IN Specifies the network directory server on which to register the DB2 server. The valid value is: LDAP for an LDAP (Lightweight Directory Access Protocol) directory server.
ADMIN Specifies that an administration server node is to be registered. NODE/AS nodename Specify a short name to represent the DB2 server in LDAP. A node entry will be cataloged in LDAP using this node name. The client can attach to the server using this node name. The protocol associated with this LDAP node entry is specified through the PROTOCOL parameter. PROTOCOL Specifies the protocol type associated with the LDAP node entry. Since the database server can support more than one protocol type, this value specifies the protocol type used by the client applications. The DB2 server must be registered once per protocol. Valid values are: TCPIP, TCPIP4, TCPIP6, and NPIPE. Specify NPIPE to use Windows Named Pipes. NPIPE is only supported on Windows operating systems.
HOSTNAME hostname Specifies the TCP/IP host name (or IP address). The IP address can be an IPv4 or IPv6 address when using protocol TCPIP. The IP address must be an IPv4 address when using protocol TCPIP4, and must be an IPv6 address when using protocol TCPIP6. SVCENAME svcename Specifies the TCP/IP service name or port number. SECURITY SOCKS Specifies that TCP/IP SOCKS is to be used. This parameter is only supported with IPv4. When protocol TCPIP is specified, the underlying protocol used will be IPv4. REMOTE computer Specifies the computer name of the machine on which the DB2 server resides. Specify this parameter only if registering a remote DB2 server in LDAP. The value must be the same as the value specified when adding the server machine to LDAP. For Windows operating systems, this is the computer name. For UNIX based systems, this is the TCP/IP host name. INSTANCE instance Specifies the instance name of the DB2 server. The instance name must be specified for a remote instance (that is, when a value for the REMOTE parameter has been specified). NODETYPE Specifies the node type for the database server. Valid values are: SERVER Specify the SERVER node type for a DB2 Enterprise Server Edition. This is the default. MPP Specify the MPP node type for a DB2 Enterprise Server Edition Extended (partitioned database) server. DCS Specify the DCS node type when registering a host database server.
OSTYPE ostype Specifies the operating system type of the server machine. Valid values are: AIX, NT, HPUX, SUN, MVS, OS400, VM, VSE and LINUX. If an operating system type is not specified, the local operating system type will be used for a local server and no operating system type will be used for a remote server. WITH comments Describes the DB2 server. Any comment that helps to describe the server registered in the network directory can be entered. Maximum length is 30 characters. A carriage return or a line feed character is not permitted. The comment text must be enclosed by double quotation marks. Usage notes: Register the DB2 server once for each protocol that the server supports. The REGISTER command should be issued once for each DB2 server instance to publish the server in the directory server. If the communication parameter fields are reconfigured, or the server network address changes, update the DB2 server on the network directory server.
To update the DB2 server in LDAP, use the UPDATE LDAP NODE command after the changes have been made. If any protocol configuration parameter is specified when registering a DB2 server locally, it will override the value specified in the database manager configuration file. If the REGISTER command is used to register a local DB2 instance in LDAP, and one or both of NODETYPE and OSTYPE are specified, they will be replaced with the values retrieved from the local system. If the REGISTER command is used to register a remote DB2 instance in LDAP, and one or both of NODETYPE and OSTYPE are not specified, the default value of SERVER and Unknown will be used, respectively. If the REGISTER command is used to register a remote DB2 server in LDAP, the computer name and the instance name of the remote server must be specified along with the communication protocol for the remote server. When registering a host database server, a value of DCS must be specified for the NODETYPE parameter. Related concepts: v Lightweight Directory Access Protocol (LDAP) overview in Administration Guide: Implementation Related tasks: v Registration of DB2 servers after installation in Administration Guide: Implementation Related reference: v db2LdapRegister API - Register the DB2 server on the LDAP server in Administrative API Reference v DEREGISTER on page 415 v UPDATE LDAP NODE on page 780
REGISTER XMLSCHEMA
Registers an XML schema with the XML schema repository (XSR). Authorization: One of the following: v SYSADM or DBADM v IMPLICIT_SCHEMA database authority if the SQL schema does not exist v CREATEIN privilege if the SQL schema exists Required connection: Database Command syntax:
REGISTER XMLSCHEMA schema-URI FROM content-URI
WITH properties-URI
AS relational-identifier
[xml-document-subclause]

xml-document-subclause:
   ADD document-URI FROM content-URI [WITH properties-URI] ...
   [COMPLETE [WITH schema-properties-URI] [ENABLE DECOMPOSITION]]
Description: schema-URI Specifies the URI, as referenced by XML instance documents, of the XML schema being registered. FROM content-URI Specifies the URI where the XML schema document is located. Only a local file specified by a file scheme URI is supported. WITH properties-URI Specifies the URI of a properties document for the XML schema. Only a local file specified by a file scheme URI is supported. AS relational-identifier Specifies a name that can be used to refer to the XML schema being registered. The relational name can be specified as a two-part SQL identifier, consisting of the SQL schema and the XML schema name, having the following format: SQLschema.name. The default relational schema, as
defined in the CURRENT SCHEMA special register, is used if no schema is specified. If no name is provided, a unique value is generated. COMPLETE Indicates that there are no more XML schema documents to be added. If specified, the schema is validated and marked as usable if no errors are found. WITH schema-properties-URI Specifies the URI of a properties document for the XML schema. Only a local file specified by a file scheme URI is supported. ENABLE DECOMPOSITION Specifies that this schema is to be used for decomposing XML documents. ADD document-URI Specifies the URI of an XML schema document to be added to this schema, as the document would be referenced from another XML document. FROM content-URI Specifies the URI where the XML schema document is located. Only a local file specified by a file scheme URI is supported. WITH properties-URI Specifies the URI of a properties document for the XML schema. Only a local file specified by a file scheme URI is supported. Example:
REGISTER XMLSCHEMA http://myPOschema/PO.xsd FROM file:///c:/TEMP/PO.xsd WITH file:///c:/TEMP/schemaProp.xml AS user1.POschema
Usage notes: v Before an XML schema document can be referenced and be available for validation and annotation, it must first be registered with the XSR. This command performs the first step of the XML schema registration process, by registering the primary XML schema document. The final step of the XML schema registration process requires that the COMPLETE XMLSCHEMA command run successfully for the XML schema. Alternatively, if there are no other XML schema documents to be included, issue the REGISTER XMLSCHEMA command with the COMPLETE keyword to complete registration in one step. v When registering an XML schema in the database, a larger application heap (APPLHEAPSZ) may be required depending on the size of the XML schema. The recommended size is 1024 but larger schemas will require additional memory. Related reference: v ADD XMLSCHEMA DOCUMENT on page 339 v COMPLETE XMLSCHEMA on page 394
REGISTER XSROBJECT
Registers an XML object in the database catalogs. Supported objects are DTDs and external entities. Authorization: One of the following: v SYSADM or DBADM v IMPLICIT_SCHEMA database authority if the SQL schema does not exist v CREATEIN privilege if the SQL schema exists Required connection: Database Command syntax:
REGISTER XSROBJECT system-ID [PUBLIC public-id] FROM content-URI
   [AS relational-identifier] {DTD | EXTERNAL ENTITY}
Command parameters: system-id Specifies the system ID that is specified in the XML object declaration. PUBLIC public-id Specifies an optional PUBLIC ID in the XML object declaration. FROM content-URI Specifies the URI where the content of an XML schema document is located. Only a local file specified by a file scheme URI is supported. AS relational-identifier Specifies a name that can be used to refer to the XML object being registered. The relational name can be specified as a two-part SQL identifier consisting of the relational schema and name separated by a period, for example JOHNDOE.EMPLOYEEDTD. If no relational schema is specified, the default relational schema defined in the special register CURRENT SCHEMA is used. If no name is specified, one is generated automatically. DTD Specifies that the object being registered is a Document Type Definition (DTD).
EXTERNAL ENTITY Specifies that the object being registered is an external entity. Examples: 1. Given this sample XML document which references an external entity:
<?xml version="1.0" standalone="no" ?> <!DOCTYPE copyright [ <!ELEMENT copyright (#PCDATA)> <!ENTITY c SYSTEM "http://www.xmlwriter.net/copyright.xml"> ]> <copyright>&c;</copyright>
Before this document can be successfully inserted into an XML column, the external entity needs to be registered. The following command registers an entity where the entity content is stored locally in C:\TEMP:
REGISTER XSROBJECT http://www.xmlwriter.net/copyright.xml FROM c:\temp\copyright.xml EXTERNAL ENTITY
2. Before this document can be successfully inserted into an XML column, the DTD needs to be registered. The following command registers a DTD where the DTD definition is stored locally in C:\TEMP and the relational identifier to be associated with the DTD is TEST.SUBJECTS:
REGISTER XSROBJECT http://www.xmlwriter.net/subjects.dtd FROM file:///c:/temp/subjects.dtd AS TEST.SUBJECTS DTD
3. Given this sample XML document which references a public external entity:
<?xml version="1.0" standalone="no" ?> <!DOCTYPE copyright [ <!ELEMENT copyright (#PCDATA)> <!ENTITY c PUBLIC "-//W3C//TEXT copyright//EN" "http://www.w3.org/xmlspec/copyright.xml"> ]> <copyright>&c;</copyright>
Before this document can be successfully inserted into an XML column, the public external entity needs to be registered. The following command registers an entity where the entity content is stored locally in C:\TEMP:
REGISTER XSROBJECT http://www.w3.org/xmlspec/copyright.xml PUBLIC -//W3C//TEXT copyright//EN FROM file:///c:/temp/copyright.xml EXTERNAL ENTITY
Related concepts: v XSR object registration in XML Guide v XSR objects in XML Guide Related reference: v REGISTER XMLSCHEMA on page 640
Chapter 3. CLP Commands
REORG INDEXES/TABLE
Reorganizes an index or a table. You can reorganize all indexes defined on a table by rebuilding the index data into unfragmented, physically contiguous pages. Alternatively, you have the option of reorganizing specific indexes on a range partitioned table. If you specify the CLEANUP ONLY option of the index clause, cleanup is performed without rebuilding the indexes. This command cannot be used against indexes on declared temporary tables (SQLSTATE 42995). The table option reorganizes a table by reconstructing the rows to eliminate fragmented data, and by compacting information. Scope: This command affects all database partitions in the database partition group. Authorization: One of the following: v sysadm v sysctrl v sysmaint v dbadm v CONTROL privilege on the table. Required connection: Database Command syntax:
REORG TABLE table-name [table-clause]
REORG INDEXES ALL FOR TABLE table-name [index-clause]
REORG INDEX index-name [FOR TABLE table-name] [index-clause]

table-clause:
   [INDEX index-name]
   [ALLOW NO ACCESS | ALLOW READ ACCESS]
   [USE tbspace-name] [INDEXSCAN] [LONGLOBDATA [USE longtbspace-name]]
   [KEEPDICTIONARY | RESETDICTIONARY]
 or
   [INDEX index-name] INPLACE
      { [ALLOW READ ACCESS | ALLOW WRITE ACCESS] [NOTRUNCATE TABLE] [START | RESUME]
      | STOP | PAUSE }

index-clause:
   [ALLOW NO ACCESS | ALLOW READ ACCESS | ALLOW WRITE ACCESS]
   [CLEANUP ONLY [ALL | PAGES]] [CONVERT]
Command parameters: INDEXES ALL FOR TABLE table-name Specifies the table whose indexes are to be reorganized. The table can be in a local or a remote database. INDEX index-name Specifies an individual index to be reorganized on a partitioned table. Reorganization of individual indexes is only supported for non-partitioned indexes on a partitioned table. This parameter is not supported for block indexes. FOR TABLE table-name Specifies the table on which the individual index being reorganized is defined. This parameter is optional, given that index names are unique across the database. ALLOW NO ACCESS Specifies that no other users can access the table while the indexes are being reorganized. ALLOW READ ACCESS Specifies that other users can have read-only access to the table while the indexes are being reorganized. This access level is not supported for REORG INDEXES of a partitioned table unless the CLEANUP ONLY option is specified. ALLOW WRITE ACCESS Specifies that other users can read from and write to the table while the indexes are being reorganized. This access level is not supported for multi-dimensionally clustered (MDC) tables, partitioned tables, extended indexes, or tables containing a column with the XML data type unless the CLEANUP ONLY option is specified. When no ACCESS mode is specified, one will be chosen for you in the following way:
Table 10. Default table access chosen based on the command, table type, and additional parameters specified for the index clause:

Command         Table type             Index clause parameters   Default access mode
REORG INDEXES   non-partitioned table  any                       ALLOW READ ACCESS
REORG INDEXES   partitioned table      none specified            ALLOW NO ACCESS
REORG INDEXES   partitioned table      CLEANUP ONLY specified    ALLOW READ ACCESS
REORG INDEX     partitioned table      any                       ALLOW READ ACCESS
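The defaults in Table 10 can be summarized in a few lines of logic. The Python sketch below is illustrative only (the function name and argument shapes are mine, not DB2 API):

```python
# Sketch, not DB2 code: the default index-reorganization access mode from
# Table 10, keyed by the command form, whether the table is partitioned,
# and whether CLEANUP ONLY was given in the index clause.

def default_index_reorg_access(command, partitioned, cleanup_only):
    # The only row of Table 10 that defaults to ALLOW NO ACCESS is
    # REORG INDEXES on a partitioned table with no CLEANUP ONLY option.
    if command == "REORG INDEXES" and partitioned and not cleanup_only:
        return "ALLOW NO ACCESS"
    # Every other row defaults to read access.
    return "ALLOW READ ACCESS"
```

In other words, only a full (non-cleanup) index reorganization of a partitioned table defaults to offline access.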
CLEANUP ONLY When CLEANUP ONLY is requested, a cleanup rather than a full reorganization will be done. The indexes will not be rebuilt and any pages freed up will be available for reuse by indexes defined on this table only.
ALL Specifies that indexes should be cleaned up by removing committed pseudo deleted keys and committed pseudo empty pages. The CLEANUP ONLY ALL option will free committed pseudo empty pages, and will also remove committed pseudo deleted keys from pages that are not pseudo empty. In addition, this option will try to merge adjacent leaf pages if doing so results in a merged leaf page that has at least PCTFREE free space, where PCTFREE is the percent free space defined for the index at index creation time. The default PCTFREE is ten percent. If two pages can be merged, one of the pages will be freed. The number of pseudo deleted keys in an index, excluding those on pseudo empty pages, can be determined by running runstats and then selecting NUMRIDS DELETED from SYSCAT.INDEXES. The ALL option will clean the NUMRIDS DELETED and the NUM EMPTY LEAFS if they are determined to be committed.
PAGES Specifies that committed pseudo empty pages should be removed from the index tree. A committed pseudo empty page is one where all the keys on the page are marked as deleted and all these deletions are known to be committed. This option will not clean up pseudo deleted keys on pages that are not pseudo empty; because it checks only the pseudo empty leaf pages, it is considerably faster than the ALL option in most cases. The number of pseudo empty pages in an index can be determined by running runstats and looking at the NUM EMPTY LEAFS column in SYSCAT.INDEXES. The PAGES option will clean the NUM EMPTY LEAFS if they are determined to be committed.
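The two page-level decisions described for CLEANUP ONLY can be modelled compactly. This Python sketch is an illustration of the rules as stated above, not DB2 internals; the field and function names are assumptions:

```python
# Illustrative model only (not DB2 internals) of the CLEANUP ONLY rules:
# a committed pseudo empty page can be freed, and two adjacent leaf pages
# may be merged only if the merged page would keep at least PCTFREE
# percent free space (default ten percent).

def is_committed_pseudo_empty(page):
    """All keys pseudo deleted, and all those deletions committed."""
    return page["all_keys_pseudo_deleted"] and page["deletions_committed"]


def can_merge_leaves(left_used, right_used, page_size, pctfree=10):
    """Merge is allowed only if the merged page keeps >= PCTFREE free space."""
    free_after_merge = page_size - (left_used + right_used)
    return free_after_merge >= page_size * pctfree / 100
```

When `can_merge_leaves` holds for two adjacent pages, one of the pair can be freed, which is exactly the space-reclaim effect the ALL option aims for.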
CONVERT If you are not sure whether the table you are operating on has a type-1 or type-2 index, but want type-2 indexes, you can use the CONVERT option. If the index is type 1, this option will convert it into type 2. If the index is already type 2, this option has no effect. All indexes created by DB2 prior to Version 8 are type-1 indexes. All indexes created by Version 8 are type-2 indexes, except when
you create an index on a table that already has a type-1 index. In this case the new index will also be of type 1. Using the INSPECT command to determine the index type can be slow. CONVERT allows you to ensure that the new index will be type 2 without your needing to determine its original type. Use the ALLOW READ ACCESS or ALLOW WRITE ACCESS option to allow other transactions either read-only or read-write access to the table while the indexes are being reorganized. While ALLOW READ ACCESS and ALLOW WRITE ACCESS allow access to the table, during the period in which the reorganized copies of the indexes are made available, no access to the table is allowed. TABLE table-name Specifies the table to reorganize. The table can be in a local or a remote database. The name or alias in the form schema.table-name can be used. The schema is the user name under which the table was created. If you omit the schema name, the default schema is assumed. For typed tables, the specified table name must be the name of the hierarchy's root table. You cannot specify an index for the reorganization of a multidimensional clustering (MDC) table. In place reorganization of tables cannot be used for MDC tables. INDEX index-name Specifies the index to use when reorganizing the table. If you do not specify the fully qualified name in the form schema.index-name, the default schema is assumed. The schema is the user name under which the index was created. The database manager uses the index to physically reorder the records in the table it is reorganizing. For an in place table reorganization, if a clustering index is defined on the table and an index is specified, it must be the clustering index. If the in place option is not specified, any index specified will be used. If you do not specify the name of an index, the records are reorganized without regard to order.
If the table has a clustering index defined, however, and no index is specified, then the clustering index is used to cluster the table. You cannot specify an index if you are reorganizing an MDC table. ALLOW NO ACCESS Specifies that no other users can access the table while the table is being reorganized. When reorganizing a partitioned table, this is the default. Reorganization of a partitioned table occurs offline. ALLOW READ ACCESS Allow only read access to the table during reorganization. This is the default for a non-partitioned table. INPLACE Reorganizes the table while permitting user access. In place table reorganization is allowed only on non-partitioned and non-MDC tables with type-2 indexes, but without extended indexes and with no indexes defined over XML columns in the table. In place table reorganization takes place asynchronously, and might not be effective immediately.
ALLOW READ ACCESS Allow only read access to the table during reorganization. ALLOW WRITE ACCESS Allow write access to the table during reorganization. This is the default behavior. NOTRUNCATE TABLE Do not truncate the table after in place reorganization. During truncation, the table is S-locked. START Start the in place REORG processing. Because this is the default, this keyword is optional. STOP Stop the in place REORG processing at its current point. PAUSE Suspend or pause in place REORG for the time being. RESUME Continue or resume a previously paused in place table reorganization. When an online reorganization is resumed and you want the same options as when the reorganization was paused, you must specify those options again while resuming. USE tbspace-name Specifies the name of a system temporary table space in which to store a temporary copy of the table being reorganized. If you do not provide a table space name, the database manager stores a working copy of the table in the table spaces that contain the table being reorganized. For an 8KB, 16KB, or 32KB table object, if the page size of the system temporary table space that you specify does not match the page size of the table spaces in which the table data resides, the DB2 database product will try to find a temporary table space of the correct page size for the LONG/LOB objects. Such a table space must exist for the reorganization to succeed. When you have two temporary table spaces of the same page size, and you specify one of them in the USE clause, they will be used in a round-robin fashion if there is an index in the table being reorganized. For example, say you have two table spaces, tempspace1 and tempspace2, both of the same page size, and you specify tempspace1 in the REORG command with the USE option. The first time you perform REORG, tempspace1 is used. The second time, tempspace2 is used. The third time, tempspace1 is used, and so on. To avoid this, you should drop one of the temporary table spaces.
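The round-robin behaviour described above amounts to cycling through the eligible same-page-size temporary table spaces. The Python sketch below illustrates that assumed selection pattern only; it is not DB2 source, and the function name is mine:

```python
# Sketch of the round-robin behaviour described above (assumed semantics,
# not DB2 source): with two system temporary table spaces of the same page
# size, successive REORG operations that need temporary storage alternate
# between them, even though only one was named in the USE clause.

from itertools import cycle


def temp_space_order(candidates, n_reorgs):
    """Return which temporary table space each successive REORG would use."""
    chooser = cycle(candidates)
    return [next(chooser) for _ in range(n_reorgs)]
```

This is why the text recommends dropping one of the two table spaces if you need the named table space to be used every time.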
For partitioned tables, the table space is used as temporary storage for the reorganization of all the data partitions in the table. Reorganization of a partitioned table reorganizes a single data partition at a time. The amount of space required is equal to the largest data partition in the table, and not the entire table. If you do not supply a table space name for a partitioned table, the table space where each data partition is located is used for
temporary storage of that data partition. There must be enough free space in each data partition's table space to hold a copy of the data partition. INDEXSCAN For a clustering REORG, an index scan will be used to re-order table records. Reorganize table rows by accessing the table through an index. The default method is to scan the table and sort the result to reorganize the table, using temporary table spaces as necessary. Even though the index keys are in sort order, scanning and sorting is typically faster than fetching rows by first reading the row identifier from an index. LONGLOBDATA Long field and LOB data are to be reorganized. This is not required even if the table contains long or LOB columns. The default is to avoid reorganizing these objects because it is time consuming and does not improve clustering. USE longtbspace-name This is an optional parameter, which can be used to specify the name of a temporary table space to be used for rebuilding long data. If no temporary table space is specified for either the table object or for the long objects, the objects will be constructed in the table space in which they currently reside. If a temporary table space is specified for the table but this parameter is not specified, then the table space used for the base reorganization data will be used, unless the page sizes differ. In this situation, the DB2 database system will attempt to choose a temporary container of the appropriate page size to create the long objects in. If USE longtbspace-name is specified, USE tbspace-name must also be specified. If it is not, the longtbspace-name argument is ignored. KEEPDICTIONARY If the COMPRESS attribute for the table is YES and the table has a compression dictionary then no new dictionary is built. All the rows processed during reorganization are subject to compression using the existing dictionary.
If the COMPRESS attribute for the table is NO and the table has a compression dictionary, then reorg processing preserves the dictionary and all the rows in the newly reorganized table are in non-compressed format. It is not possible to compress long, LOB, index, or XML objects.
Table 11. REORG KEEPDICTIONARY

  Compress   Dictionary Exists   Result; outcome
  Y          Y                   Preserve dictionary; rows compressed
  Y          N                   Build dictionary; rows compressed
  N          Y                   Preserve dictionary; all rows uncompressed
  N          N                   No effect; all rows uncompressed
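Purely as an illustration (this helper is not part of DB2; the function name is invented), the KEEPDICTIONARY decision table above can be expressed as a small lookup keyed on the table's COMPRESS attribute and whether a compression dictionary exists:

```python
# Hypothetical helper mirroring Table 11 (REORG ... KEEPDICTIONARY).
# Keys are (COMPRESS attribute, dictionary exists); values are the outcomes.
KEEPDICTIONARY_OUTCOME = {
    ("Y", "Y"): "Preserve dictionary; rows compressed",
    ("Y", "N"): "Build dictionary; rows compressed",
    ("N", "Y"): "Preserve dictionary; all rows uncompressed",
    ("N", "N"): "No effect; all rows uncompressed",
}

def keepdictionary_outcome(compress: str, dictionary_exists: str) -> str:
    """Return the Table 11 outcome for the given table state."""
    return KEEPDICTIONARY_OUTCOME[(compress, dictionary_exists)]

print(keepdictionary_outcome("Y", "N"))  # Build dictionary; rows compressed
```
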
For any reinitialization or truncation of a table (such as for a replace operation), if the COMPRESS attribute for the table is NO, the dictionary is discarded if one exists. Conversely, if a dictionary exists and the COMPRESS attribute for the table is YES, a truncation saves the dictionary rather than discarding it. The dictionary is logged in its entirety for recovery purposes and for future support with data capture changes (that is, replication).

RESETDICTIONARY
If the COMPRESS attribute for the table is YES, a new row compression dictionary is built. All the rows processed during reorganization are subject to compression using this new dictionary. This dictionary replaces any previous dictionary. If the COMPRESS attribute for the table is NO and the table does have an existing compression dictionary, reorg processing removes the dictionary and all rows in the newly reorganized table are in non-compressed format. It is not possible to compress long, LOB, index, or XML objects.
Table 12. REORG RESETDICTIONARY

  Compress   Dictionary Exists   Result; outcome
  Y          Y                   Build new dictionary*; rows compressed
  Y          N                   Build new dictionary; rows compressed
  N          Y                   Remove dictionary; all rows uncompressed
  N          N                   No effect; all rows uncompressed
* If a dictionary exists and the compression attribute is enabled, but there is currently insufficient data in the table to build a new dictionary, the RESETDICTIONARY operation keeps the existing dictionary. Rows that are smaller than the internal minimum record length, and rows that do not show a saving in record length when an attempt is made to compress them, are considered insufficient in this case.

Examples:

To reorganize a table to reclaim space and use the temporary table space mytemp1, enter the following command:
db2 reorg table homer.employee use mytemp1
To reorganize tables in a partition group consisting of nodes 1, 2, 3, and 4 of a four-node system, you can enter either of the following commands:
db2 reorg table employee index empid on dbpartitionnum (1,3,4)
db2 reorg table homer.employee index homer.empid on all dbpartitionnums except dbpartitionnum (2)
To clean up the pseudo deleted keys and pseudo empty pages in all the indexes on the EMPLOYEE table while allowing other transactions to read and update the table, enter:
db2 reorg indexes all for table homer.employee allow write access cleanup only
To clean up the pseudo empty pages in all the indexes on the EMPLOYEE table while allowing other transactions to read and update the table, enter:
db2 reorg indexes all for table homer.employee allow write access cleanup only pages
To reorganize the EMPLOYEE table using the system temporary table space TEMPSPACE1 as a work area, enter:
db2 reorg table homer.employee use tempspace1
To start, pause, and resume an in-place reorg of the EMPLOYEE table with the default schema HOMER, which is specified explicitly in previous examples, enter the following commands:
db2 reorg table employee index empid inplace start
db2 reorg table employee inplace pause
db2 reorg table homer.employee inplace allow read access notruncate table resume
The command to resume the reorg contains additional keywords to specify read access only and to skip the truncation step, which share-locks the table.

Usage notes:

Restrictions:
v The REORG utility does not support the use of nicknames.
v The REORG TABLE command is not supported for declared temporary tables.
v The REORG TABLE command cannot be used on views.
v Reorganization of a table is not compatible with range-clustered tables, because the range area of the table always remains clustered.
v REORG TABLE cannot be used on a partitioned table in a DMS table space while an online backup of ANY table space in which the table resides, including LOBs and indexes, is being performed.
v REORG TABLE cannot use an index that is based on an index extension.
v If a table is in reorg pending state, an inplace reorg is not allowed on the table.
v For partitioned tables:
  - REORG is supported at the table level. Reorganization of an individual data partition can be achieved by detaching the data partition, reorganizing the resulting non-partitioned table, and then re-attaching the data partition.
  - The table must have an ACCESS_MODE in SYSCAT.TABLES of Full Access.
  - Reorganization skips data partitions that are in a restricted state due to an attach or detach operation.
  - If an error occurs, the non-partitioned indexes of the table are marked bad, and are rebuilt on the next access to the table.
  - If a reorganization operation fails, some data partitions may be in a reorganized state and others may not. When the REORG TABLE command is reissued, all the data partitions will be reorganized regardless of each data partition's reorganization state.
When reorganizing indexes on partitioned tables, it is recommended that you perform a runstats operation after asynchronous index cleanup is complete, in order to generate accurate index statistics in the presence of detached data partitions. To determine whether or not there are detached data partitions in the table, you can check the status field in SYSDATAPARTITIONS and look for the value I (index cleanup) or D (detached with dependent MQT).

Information about the current progress of table reorganization is written to the history file for database activity. The history file contains a record for each reorganization event. To view this file, execute the LIST HISTORY command for the database that contains the table you are reorganizing. You can also use table snapshots to monitor the progress of table reorganization. Table reorganization monitoring data is recorded regardless of the Database Monitor Table Switch setting. If an error occurs, an SQLCA dump is written to the history file. For an in-place table reorganization, the status is recorded as PAUSED.

When an indexed table has been modified many times, the data in the indexes might become fragmented. If the table is clustered with respect to an index, the table and index can get out of cluster order. Both of these factors can adversely affect the performance of scans using the index, and can impact the effectiveness of index page prefetching. REORG INDEX or REORG INDEXES can be used to reorganize one or all of the indexes on a table. Index reorganization removes any fragmentation and restores physical clustering to the leaf pages. Use REORGCHK to help determine whether an index needs reorganizing. Be sure to complete all database operations and release all locks before invoking index reorganization. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.
Indexes might not be optimal following an in-place REORG TABLE operation, because only the data object, and not the indexes, is reorganized. It is recommended that you perform a REORG INDEXES after an in-place REORG TABLE operation. Indexes are completely rebuilt during the last phase of a classic REORG TABLE, however, so reorganizing indexes is not necessary in that case.

Tables that have been modified so many times that data is fragmented and access performance is noticeably slow are candidates for the REORG TABLE command. You should also invoke this utility after altering the inline length of a structured type column in order to benefit from the altered inline length. Use REORGCHK to determine whether a table needs reorganizing. Be sure to complete all database operations and release all locks before invoking REORG TABLE. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK. After reorganizing a table, use RUNSTATS to update the table statistics, and REBIND to rebind the packages that use this table. The reorganize utility implicitly closes all cursors.

If the table contains mixed row formats because table value compression has been activated or deactivated, an offline table reorganization can convert all the existing rows into the target row format.

If the table is distributed across several database partitions, and the table or index reorganization fails on any of the affected database partitions, only the failing database partitions will have the table or index reorganization rolled back.
If the reorganization is not successful, temporary files should not be deleted. The database manager uses these files to recover the database.

If the name of an index is specified, the database manager reorganizes the data according to the order in the index. To maximize performance, specify an index that is often used in SQL queries. If the name of an index is not specified, and if a clustering index exists, the data will be ordered according to the clustering index.

The PCTFREE value of a table determines the amount of free space designated per page. If the value has not been set, the utility fills up as much space as possible on each page.

To complete a table space roll-forward recovery following a table reorganization, both regular and large table spaces must be enabled for roll-forward recovery.

If the table contains LOB columns that do not use the COMPACT option, the LOB DATA storage object can be significantly larger following table reorganization. This can be a result of the order in which the rows were reorganized, and of the types of table spaces used (SMS or DMS).

Related concepts:
v Data row compression in Administration Guide: Implementation
v Table reorganization in Performance Guide

Related reference:
v db2Reorg API - Reorganize an index or a table in Administrative API Reference
v GET SNAPSHOT on page 487
v REBIND on page 619
v REORGCHK on page 654
v REORG INDEXES/TABLE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
v SNAPTAB_REORG administrative view and SNAP_GET_TAB_REORG table function - Retrieve table reorganization snapshot information in Administrative SQL Routines and Views
v REORGCHK_TB_STATS procedure - Retrieve table statistics for reorganization evaluation in Administrative SQL Routines and Views
v REORGCHK_IX_STATS procedure - Retrieve index statistics for reorganization evaluation in Administrative SQL Routines and Views
REORGCHK
Calculates statistics on the database to determine if tables or indexes, or both, need to be reorganized or cleaned up.

Scope:

This command can be issued from any database partition in the db2nodes.cfg file. It can be used to update table and index statistics in the catalogs.

Authorization:

One of the following:
v sysadm or dbadm authority
v CONTROL privilege on the table

Required connection:

Database

Command syntax:
REORGCHK [ UPDATE STATISTICS | CURRENT STATISTICS ]
         [ ON SCHEMA schema-name |
           ON TABLE { USER | SYSTEM | ALL | table-name } ]
Command parameters:

UPDATE STATISTICS
Calls the RUNSTATS routine to update table and index statistics, and then uses the updated statistics to determine if table or index reorganization is required. If a portion of the table resides on the database partition where REORGCHK has been issued, the utility executes on this database partition. If the table does not exist on this database partition, the request is sent to the first database partition in the database partition group that holds a portion of the table. RUNSTATS then executes on this database partition.

CURRENT STATISTICS
Uses the current table statistics to determine if table reorganization is required.

ON SCHEMA schema-name
Checks all the tables created under the specified schema.

ON TABLE USER
Checks the tables that are owned by the run-time authorization ID.

SYSTEM
Checks the system tables.

ALL
Checks all user and system tables.
table-name
Specifies the table to check. The fully qualified name or alias in the form schema.table-name must be used. The schema is the user name under which the table was created. If the table specified is a system catalog table, the schema is SYSIBM. For typed tables, the specified table name must be the name of the hierarchy's root table.

Examples:

Issue the following command against the SAMPLE database:
db2 reorgchk update statistics on table system
In the resulting output, the terms for the table statistics (formulas 1-3) mean:

CARD (CARDINALITY)
Number of rows in the base table.

OV (OVERFLOW)
Number of overflow rows.

NP (NPAGES)
Number of pages that contain data.

FP (FPAGES)
Total number of pages.
ACTBLK
Total number of active blocks for a multidimensional clustering (MDC) table. This field is only applicable to tables defined using the ORGANIZE BY clause. It indicates the number of blocks of the table that contain data.

TSIZE
Table size in bytes. Calculated as the product of the number of rows in the table (CARD) and the average row length. The average row length is computed as the sum of the average column lengths (AVGCOLLEN in SYSCOLUMNS) plus 10 bytes of row overhead. For long fields and LOBs, only the approximate length of the descriptor is used. The actual long field or LOB data is not counted in TSIZE.

TABLEPAGESIZE
Page size of the table space in which the table data resides.

NPARTITIONS
Number of partitions if this is a partitioned table; otherwise 1.

F1
Results of Formula 1.

F2
Results of Formula 2.

F3
Results of Formula 3. This formula indicates the amount of space that is wasted in a table, measured in terms of the number of empty pages and the number of pages that contain data. In multidimensional clustering (MDC) tables, the number of empty blocks and the number of blocks that contain data is measured.
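As a worked illustration of the TSIZE definition above (the helper function and all column lengths here are invented, not part of DB2), the table size is the row count multiplied by the average row length, where the average row length is the sum of the average column lengths plus 10 bytes of row overhead:

```python
def tsize(card, avg_col_lens):
    """Estimate table size in bytes: CARD * (sum of AVGCOLLEN values + 10 bytes row overhead)."""
    avg_row_len = sum(avg_col_lens) + 10
    return card * avg_row_len

# Hypothetical table: 1000 rows, three columns averaging 4, 10, and 8 bytes.
# Average row length = (4 + 10 + 8) + 10 = 32 bytes.
print(tsize(1000, [4, 10, 8]))  # 32000
```
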
REORG
Each hyphen (-) displayed in this column indicates that the calculated results were within the set bounds of the corresponding formula, and each asterisk (*) indicates that the calculated results exceeded the set bounds of its corresponding formula.
v - or * on the left side of the column corresponds to F1 (Formula 1)
v - or * in the middle of the column corresponds to F2 (Formula 2)
v - or * on the right side of the column corresponds to F3 (Formula 3)
Table reorganization is suggested when the results of the calculations exceed the bounds set by the formula. For example, --- indicates that, since the formula results of F1, F2, and F3 are within the set bounds of the formulas, no table reorganization is suggested. The notation *-* indicates that the results of F1 and F3 suggest table reorganization, even though F2 is still within its set bounds. The notation *-- indicates that F1 is the only formula exceeding its bounds.

The table name is truncated to 30 characters, and the > symbol in the thirty-first column represents the truncated portion of the table name. An * suffix to a table name indicates it is an MDC table. An * suffix to an index name indicates it is an MDC dimension index.

The terms for the index statistics (formulas 4-8) mean:

INDCARD (INDEX CARDINALITY)
Number of index entries in the index. This could be different from the table cardinality for some indexes. For example, for indexes on XML columns the index cardinality is likely greater than the table cardinality.

LEAF
Total number of index leaf pages (NLEAF).

ELEAF
Number of pseudo empty index leaf pages (NUM_EMPTY_LEAFS). A pseudo empty index leaf page is a page on which all the RIDs are marked as deleted, but have not been physically removed.

NDEL
Number of pseudo deleted RIDs (NUMRIDS_DELETED). A pseudo deleted RID is a RID that is marked deleted. This statistic reports pseudo deleted RIDs on leaf pages that are not pseudo empty. It does not include RIDs marked as deleted on leaf pages where all the RIDs are marked deleted.

KEYS
Number of unique index entries that are not marked deleted (FULLKEYCARD).

LEAF_RECSIZE
Record size of the index entry on a leaf page. This is the average size of the index entry excluding any overhead, and is calculated from the average column length of all columns participating in the index.

NLEAF_RECSIZE
Record size of the index entry on a non-leaf page. This is the average size of the index entry excluding any overhead, and is calculated from the average column length of all columns participating in the index except any INCLUDE columns.

LEAF_PAGE_OVERHEAD
Reserved space on the index leaf page for internal use.

NLEAF_PAGE_OVERHEAD
Reserved space on the index non-leaf page for internal use.

INDEXPAGESIZE
Page size of the table space in which the index resides, specified at the time of index or table creation. If not specified, INDEXPAGESIZE has the same value as TABLEPAGESIZE.

LVLS
Number of index levels (NLEVELS).
PCTFREE
Specifies the percentage of each index page to leave as free space, a value that is assigned when defining the index. Values can range from 0 to 99. The default value is 10.

LEAF_RECSIZE_OVERHEAD
Index record overhead on a leaf page. For indexes on tables in LARGE table spaces, the overhead is 11 for partitioned tables and 9 for other tables. For indexes on tables in REGULAR table spaces, these values are 9 for partitioned tables and 7 for others. The only exception to these rules is XML paths and XML regions indexes, where the overhead is always 9. This information is also available in the table below for easy reference.

NLEAF_RECSIZE_OVERHEAD
Index record overhead on a non-leaf page. For indexes on tables in LARGE table spaces, the overhead is 14 for partitioned tables and 12 for other tables. For indexes on tables in REGULAR table spaces, these values are 12 for partitioned tables and 10 for others. The only exception to these rules is XML paths and XML regions indexes, where the overhead is always 12. This information is also available in the table below for easy reference.

DUPKEYSIZE
Size of duplicate keys on index leaf pages. For indexes on tables in LARGE table spaces, the DUPKEYSIZE is 9 for partitioned tables and 7 for other tables. For indexes on tables in REGULAR table spaces, these values are 7 for partitioned tables and 5 for others. The only exception to these rules is XML paths and XML regions indexes, where the DUPKEYSIZE is always 7. This information is also available in the table below for easy reference.
Table 13. LEAF_RECSIZE_OVERHEAD, NLEAF_RECSIZE_OVERHEAD, and DUPKEYSIZE values are a function of index type, table partitioning, and table space type

Data in REGULAR table space:
  Regular table, XML paths or regions index:  LEAF_RECSIZE_OVERHEAD 9,  NLEAF_RECSIZE_OVERHEAD 12, DUPKEYSIZE 7
  Regular table, all other indexes:           LEAF_RECSIZE_OVERHEAD 7,  NLEAF_RECSIZE_OVERHEAD 10, DUPKEYSIZE 5
  Partitioned table, all indexes:             LEAF_RECSIZE_OVERHEAD 9,  NLEAF_RECSIZE_OVERHEAD 12, DUPKEYSIZE 7

Data in LARGE table space**:
  Regular table, XML paths or regions index:  LEAF_RECSIZE_OVERHEAD 9,  NLEAF_RECSIZE_OVERHEAD 12, DUPKEYSIZE 7
  Regular table, all other indexes:           LEAF_RECSIZE_OVERHEAD 9,  NLEAF_RECSIZE_OVERHEAD 12, DUPKEYSIZE 7
  Partitioned table, all indexes:             LEAF_RECSIZE_OVERHEAD 11, NLEAF_RECSIZE_OVERHEAD 14, DUPKEYSIZE 9
** For indexes on tables in large table spaces, the indexes will be assumed to have large RIDs. This may cause some of the formulas to give inaccurate results if the table space of the table was converted to large but the indexes have not yet been recreated or reorganized.

F4
Results of Formula 4.

F5
Results of Formula 5. The notation +++ indicates that the result exceeds 999, and is invalid. Rerun REORGCHK with the UPDATE STATISTICS option, or issue RUNSTATS, followed by the REORGCHK command. Note: This formula is not supported for indexes on XML columns.

F6
Results of Formula 6. The notation +++ indicates that the result exceeds 999, and might be invalid. Rerun REORGCHK with the UPDATE
STATISTICS option, or issue RUNSTATS, followed by the REORGCHK command. If the statistics are current and valid, you should reorganize. Note: This formula is not supported for indexes on XML columns.

F7
Results of Formula 7.

F8
Results of Formula 8.
REORG
Each hyphen (-) displayed in this column indicates that the calculated results were within the set bounds of the corresponding formula, and each asterisk (*) indicates that the calculated result exceeded the set bounds of its corresponding formula.
v - or * in the leftmost column corresponds to F4 (Formula 4)
v - or * in the second column from the left corresponds to F5 (Formula 5)
v - or * in the middle column corresponds to F6 (Formula 6)
v - or * in the second column from the right corresponds to F7 (Formula 7)
v - or * in the rightmost column corresponds to F8 (Formula 8)

Index reorganization advice is as follows:
v If the results of the calculations for Formulas 1, 2, and 3 do not exceed the bounds set by the formulas, and the results of the calculations for Formula 4, 5, or 6 do exceed the bounds set, then index reorganization is recommended.
v If only the results of the calculations for Formula 7 exceed the bounds set, but the results of Formulas 1, 2, 3, 4, 5, and 6 are within the set bounds, then cleanup of the indexes using the CLEANUP ONLY option of index reorganization is recommended.
v If the only calculation result to exceed the set bounds is that of Formula 8, then a cleanup of the pseudo empty pages of the indexes using the CLEANUP ONLY PAGES option of index reorganization is recommended.

On a partitioned table, the results for formulas 5 to 8 can be misleading if, when the statistics are collected, there are index keys in the non-partitioned index belonging to detached data partitions which require cleanup. When there are detached partitions on a partitioned table, such index keys will not be counted as part of the keys in the statistics, because they are invisible and no longer part of the table. They will eventually be removed from the index by asynchronous index cleanup. As a result, statistics collected before asynchronous index cleanup is run will be misleading.
If the REORGCHK command is issued before asynchronous index cleanup completes, it will likely generate a false alarm for index reorganization or index cleanup based on the inaccurate statistics. Once asynchronous index cleanup is run, all the index keys that still belong to detached data partitions which require cleanup will be removed, and this may eliminate the need for index reorganization.

For partitioned tables, you are encouraged to issue REORGCHK after asynchronous index cleanup has completed, in order to generate accurate index statistics in the presence of detached data partitions. To determine whether or not there are detached data partitions in the table, you can check the status field in the SYSDATAPARTITIONS table and look for the value I (index cleanup) or D (detached with dependent MQT).
Usage notes:

This command does not display declared temporary table statistical information.

This utility does not support the use of nicknames.

Unless you specify the CURRENT STATISTICS option, REORGCHK gathers statistics on all columns using the default options only. Specifically, column group statistics are not gathered, and if LIKE statistics were previously gathered, they are not gathered by REORGCHK. The statistics gathered depend on the kind of statistics currently stored in the catalog tables:
v If detailed index statistics are present in the catalog for any index, table statistics and detailed index statistics (without sampling) for all indexes are collected.
v If detailed index statistics are not detected, table statistics as well as regular index statistics are collected for every index.
v If distribution statistics are detected, distribution statistics are gathered on the table. If distribution statistics are gathered, the number of frequent values and quantiles are based on the database configuration parameter settings.

REORGCHK calculates statistics obtained from eight different formulas to determine if performance has deteriorated or can be improved by reorganizing a table or its indexes.

When a table uses less than or equal to (NPARTITIONS * 1 extent size) of pages, no table reorganization is recommended based on each formula. More specifically,
v For non-partitioned tables (NPARTITIONS = 1), the threshold is:
(FPAGES <= 1 extent size)
v In a multi-partitioned database, after the number of database partitions in a database partition group of the table is factored in, this threshold for not recommending table reorganization changes to:
FPAGES <= number of database partitions in a database partition group of the table * NPARTITIONS * 1 extent size
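Purely as an illustration of the threshold rule above (the function name and all figures are invented, not part of DB2), a table whose FPAGES does not exceed the product of the number of database partitions in its database partition group, NPARTITIONS, and one extent is never flagged for reorganization:

```python
def reorg_considered(fpages, db_partitions, npartitions, extent_size):
    """True if the table is large enough for the REORGCHK formulas to apply at all."""
    threshold = db_partitions * npartitions * extent_size
    return fpages > threshold

# Single-partition database, non-partitioned table, 32-page extents:
print(reorg_considered(fpages=20, db_partitions=1, npartitions=1, extent_size=32))   # False
print(reorg_considered(fpages=500, db_partitions=1, npartitions=1, extent_size=32))  # True
```
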
Long field or LOB data is not accounted for while calculating TSIZE.

REORGCHK uses the following formulas to analyze the physical location of rows and the size of the table:

v Formula F1:
100*OVERFLOW/CARD < 5
The total number of overflow rows in the table should be less than 5 percent of the total number of rows. Overflow rows can be created when rows are updated and the new rows contain more bytes than the old ones (VARCHAR fields), or when columns are added to existing tables. v Formula F2: For regular tables:
100*TSIZE / ((FPAGES-NPARTITIONS) * (TABLEPAGESIZE-68)) > 70
The table size in bytes (TSIZE) should be more than 70 percent of the total space allocated for the table. (There should be less than 30% free space.) The total space allocated for the table depends upon the page size of the table space in which the table resides (minus an overhead of 68 bytes). Because the last page allocated in the data object is not usually filled, 1 is subtracted from FPAGES for each partition (which is the same as FPAGES-NPARTITIONS).
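As a rough sketch of the two formulas above for a regular (non-MDC) table (the helper functions and every statistic plugged in below are invented, not DB2 output), F1 and F2 can be computed directly from the catalog statistics:

```python
def f1(overflow, card):
    """Formula F1: percentage of overflow rows (healthy when < 5)."""
    return 100 * overflow / card

def f2(tsize, fpages, npartitions, tablepagesize):
    """Formula F2 (regular tables): percentage of allocated space in use (healthy when > 70)."""
    return 100 * tsize / ((fpages - npartitions) * (tablepagesize - 68))

# Hypothetical statistics: 2 overflow rows out of 100; a 300,000-byte table
# on 101 pages of 4 KB in a non-partitioned table (NPARTITIONS = 1).
print(f1(overflow=2, card=100))  # 2.0 -> within bounds, no reorg suggested by F1
print(f2(tsize=300000, fpages=101, npartitions=1, tablepagesize=4096) > 70)  # True
```
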
For MDC tables:
100*TSIZE / ((ACTBLK-FULLKEYCARD) * EXTENTSIZE * (TABLEPAGESIZE-68)) > 70
FULLKEYCARD represents the cardinality of the composite dimension index for the MDC table. Extent size is the number of pages per block. The formula checks whether the table size in bytes is more than 70 percent of the remaining blocks for a table after subtracting the minimum required number of blocks.

v Formula F3:
100*NPAGES/FPAGES > 80
The number of pages that contain no rows at all should be less than 20 percent of the total number of pages. (Pages can become empty after rows are deleted.) As noted above, no table reorganization is recommended when (FPAGES <= NPARTITIONS * 1 extent size); in that case, F3 is not calculated. For non-partitioned tables, NPARTITIONS = 1. In a multi-partitioned database, this condition changes to FPAGES <= number of database partitions in a database partition group of the table * NPARTITIONS * 1 extent size.

For MDC tables, the formula is:
100 * activeblocks / ( ( fpages / ExtentSize ) - 1 )
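For illustration only (the helper and the sample statistics are invented, not part of DB2), the regular-table form of F3 above reduces to the ratio of pages holding data to total pages:

```python
def f3(npages, fpages):
    """Formula F3 (regular tables): percentage of pages that contain rows (healthy when > 80)."""
    return 100 * npages / fpages

# Hypothetical statistics: 90 of 100 allocated pages hold data.
ratio = f3(npages=90, fpages=100)
print(ratio, ratio > 80)  # 90.0 True -> within bounds, no reorg suggested by F3
```

Recall that F3 is only evaluated at all when FPAGES exceeds the extent-size threshold described earlier; below that threshold, the formula is skipped.
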
REORGCHK uses the following formulas to analyze the indexes and their relationship to the table data:

v Formula F4: For non-partitioned tables:
CLUSTERRATIO or normalized CLUSTERFACTOR > 80
The global CLUSTERFACTOR and CLUSTERRATIO take into account the correlation between the index key and distribution key. The clustering ratio of an index should be greater than 80 percent. When multiple indexes are defined on one table, some of these indexes have a low cluster ratio. (The index sequence is not the same as the table sequence.) This cannot be avoided. Be sure to specify the most important index when reorganizing the table. The cluster ratio is usually not optimal for indexes that contain many duplicate keys and many entries. For partitioned tables:
AVGPARTITION_CLUSTERRATIO or normalized AVGPARTITION _CLUSTERFACTOR > 80
AVGPARTITION_CLUSTERFACTOR and AVGPARTITION_CLUSTERRATIO values reflect how clustered data is within a data partition with respect to an index key. A partitioned table can be perfectly clustered for a particular index key within each data partition, and still have a low value for the CLUSTERFACTOR and CLUSTERRATIO because the index key is not a prefix of the table partitioning key. Design your tables and indexes using the most important index keys as a prefix of the table partitioning key. In addition, because the optimizer uses the global clusteredness values to make decisions about queries that span multiple data partitions, it is possible to perform a clustering reorganization and have the optimizer still not choose the clustering index when the keys do not agree.

v Formula F5:
100*( KEYS*(LEAF_RECSIZE+LEAF_RECSIZE_OVERHEAD)+ (INDCARD-KEYS)*DUPKEYSIZE ) / ( (NLEAF-NUM_EMPTY_LEAFS-1)* (INDEXPAGESIZE-LEAF_PAGE_OVERHEAD) ) > MIN(50,(100 - PCTFREE))
The space in use at the leaf level of the index should be greater than the minimum of 50 and 100-PCTFREE percent (only checked when NLEAF>1). v Formula F6:
( 100-PCTFREE ) * ( (FLOOR((100 - LEVEL2PCTFREE) / 100 * (INDEXPAGESIZE - NLEAF_PAGE_OVERHEAD)/(NLEAF_RECSIZE + NLEAF_RECSIZE_OVERHEAD)))* (FLOOR((100 - MIN(10, LEVEL2PCTFREE))/100*(INDEXPAGESIZE - NLEAF_PAGE_OVERHEAD)/ (NLEAF_RECSIZE + NLEAF_RECSIZE_OVERHEAD)) ** (NLEVELS - 3)) * (INDEXPAGESIZE - LEAF_PAGE_OVERHEAD))/(KEYS*(LEAF_RECSIZE+LEAF_RECSIZE_OVERHEAD)+ (INDCARD - KEYS) * DUPKEYSIZE ) ) < 100
To determine if recreating the index would result in a tree having fewer levels, this formula checks the ratio between the amount of space in an index tree that has one less level than the current tree, and the amount of space needed. If a tree with one less level could be created and still leave PCTFREE available, then a reorganization is recommended. The actual number of index entries should be more than (100-PCTFREE) percent of the number of entries an NLEVELS-1 index tree can handle (only checked if NLEVELS > 2). In the case where NLEVELS = 2, the other REORGCHK formulas should be relied upon to determine if the index should be reorganized. In simplified form, formula F6 can be rewritten as the following:
Amount of space needed for an index if it was one level smaller --------------------------------------------------------------- < 1 Amount of space needed for all the entries in the index
When the left part above is greater than 1, all the index entries in the existing index can fit into an index that is one level smaller than the existing index. In this case, an index reorganization is recommended.

The amount of space needed for an NLEVELS-1 index is calculated by:
(The max number of leaf pages that a NLEVELS-1 index can have) * (Amount of space available to store index entries per leaf page)
where,
The max number of leaf pages that a NLEVELS-1 index can have =
  (No. of entries a level 2 index page can have) *
  (No. of entries per page on levels greater than 2) **
  (No. of levels in the intended index - 2)
=
  FLOOR( (100 - LEVEL2PCTFREE)/100 *
         (PageSize - Overhead)/(Avg. size of each nonleaf index entry) ) *
  FLOOR( (100 - MIN(10, LEVEL2PCTFREE))/100 *
         (PageSize - Overhead)/(Avg. size of each nonleaf index entry) ) ** (NLEVELS - 3)

(100 - LEVEL2PCTFREE) is the percentage of used space on level 2 of the index. Level 2 is the level immediately above the leaf level.

(100 - MIN(10, LEVEL2PCTFREE)) is the percentage of used space on all levels above the second level.

NLEVELS is the number of index levels in the existing index.

The amount of space available to store index entries per leaf page =
  ((100 - PCTFREE)/100 * (INDEXPAGESIZE - LEAF_PAGE_OVERHEAD)) =
  (Used space per page * (PageSize - Overhead))

The amount of space needed for all index entries:
  KEYS * (LEAF_RECSIZE + LEAF_RECSIZE_OVERHEAD) +
  (INDCARD - KEYS) * DUPKEYSIZE
(KEYS * (LEAF_RECSIZE + LEAF_RECSIZE_OVERHEAD)) represents the space used for the first occurrence of each key value in the index, and ((INDCARD - KEYS) * DUPKEYSIZE) represents the space used for subsequent (duplicate) occurrences of a key value.

v Formula F7:
100 * (NUMRIDS_DELETED / (NUMRIDS_DELETED + INDCARD)) < 20
The number of pseudo-deleted RIDs on non-pseudo-empty pages should be less than 20 percent.

v Formula F8:
100 * (NUM_EMPTY_LEAFS/NLEAF) < 20
The number of pseudo-empty leaf pages should be less than 20 percent of the total number of leaf pages.

Running statistics on many tables can take time, especially if the tables are large.

Related concepts:
v Table reorganization in Performance Guide

Related reference:
v REORGCHK_IX_STATS procedure - Retrieve index statistics for reorganization evaluation in Administrative SQL Routines and Views
v REORGCHK_TB_STATS procedure - Retrieve table statistics for reorganization evaluation in Administrative SQL Routines and Views
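The threshold checks in formulas F7 and F8 are plain integer arithmetic over catalog statistics and can be evaluated by hand. A minimal sketch, using hypothetical statistic values (not taken from a real database):

```shell
# Hypothetical index statistics (not from a real catalog)
NUMRIDS_DELETED=50
INDCARD=1000
NUM_EMPTY_LEAFS=5
NLEAF=200

# Formula F7: 100 * (NUMRIDS_DELETED / (NUMRIDS_DELETED + INDCARD)) < 20
F7=$(( 100 * NUMRIDS_DELETED / (NUMRIDS_DELETED + INDCARD) ))
echo "F7=$F7"   # 4 -> below 20, so F7 does not flag a reorg

# Formula F8: 100 * (NUM_EMPTY_LEAFS / NLEAF) < 20
F8=$(( 100 * NUM_EMPTY_LEAFS / NLEAF ))
echo "F8=$F8"   # 2 -> below 20, so F8 does not flag a reorg
```

REORGCHK performs the same computation itself; working the numbers by hand is useful mainly for interpreting an asterisk in the REORG column of its output.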
662
Command Reference
RESET ADMIN CONFIGURATION

Command parameters:
FOR NODE node-name
   Enter the name of the administrator node on which to reset the DAS configuration parameters.
USER username USING password
   If connection to the remote system requires a user name and password, enter this information.

Usage notes:
To reset the DAS configuration parameters on a remote system, specify the system using the administrator node name as an argument to the FOR NODE option, and specify the user name and password if the connection to that node requires authorization.

To view or print a list of the DAS configuration parameters, use the GET ADMIN CONFIGURATION command. To change the value of a parameter, use the UPDATE ADMIN CONFIGURATION command.

Changes to DAS configuration parameters that can be updated online take effect immediately. Other changes become effective only after they are loaded into memory when you restart the DAS with the db2admin command.
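As a sketch of the usage notes above (the node name and credentials are hypothetical), resetting and then reviewing the DAS configuration on a remote administrator node might look like:

```shell
# Reset the DAS configuration parameters on a hypothetical remote
# administrator node, supplying credentials required by that node
db2 RESET ADMIN CONFIGURATION FOR NODE remadmin USER dbadmin USING secretpw

# Verify the resulting values with GET ADMIN CONFIGURATION
db2 GET ADMIN CONFIGURATION FOR NODE remadmin USER dbadmin USING secretpw
```

These commands require a running DB2 Administration Server and a cataloged administrator node; they are shown only to illustrate the option order.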
Chapter 3. CLP Commands
663
RESET ALERT CONFIGURATION FOR
   { DATABASE MANAGER | DB MANAGER | DBM
   | DATABASES
   | CONTAINERS
   | TABLESPACES
   | DATABASE ON database-alias
   | TABLESPACE name ON database-alias
   | CONTAINER name FOR tablespace-id ON database-alias }
Command parameters:
DATABASE MANAGER
   Resets alert settings for the database manager.
DATABASES
   Resets alert settings for all databases managed by the database manager. These are the settings that apply to all databases that do not have custom settings. Custom settings are defined using the DATABASE ON database-alias clause.
CONTAINERS
   Resets default alert settings for all table space containers managed by the database manager to the install default. These are the settings that apply to all table space containers that do not have custom settings. Custom settings are defined using the CONTAINER name ON database-alias clause.
CONTAINER name FOR tablespace-id ON database-alias
   Resets the alert settings for the table space container called name, for the table space specified using the FOR tablespace-id clause, on the database
If you do not specify this option, all health indicators for the specified object or object type will be reset.

Related tasks:
v Configuring health indicators using a client application in System Monitor Guide and Reference

Related reference:
v db2ResetAlertCfg API - Reset the alert configuration of health indicators in Administrative API Reference
v RESET ALERT CONFIGURATION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
v HEALTH_GET_ALERT_ACTION_CFG table function - Retrieve health alert action configuration settings in Administrative SQL Routines and Views
v HEALTH_GET_ALERT_CFG table function - Retrieve health alert configuration settings in Administrative SQL Routines and Views
v UPDATE ALERT CONFIGURATION on page 758
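A sketch of two common invocations (the database alias is hypothetical; both commands assume a running instance with the health monitor configured):

```shell
# Reset the database manager alert settings to the install defaults
db2 RESET ALERT CONFIGURATION FOR DATABASE MANAGER

# Remove the custom alert settings of one database so that it falls
# back to the settings that apply to all databases
db2 RESET ALERT CONFIGURATION FOR DATABASE ON sample
```

The first form affects instance-level health indicators; the second discards only the per-database overrides described under the DATABASE ON database-alias clause.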
Command parameters:
FOR database-alias
   Specifies the alias of the database whose configuration is to be reset to the system defaults.

Usage notes:
To view or print a list of the database configuration parameters, use the GET DATABASE CONFIGURATION command. To change the value of a configurable parameter, use the UPDATE DATABASE CONFIGURATION command.

Changes to the database configuration file become effective only after they are loaded into memory. All applications must disconnect from the database before this can occur. If an error occurs, the database configuration file does not change.

The database configuration file cannot be reset if its checksum is invalid. This might occur if the database configuration file is changed without using the appropriate command. If this happens, the database must be restored to reset the database configuration file.

The RESET DATABASE CONFIGURATION command will reset the database configuration parameters to the pre-database configuration values, where
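A minimal usage sketch (the database alias is hypothetical; all applications must first disconnect for the new values to take effect):

```shell
# Reset all configuration parameters of a database to the system
# defaults, then review the resulting values
db2 RESET DATABASE CONFIGURATION FOR sample
db2 GET DATABASE CONFIGURATION FOR sample
```

Saving the GET DATABASE CONFIGURATION output to a file before the reset makes it easy to restore individual values afterwards with UPDATE DATABASE CONFIGURATION.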
Command parameters:
None

Usage notes:
This command resets all parameters set by the installation program. This could cause error messages to be returned when restarting DB2. For example, if the SVCENAME parameter is reset, the user will receive the SQL5043N error message when trying to restart DB2.

Before running this command, save the output from the GET DATABASE MANAGER CONFIGURATION command to a file so that you can refer to the existing settings. Individual settings can then be updated using the UPDATE DATABASE MANAGER CONFIGURATION command. It is not recommended that the SVCENAME parameter, set by the installation program, be modified by the user.

To view or print a list of the database manager configuration parameters, use the GET DATABASE MANAGER CONFIGURATION command. To change the value of a configurable parameter, use the UPDATE DATABASE MANAGER CONFIGURATION command. For more information about these parameters, refer to the summary list of configuration parameters and the individual parameters.

Some changes to the database manager configuration file become effective only after they are loaded into memory. For more information on which parameters are configurable online and which ones are not, see the configuration parameter summary. Server configuration parameters that are not reset immediately are reset during execution of db2start. For a client configuration parameter, parameters are reset the next time you restart the application. If the client is the command line processor, it is necessary to invoke TERMINATE.
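The save-then-reset sequence recommended above can be sketched as follows (the output file name and the TCP/IP service name are hypothetical):

```shell
# Keep a record of the current settings before discarding them
db2 GET DATABASE MANAGER CONFIGURATION > dbmcfg.before

# Reset every database manager configuration parameter
db2 RESET DATABASE MANAGER CONFIGURATION

# Restore an individual installation-set value, such as SVCENAME,
# from the saved record, then terminate the CLP back-end so the
# client picks up the change
db2 UPDATE DATABASE MANAGER CONFIGURATION USING SVCENAME db2c_db2inst1
db2 TERMINATE
```

Without the UPDATE step, a reset SVCENAME leads to the SQL5043N error described in the usage notes when DB2 is restarted.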
RESET MONITOR
Resets the internal database system monitor data areas of a specified database, or of all active databases, to zero. The internal database system monitor data areas include the data areas for all applications connected to the database, as well as the data areas for the database itself.

Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
v sysmon

Required connection:
Instance. If there is no instance attachment, a default instance attachment is created. To reset the monitor switches for a remote instance (or a different local instance), it is necessary to first attach to that instance.

Command syntax:
RESET MONITOR {ALL [DCS] | FOR [DCS] {DATABASE | DB} database-alias}
   [AT DBPARTITIONNUM db-partition-number | GLOBAL]
Command parameters:
ALL
   This option indicates that the internal counters should be reset for all databases.
FOR DATABASE database-alias
   This option indicates that only the database with alias database-alias should have its internal counters reset.
DCS
   Depending on which clause it is specified with, this keyword resets the internal counters of:
   v All DCS databases
   v A specific DCS database.
AT DBPARTITIONNUM db-partition-number
   Specifies the database partition for which the internal counters are to be reset.
GLOBAL
   Resets the internal counters on all database partitions in a partitioned database environment.

Usage notes:
Each process (attachment) has its own private view of the monitor data. If one user resets or turns off a monitor switch, other users are not affected. Change the setting of the monitor switch configuration parameters to make global changes to the monitor switches. If ALL is specified, some database manager information is also reset to maintain consistency of the returned data, and some database partition-level counters are reset.

Compatibilities:
For compatibility with versions earlier than Version 8:
v The keyword NODE can be substituted for DBPARTITIONNUM.

Related reference:
v db2ResetMonitor API - Reset the database system monitor data in Administrative API Reference
v GET MONITOR SWITCHES on page 478
v GET SNAPSHOT on page 487
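A sketch of resetting monitor data for one database and for all databases (the instance name and database alias are hypothetical; an instance attachment is required):

```shell
# Attach to the instance whose monitor data is to be reset
db2 ATTACH TO db2inst1

# Zero the monitor data areas for a single database ...
db2 RESET MONITOR FOR DATABASE sample

# ... or for all active databases at once
db2 RESET MONITOR ALL

db2 DETACH
```

Because each attachment has its own private view of the monitor data, these resets do not disturb the counters seen by other monitoring applications.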
RESTART DATABASE
Restarts a database that has been abnormally terminated and left in an inconsistent state. At the successful completion of RESTART DATABASE, the application remains connected to the database if the user has CONNECT privilege.

Scope:
This command affects only the node on which it is executed.

Authorization:
None

Required connection:
This command establishes a database connection.

Command syntax:
RESTART {DATABASE | DB} database-alias
   [USER username [USING password]]
   [DROP PENDING TABLESPACES (tablespace-name, ...)]
   [WRITE RESUME]
Command parameters:
DATABASE database-alias
   Identifies the database to restart.
USER username
   Identifies the user name under which the database is to be restarted.
USING password
   The password used to authenticate username. If the password is omitted, the user is prompted to enter it.
DROP PENDING TABLESPACES tablespace-name
   Specifies that the database restart operation is to be successfully completed even if table space container problems are encountered. If a problem occurs with a container for a specified table space during the restart process, the corresponding table space will not be available (it will be in drop-pending state) after the restart operation. If a table space is in the drop-pending state, the only possible action is to drop the table space. In the case of circular logging, a troubled table space will cause a restart failure. A list of troubled table space names can be found in the administration notification log if a restart database operation fails because of container problems. If there is only one system temporary table space in
the database, and it is in drop pending state, a new system temporary table space must be created immediately following a successful database restart operation.
WRITE RESUME
   Allows you to force a database restart on databases that failed while I/O writes were suspended. Before performing crash recovery, this option will resume I/O writes by removing the SUSPEND_WRITE state from every table space in the database. The WRITE RESUME option can also be used in the case where the connection used to suspend I/O writes is currently hung and all subsequent connection attempts are also hanging. When used in this circumstance, RESTART DATABASE will resume I/O writes to the database without performing crash recovery. RESTART DATABASE with the WRITE RESUME option will only perform crash recovery when you use it after a database crash. The WRITE RESUME parameter can only be applied to the primary database, not to mirrored databases.

Usage notes:
Execute this command if an attempt to connect to a database returns an error message, indicating that the database must be restarted. This action occurs only if the previous session with this database terminated abnormally (due to power failure, for example).

At the completion of RESTART DATABASE, a shared connection to the database is maintained if the user has CONNECT privilege, and an SQL warning is issued if any indoubt transactions exist. In this case, the database is still usable, but if the indoubt transactions are not resolved before the last connection to the database is dropped, another RESTART DATABASE must be issued before the database can be used again. Use the LIST INDOUBT TRANSACTIONS command to generate a list of indoubt transactions.

If the database is only restarted on a single node within an MPP system, a message might be returned on a subsequent database query indicating that the database needs to be restarted. This occurs because the database partition on a node on which the query depends must also be restarted.
Restarting the database on all nodes solves the problem.

Related tasks:
v Resolving indoubt transactions manually in Administration Guide: Planning

Related reference:
v LIST INDOUBT TRANSACTIONS on page 538
v db2DatabaseRestart API - Restart database in Administrative API Reference
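A sketch of the two most common invocations (the database alias and table space name are hypothetical):

```shell
# Restart a database left inconsistent by an abnormal termination,
# for example after a power failure
db2 RESTART DATABASE sample

# If a container problem blocks the restart, complete it anyway and
# place the troubled table space in drop-pending state
db2 RESTART DATABASE sample DROP PENDING TABLESPACES (tbsp1)
```

After the second form, the only possible action on tbsp1 is to drop it, so this option is a last resort for bringing the rest of the database back online.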
RESTORE DATABASE
The RESTORE DATABASE command recreates a damaged or corrupted database that has been backed up using the DB2 backup utility. The restored database is in the same state that it was in when the backup copy was made. This utility can also overwrite a database with a different image or restore the backup copy to a new database.

For information on the restore operations supported by DB2 database systems between different operating systems and hardware platforms, see Backup and restore operations between different operating systems and hardware platforms in the Related concepts section.

The restore utility can also be used to restore backup images that were produced on DB2 UDB Version 8. If a migration is required, it will be invoked automatically at the end of the restore operation.

If, at the time of the backup operation, the database was enabled for rollforward recovery, the database can be brought to its previous state by invoking the rollforward utility after successful completion of a restore operation. This utility can also restore a table space level backup.

Incremental images, and images that only capture differences from the time of the previous capture (called delta images), cannot be restored when there is a difference in operating systems or word size (32-bit or 64-bit). Following a successful restore operation from one environment to a different environment, no incremental or delta backups are allowed until a non-incremental backup is taken. (This is not a limitation following a restore operation within the same environment.)

Even with a successful restore operation from one environment to a different environment, there are some considerations: packages must be rebound before use (using the BIND command, the REBIND command, or the db2rbind utility); SQL procedures must be dropped and re-created; and all external libraries must be rebuilt on the new platform. (These are not considerations when restoring to the same environment.)
Scope:
This command only affects the node on which it is executed.

Authorization:
To restore to an existing database, one of the following:
v sysadm
v sysctrl
v sysmaint

To restore to a new database, one of the following:
v sysadm
v sysctrl
Required connection:
The required connection will vary based on the type of restore action:
v You require a database connection to restore to an existing database. This command automatically establishes an exclusive connection to the specified database.
v You require an instance and a database connection to restore to a new database. The instance attachment is required to create the database. To restore to a new database at an instance different from the current instance, it is necessary to first attach to the instance where the new database will reside. The new instance can be local or remote. The current instance is defined by the value of the DB2INSTANCE environment variable.

Command syntax:
RESTORE {DATABASE | DB} source-database-alias
   [USER username [USING password]]
   [ REBUILD WITH {ALL TABLESPACES IN {DATABASE | IMAGE} [EXCEPT rebuild-tablespace-clause] | rebuild-tablespace-clause}
   | TABLESPACE [(tablespace-name, ...)] [ONLINE]
   | HISTORY FILE
   | COMPRESSION LIBRARY
   | LOGS ]
   [INCREMENTAL [AUTO | AUTOMATIC | ABORT]]
   [ USE {TSM | XBSA} [OPEN num-sessions SESSIONS] [OPTIONS {options-string | @file-name}]
   | FROM directory-or-device [, ...]
   | LOAD shared-library [OPEN num-sessions SESSIONS] [OPTIONS {options-string | @file-name}] ]
   [TAKEN AT date-time]
   [TO target-directory | DBPATH ON target-directory]
   [ON path-list]
   [INTO target-database-alias]
   [LOGTARGET directory]
   [NEWLOGPATH directory]
   [WITH num-buffers BUFFERS]
   [BUFFER buffer-size]
   [DLREPORT filename]
   [REPLACE HISTORY FILE]
   [REPLACE EXISTING]
   [REDIRECT [GENERATE SCRIPT script]]
   [PARALLELISM n]
   [COMPRLIB name]
   [COMPROPTS string]
   [WITHOUT ROLLING FORWARD]
   [WITHOUT DATALINK]
   [WITHOUT PROMPTING]

RESTORE {DATABASE | DB} source-database-alias {CONTINUE | ABORT}

rebuild-tablespace-clause:
   TABLESPACE (tablespace-name, ...)
Command parameters:
DATABASE source-database-alias
   Alias of the source database from which the backup was taken.
CONTINUE
   Specifies that the containers have been redefined, and that the final step in a redirected restore operation should be performed.
ABORT
   This parameter:
   v Stops a redirected restore operation. This is useful when an error has occurred that requires one or more steps to be repeated. After RESTORE DATABASE with the ABORT option has been issued, each step of a redirected restore operation must be repeated, including RESTORE DATABASE with the REDIRECT option.
   v Terminates an incremental restore operation before completion.
USER username
   Identifies the user name under which the database is to be restored.
USING password
   The password used to authenticate the user name. If the password is omitted, the user is prompted to enter it.
REBUILD WITH ALL TABLESPACES IN DATABASE
   Restores the database with all the table spaces known to the database at the time of the image being restored. This restore overwrites a database if it already exists.
REBUILD WITH ALL TABLESPACES IN DATABASE EXCEPT rebuild-tablespace-clause
   Restores the database with all the table spaces known to the database at the time of the image being restored, except for those specified in the list. This restore overwrites a database if it already exists.
REBUILD WITH ALL TABLESPACES IN IMAGE
   Restores the database with only the table spaces in the image being restored. This restore overwrites a database if it already exists.
REBUILD WITH ALL TABLESPACES IN IMAGE EXCEPT rebuild-tablespace-clause
   Restores the database with only the table spaces in the image being restored, except for those specified in the list. This restore overwrites a database if it already exists.
REBUILD WITH rebuild-tablespace-clause
   Restores the database with only the list of table spaces specified. This restore overwrites a database if it already exists.
TABLESPACE tablespace-name
   A list of names used to specify the table spaces that are to be restored.
ONLINE
   This keyword, applicable only when performing a table space-level restore operation, is specified to allow a backup image to be restored online. This means that other agents can connect to the database while the backup image is being restored, and that the data in other table spaces will be available while the specified table spaces are being restored.
HISTORY FILE
   This keyword is specified to restore only the history file from the backup image.
COMPRESSION LIBRARY
   This keyword is specified to restore only the compression library from the backup image. If the object exists in the backup image, it will be restored into the database directory. If the object does not exist in the backup image, the restore operation will fail.
LOGS
   This keyword is specified to restore only the set of log files contained in the backup image. If the backup image does not contain any log files, the restore operation will fail. If this option is specified, the LOGTARGET option must also be specified.
INCREMENTAL
   Without additional parameters, INCREMENTAL specifies a manual cumulative restore operation. During manual restore the user must issue each restore command manually for each image involved in the restore. Do so according to the following order: last, first, second, third and so on up to and including the last image.
INCREMENTAL AUTOMATIC/AUTO
   Specifies an automatic cumulative restore operation.
INCREMENTAL ABORT
   Specifies abortion of an in-progress manual cumulative restore operation.
USE TSM
   Specifies that the database is to be restored from TSM-managed output.
OPTIONS options-string
   Specifies options to be used for the restore operation. The string will be passed to the vendor support library, for example TSM, exactly as it was entered, without the quotes. Specifying this option overrides the value specified by the VENDOROPT database configuration parameter.
OPTIONS @file-name
   Specifies that the options to be used for the restore operation are contained in a file located on the DB2 server. The string will be passed to the vendor support library, for example TSM. The file must be a fully qualified file name.
OPEN num-sessions SESSIONS
   Specifies the number of I/O sessions that are to be used with TSM or the vendor product.
USE XBSA
   Specifies that the XBSA interface is to be used. Backup Services APIs
(XBSA) are an open application programming interface for applications or facilities needing data storage management for backup or archiving purposes.
FROM directory/device
   The fully qualified path name of the directory or device on which the backup image resides. If USE TSM, FROM, and LOAD are omitted, the default value is the current working directory of the client machine. This target directory or device must exist on the target server/instance. On Windows operating systems, the specified directory must not be a DB2-generated directory. For example, given the following commands:
db2 backup database sample to c:\backup db2 restore database sample from c:\backup
Using these commands, the DB2 database system generates subdirectories under the c:\backup directory to allow more than one backup to be placed in the specified top level directory. The DB2 generated subdirectories should be ignored. To specify precisely which backup image to restore, use the TAKEN AT parameter. There can be several backup images stored on the same path.

If several items are specified, and the last item is a tape device, the user is prompted for another tape. Valid response options are:
c  Continue. Continue using the device that generated the warning message (for example, continue when a new tape has been mounted).
d  Device terminate. Stop using only the device that generated the warning message (for example, terminate when there are no more tapes).
t  Terminate. Abort the restore operation after the user has failed to perform some action requested by the utility.
LOAD shared-library
   The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. The name can contain a full path. If the full path is not given, the value defaults to the path on which the user exit program resides.
TAKEN AT date-time
   The time stamp of the database backup image. The time stamp is displayed after successful completion of a backup operation, and is part of the path name for the backup image. It is specified in the form yyyymmddhhmmss. A partial time stamp can also be specified. For example, if two different backup images with time stamps 20021001010101 and 20021002010101 exist, specifying 20021002 causes the image with time stamp 20021002010101 to be used. If a value for this parameter is not specified, there must be only one backup image on the source media.
TO target-directory
   This parameter states the target database directory. This parameter is ignored if the utility is restoring to an existing database. The drive and directory that you specify must be local. If the backup image contains a database that is enabled for automatic storage, then only the database directory changes; the storage paths associated with the database do not change.
DBPATH ON target-directory
   This parameter states the target database directory. This parameter is ignored if the utility is restoring to an existing database. The drive and directory that you specify must be local. If the backup image contains a database that is enabled for automatic storage and the ON parameter is not specified, then this parameter is synonymous with the TO parameter and only the database directory changes; the storage paths associated with the database do not change.
ON path-list
   This parameter redefines the storage paths associated with an automatic storage database. Using this parameter with a database that is not enabled for automatic storage results in an error (SQL20321N). The existing storage paths as defined within the backup image are no longer used and automatic storage table spaces are automatically redirected to the new paths. If this parameter is not specified for an automatic storage database, then the storage paths remain as they are defined within the backup image.

   One or more paths can be specified, each separated by a comma. Each path must have an absolute path name and it must exist locally. If the database does not already exist on disk and the DBPATH ON parameter is not specified, then the first path is used as the target database directory.

   For a multi-partition database the ON path-list option can only be specified on the catalog partition. The catalog partition must be restored before any other partitions are restored when the ON option is used. The restore of the catalog partition with new storage paths will place all non-catalog nodes in a RESTORE_PENDING state. The non-catalog nodes can then be restored in parallel without specifying the ON clause in the restore command. In general, the same storage paths must be used for each partition in a multi-partition database and they must all exist prior to executing the RESTORE DATABASE command.

   One exception to this is where database partition expressions are used within the storage path. Doing this allows the database partition number to be reflected in the storage path such that the resulting path name is different on each partition. You use the argument $N ([blank]$N) to indicate a database partition expression. A database partition expression can be used anywhere in the storage path, and multiple database partition expressions can be specified. Terminate the database partition expression with a space character; whatever follows the space is appended to the storage path after the database partition expression is evaluated. If there is no space character in the storage path after the database partition expression, it is assumed that the rest of the string is part of the expression. The argument can only be used in one of the following forms:
Table 14. Database partition expressions. Operators are evaluated from left to right. % represents the modulus operator. The database partition number in the examples is assumed to be 10.

Syntax                        Example     Value
[blank]$N                     " $N"       10
[blank]$N+[number]            " $N+100"   110
[blank]$N%[number]            " $N%5"     0
[blank]$N+[number]%[number]   " $N+1%5"   1
[blank]$N%[number]+[number]   " $N%4+2"   4
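The strict left-to-right evaluation shown above can be checked with ordinary shell arithmetic; the partition number 10 matches the table's assumption. Note that shell $(( )) follows standard operator precedence, so parentheses are added below to reproduce DB2's left-to-right rule:

```shell
# Evaluate the partition expressions for database partition number 10.
# This only illustrates the left-to-right rule; DB2 performs the
# substitution itself when it evaluates a storage path.
N=10
echo $(( N ))            # " $N"      -> 10
echo $(( N + 100 ))      # " $N+100"  -> 110
echo $(( N % 5 ))        # " $N%5"    -> 0
echo $(( (N + 1) % 5 ))  # " $N+1%5"  -> 1  (left to right: (10+1)%5)
echo $(( N % 4 + 2 ))    # " $N%4+2"  -> 4  (left to right: (10%4)+2)
```

The fourth case is the one where left-to-right evaluation matters: under ordinary precedence 10+1%5 would be 11, not 1.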
INTO target-database-alias
   The target database alias. If the target database does not exist, it is created. When you restore a database backup to an existing database, the restored database inherits the alias and database name of the existing database. When you restore a database backup to a nonexistent database, the new database is created with the alias and database name that you specify. This new database name must be unique on the system where you restore it.
LOGTARGET directory
   The absolute path name of an existing directory on the database server, to be used as the target directory for extracting log files from a backup image. If this option is specified, any log files contained within the backup image will be extracted into the target directory. If this option is not specified, log files contained within a backup image will not be extracted. To extract only the log files from the backup image, specify the LOGS option.
NEWLOGPATH directory
   The absolute pathname of a directory that will be used for active log files after the restore operation. This parameter has the same function as the newlogpath database configuration parameter, except that its effect is limited to the restore operation in which it is specified. The parameter can be used when the log path in the backup image is not suitable for use after the restore operation; for example, when the path is no longer valid, or is being used by a different database.
WITH num-buffers BUFFERS
   The number of buffers to be used. The DB2 database system will automatically choose an optimal value for this parameter unless you explicitly enter a value. A larger number of buffers can be used to improve performance when multiple sources are being read from, or if the value of PARALLELISM has been increased.
BUFFER buffer-size
   The size, in pages, of the buffer used for the restore operation. The DB2 database system will automatically choose an optimal value for this parameter unless you explicitly enter a value. The minimum value for this parameter is 8 pages. The restore buffer size must be a positive integer multiple of the backup buffer size specified during the backup operation. If an incorrect buffer size is specified, the buffers are allocated to be of the smallest acceptable size.
DLREPORT filename
   The file name, if specified, must be specified as an absolute path. Reports the files that become unlinked, as a result of a fast reconcile, during a restore operation. This option is only to be used if the table being restored has a DATALINK column type and linked files.
REPLACE HISTORY FILE
   Specifies that the restore operation should replace the history file on disk with the history file from the backup image.
REPLACE EXISTING
   If a database with the same alias as the target database alias already exists,
this parameter specifies that the restore utility is to replace the existing database with the restored database. This is useful for scripts that invoke the restore utility, because the command line processor will not prompt the user to verify deletion of an existing database. If the WITHOUT PROMPTING parameter is specified, it is not necessary to specify REPLACE EXISTING, but in this case, the operation will fail if events occur that normally require user intervention.
REDIRECT
   Specifies a redirected restore operation. To complete a redirected restore operation, this command should be followed by one or more SET TABLESPACE CONTAINERS commands, and then by a RESTORE DATABASE command with the CONTINUE option. All commands associated with a single redirected restore operation must be invoked from the same window or CLP session. A redirected restore operation cannot be performed against a table space that has automatic storage enabled.
GENERATE SCRIPT script
   Creates a redirected restore script with the specified file name. The script name can be relative or absolute and the script will be generated on the client side. If the file cannot be created on the client side, an error message (SQL9304N) will be returned. If the file already exists, it will be overwritten. Please see the examples below for further usage information.
WITHOUT ROLLING FORWARD
   Specifies that the database is not to be put in rollforward pending state after it has been successfully restored. If, following a successful restore operation, the database is in rollforward pending state, the ROLLFORWARD command must be invoked before the database can be used again. If this option is specified when restoring from an online backup image, error SQL2537N will be returned. If the backup image is of a recoverable database, then WITHOUT ROLLING FORWARD cannot be specified with the REBUILD option.
WITHOUT DATALINK
   Specifies that any tables with DATALINK columns are to be put in DataLink_Reconcile_Pending (DRP) state, and that no reconciliation of linked files is to be performed.
PARALLELISM n
   Specifies the number of buffer manipulators that are to be created during the restore operation. The DB2 database system will automatically choose an optimal value for this parameter unless you explicitly enter a value.
COMPRLIB name
   Indicates the name of the library to be used to perform the decompression. The name must be a fully qualified path referring to a file on the server. If this parameter is not specified, DB2 will attempt to use the library stored in the image. If the backup was not compressed, the value of this parameter will be ignored. If the specified library cannot be loaded, the restore operation will fail.
COMPROPTS string
   Describes a block of binary data that is passed to the initialization routine in the decompression library. The DB2 database system passes this string directly from the client to the server, so any issues of byte reversal or code page conversion are handled by the decompression library. If the first
Command Reference
RESTORE DATABASE
character of the data block is @, the remainder of the data is interpreted by the DB2 database system as the name of a file residing on the server. The DB2 database system will then replace the contents of string with the contents of this file and pass the new value to the initialization routine instead. The maximum length for the string is 1 024 bytes.
WITHOUT PROMPTING
Specifies that the restore operation is to run unattended. Actions that normally require user intervention will return an error message. When using a removable media device, such as tape or diskette, the user is prompted when the end of the device is reached, even if this option is specified.
Examples:
1. In the following example, the database WSDB is defined on all 4 database partitions, numbered 0 through 3. The path /dev3/backup is accessible from all database partitions. The following offline backup images are available from /dev3/backup:
wsdb.0.db2inst1.NODE0000.CATN0000.20020331234149.001
wsdb.0.db2inst1.NODE0001.CATN0000.20020331234427.001
wsdb.0.db2inst1.NODE0002.CATN0000.20020331234828.001
wsdb.0.db2inst1.NODE0003.CATN0000.20020331235235.001
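These names follow the backup image naming convention: database alias, backup type, instance name, database partition (NODEnnnn), catalog partition (CATNnnnn), backup timestamp, and sequence number. The timestamp component is the value passed to the TAKEN AT clause. A small sketch that splits a name into its parts (the helper is illustrative, not part of DB2):

```python
def parse_backup_image(name):
    """Split a DB2 backup image file name into its dot-separated parts:
    alias.type.instance.NODEnnnn.CATNnnnn.timestamp.sequence"""
    alias, backup_type, instance, node, catalog_node, timestamp, sequence = name.split(".")
    return {"alias": alias, "type": backup_type, "instance": instance,
            "node": node, "catalog_node": catalog_node,
            "timestamp": timestamp, "sequence": sequence}

# The "timestamp" field is what a RESTORE ... TAKEN AT clause needs:
parts = parse_backup_image("wsdb.0.db2inst1.NODE0000.CATN0000.20020331234149.001")
```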
To restore the catalog partition first, then all other database partitions of the WSDB database from the /dev3/backup directory, issue the following commands from one of the database partitions:
db2_all <<+0< db2 RESTORE DATABASE wsdb FROM /dev3/backup TAKEN AT 20020331234149 INTO wsdb REPLACE EXISTING
db2_all <<+1< db2 RESTORE DATABASE wsdb FROM /dev3/backup TAKEN AT 20020331234427 INTO wsdb REPLACE EXISTING
db2_all <<+2< db2 RESTORE DATABASE wsdb FROM /dev3/backup TAKEN AT 20020331234828 INTO wsdb REPLACE EXISTING
db2_all <<+3< db2 RESTORE DATABASE wsdb FROM /dev3/backup TAKEN AT 20020331235235 INTO wsdb REPLACE EXISTING
The db2_all utility issues the restore command to each specified database partition. When performing a restore using db2_all, you should always specify REPLACE EXISTING and/or WITHOUT PROMPTING. Otherwise, if there is prompting, the operation will appear to hang, because db2_all does not support user prompting.
2. Following is a typical redirected restore scenario for a database whose alias is MYDB:
a. Issue a RESTORE DATABASE command with the REDIRECT option.
restore db mydb replace existing redirect
After successful completion of step a, and before completing step c, the restore operation can be aborted by issuing:
restore db mydb abort
b. Issue a SET TABLESPACE CONTAINERS command for each table space whose containers must be redefined. For example:
set tablespace containers for 5 using (file f:\ts3con1 20000, file f:\ts3con2 20000)
To verify that the containers of the restored database are the ones specified in this step, issue the LIST TABLESPACE CONTAINERS command.
c. After successful completion of steps a and b, issue:
restore db mydb continue
This is the final step of the redirected restore operation.
d. If step c fails, or if the restore operation has been aborted, the redirected restore can be restarted, beginning at step a.
3. Following is a sample weekly incremental backup strategy for a recoverable database. It includes a weekly full database backup operation, a daily non-cumulative (delta) backup operation, and a mid-week cumulative (incremental) backup operation:
(Sun) backup db mydb use tsm
(Mon) backup db mydb online incremental delta use tsm
(Tue) backup db mydb online incremental delta use tsm
(Wed) backup db mydb online incremental use tsm
(Thu) backup db mydb online incremental delta use tsm
(Fri) backup db mydb online incremental delta use tsm
(Sat) backup db mydb online incremental use tsm
For an automatic database restore of the images created on Friday morning, issue:
restore db mydb incremental automatic taken at (Fri)
For a manual database restore of the images created on Friday morning, issue:
restore db mydb incremental taken at (Fri)
restore db mydb incremental taken at (Sun)
restore db mydb incremental taken at (Wed)
restore db mydb incremental taken at (Thu)
restore db mydb incremental taken at (Fri)
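The ordering used in a manual incremental restore — the target image first, then the last full backup, the most recent cumulative incremental, and the deltas taken after it — can be sketched as follows (an illustrative helper, not a DB2 API; it assumes a single full backup in the list and an incremental target):

```python
def manual_restore_order(images):
    """images: list of (label, kind) pairs, oldest to newest, where kind is
    "full", "incremental" (cumulative), or "delta"; the last entry is the
    restore target. Returns labels in the order the RESTORE commands run."""
    # index of the most recent full backup
    full = max(i for i, (_, k) in enumerate(images) if k == "full")
    # cumulative incrementals taken after that full backup
    cums = [i for i, (_, k) in enumerate(images) if k == "incremental" and i >= full]
    last_cum = cums[-1] if cums else full
    chain = [images[full][0]]                 # the full backup
    if cums:
        chain.append(images[last_cum][0])     # the latest cumulative incremental
    # deltas taken after the last cumulative (or full) image, up to the target
    chain += [lbl for i, (lbl, k) in enumerate(images) if k == "delta" and i > last_cum]
    target = images[-1][0]
    return [target] + chain                   # target image is restored first
```

For the weekly schedule in example 3, restoring the Friday delta yields (Fri), (Sun), (Wed), (Thu), (Fri), matching the manual restore sequence shown above.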
4. To produce a backup image that includes logs, for transportation to a remote site:
backup db sample online to /dev3/backup include logs
To restore that backup image, supply a LOGTARGET path and specify this path during ROLLFORWARD:
restore db sample from /dev3/backup logtarget /dev3/logs
rollforward db sample to end of logs and stop overflow log path /dev3/logs
5. To retrieve only the log files from a backup image that includes logs:
restore db sample logs from /dev3/backup logtarget /dev3/logs
6. The USE TSM OPTIONS keywords can be used to specify the TSM information to use for the restore operation. On Windows platforms, omit the -fromowner option.
v Specifying a delimited string:
restore db sample use TSM options "-fromnode=bar -fromowner=dmcinnis"
The file myoptions.txt contains the following information:
-fromnode=bar
-fromowner=dmcinnis
7. The following is a simple restore of a multi-partition automatic-storage-enabled database with new storage paths. The database was originally created with one storage path, /myPath0:
v On the catalog partition issue:
restore db mydb on /myPath1,/myPath2
v On all non-catalog partitions issue:
restore db mydb
8. The script output of the following command on a database that does not use automatic storage:
restore db sample from /home/jseifert/backups taken at 20050301100417 redirect generate script SAMPLE_NODE0000.clp
-- ****************************************************************************
-- ** Tablespace name                      = USERSPACE1
-- ** Tablespace ID                        = 2
-- ** Tablespace Type                      = System managed space
-- ** Tablespace Content Type              = Any data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 32
-- ** Using automatic storage              = No
-- ** Total number of pages                = 1
-- ****************************************************************************
SET TABLESPACE CONTAINERS FOR 2
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
USING (
  PATH SQLT0002.0
);
-- ****************************************************************************
-- ** Tablespace name                      = DMS
-- ** Tablespace ID                        = 3
-- ** Tablespace Type                      = Database managed space
-- ** Tablespace Content Type              = Any data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 32
-- ** Using automatic storage              = No
-- ** Auto-resize enabled                  = No
-- ** Total number of pages                = 2000
-- ** Number of usable pages               = 1960
-- ** High water mark (pages)              = 96
-- ****************************************************************************
SET TABLESPACE CONTAINERS FOR 3
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
USING (
  FILE /tmp/dms1 1000,
  FILE /tmp/dms2 1000
);
-- ****************************************************************************
-- ** Tablespace name                      = RAW
-- ** Tablespace ID                        = 4
-- ** Tablespace Type                      = Database managed space
-- ** Tablespace Content Type              = Any data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 32
-- ** Using automatic storage              = No
-- ** Auto-resize enabled                  = No
-- ** Total number of pages                = 2000
-- ** Number of usable pages               = 1960
-- ** High water mark (pages)              = 96
-- ****************************************************************************
SET TABLESPACE CONTAINERS FOR 4
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
USING (
  DEVICE /dev/hdb1 1000,
  DEVICE /dev/hdb2 1000
);
-- ****************************************************************************
-- ** start redirect restore
-- ****************************************************************************
RESTORE DATABASE SAMPLE CONTINUE;
-- ****************************************************************************
-- ** end of file
-- ****************************************************************************
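Once the SET TABLESPACE CONTAINERS clauses in a generated script have been edited to point at the new containers, the script can be run with the command line processor. A typical invocation might be the following (a sketch using standard CLP options: -t for statement termination, -v for verbose, -f for the input file):

```
db2 -tvf SAMPLE_NODE0000.clp
```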
-- ****************************************************************************
-- ** automatically created redirect restore script
-- ****************************************************************************
UPDATE COMMAND OPTIONS USING S ON Z ON TEST_NODE0000.out V ON;
SET CLIENT ATTACH_DBPARTITIONNUM 0;
SET CLIENT CONNECT_DBPARTITIONNUM 0;
-- ****************************************************************************
-- ** initialize redirected restore
-- ****************************************************************************
RESTORE DATABASE TEST
-- USER <username>
-- USING <password>
FROM /home/jseifert/backups
TAKEN AT 20050304090733
ON /home/jseifert
-- DBPATH ON <target-directory>
INTO TEST
-- NEWLOGPATH /home/jseifert/jseifert/NODE0000/SQL00002/SQLOGDIR/
-- WITH <num-buff> BUFFERS
-- BUFFER <buffer-size>
-- REPLACE HISTORY FILE
-- REPLACE EXISTING
REDIRECT
-- PARALLELISM <n>
-- WITHOUT ROLLING FORWARD
-- WITHOUT PROMPTING
;
-- ****************************************************************************
-- ** tablespace definition
-- ****************************************************************************
-- ****************************************************************************
-- ** Tablespace name                      = SYSCATSPACE
-- ** Tablespace ID                        = 0
-- ** Tablespace Type                      = Database managed space
-- ** Tablespace Content Type              = Any data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 4
-- ** Using automatic storage              = Yes
-- ** Auto-resize enabled                  = Yes
-- ** Total number of pages                = 6144
-- ** Number of usable pages               = 6140
-- ** High water mark (pages)              = 5968
-- ****************************************************************************
-- ****************************************************************************
-- ** Tablespace name                      = TEMPSPACE1
-- ** Tablespace ID                        = 1
-- ** Tablespace Type                      = System managed space
-- ** Tablespace Content Type              = System Temporary data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 32
-- ** Using automatic storage              = Yes
-- ** Total number of pages                = 0
-- ****************************************************************************
-- ****************************************************************************
-- ** Tablespace name                      = USERSPACE1
-- ** Tablespace ID                        = 2
-- ** Tablespace Type                      = Database managed space
-- ** Tablespace Content Type              = Any data
-- ** Tablespace Page size (bytes)         = 4096
-- ** Tablespace Extent size (pages)       = 32
-- ** Using automatic storage              = Yes
-- ** Auto-resize enabled                  = Yes
-- ** Total number of pages                = 256
-- ** Number of usable pages               = 224
-- ** High water mark (pages)              = 96
-- ****************************************************************************
-- ****************************************************************************
Chapter 3. CLP Commands
-- ** Tablespace name = DMS -- ** Tablespace ID = 3 -- ** Tablespace Type = Database managed space -- ** Tablespace Content Type = Any data -- ** Tablespace Page size (bytes) = 4096 -- ** Tablespace Extent size (pages) = 32 -- ** Using automatic storage = No -- ** Auto-resize enabled = No -- ** Total number of pages = 2000 -- ** Number of usable pages = 1960 -- ** High water mark (pages) = 96 -- **************************************************************************** SET TABLESPACE CONTAINERS FOR 3 -- IGNORE ROLLFORWARD CONTAINER OPERATIONS USING ( FILE /tmp/dms1 1000 , FILE /tmp/dms2 1000 ); -- **************************************************************************** -- ** Tablespace name = RAW -- ** Tablespace ID = 4 -- ** Tablespace Type = Database managed space -- ** Tablespace Content Type = Any data -- ** Tablespace Page size (bytes) = 4096 -- ** Tablespace Extent size (pages) = 32 -- ** Using automatic storage = No -- ** Auto-resize enabled = No -- ** Total number of pages = 2000 -- ** Number of usable pages = 1960 -- ** High water mark (pages) = 96 -- **************************************************************************** SET TABLESPACE CONTAINERS FOR 4 -- IGNORE ROLLFORWARD CONTAINER OPERATIONS USING ( DEVICE /dev/hdb1 1000 , DEVICE /dev/hdb2 1000 ); -- **************************************************************************** -- ** start redirect restore -- **************************************************************************** RESTORE DATABASE TEST CONTINUE; -- **************************************************************************** -- ** end of file -- ****************************************************************************
Usage notes:
v A RESTORE DATABASE command of the form db2 restore db <name> will perform a full database restore with a database image, and will perform a table space restore operation of the table spaces found in a table space image. A RESTORE DATABASE command of the form db2 restore db <name> tablespace performs a table space restore of the table spaces found in the image. In addition, if a list of table spaces is provided with such a command, the explicitly listed table spaces are restored.
v Following the restore operation of an online backup, you must perform a roll-forward recovery.
v If a backup image is compressed, the DB2 database system detects this and automatically decompresses the data before restoring it. If a library is specified on the db2Restore API, it is used for decompressing the data. Otherwise, a check is made to see if a library is stored in the backup image, and if one exists, it is used. Finally, if there is no library stored in the backup image, the data cannot be decompressed and the restore operation fails.
v If the compression library is to be restored from a backup image (either explicitly by specifying the COMPRESSION LIBRARY option or implicitly by performing a normal restore of a compressed backup), the restore operation must be done on the same platform and operating system that the backup was taken on. If the platform the backup was taken on is not the same as the platform that the restore is being done on, the restore operation will fail, even if DB2 normally supports cross-platform restores involving the two systems.
v To restore log files from the backup image that contains them, the LOGTARGET option must be specified, providing the fully qualified and valid path that exists on the DB2 server. If those conditions are satisfied, the restore utility will write the log files from the image to the target path. If a LOGTARGET is specified during a restore of a backup image that does not include logs, the restore operation will return an error before attempting to restore any table space data. A restore operation will also fail with an error if an invalid, or read-only, LOGTARGET path is specified.
v If any log files exist in the LOGTARGET path at the time the RESTORE DATABASE command is issued, a warning prompt will be returned to the user. This warning will not be returned if WITHOUT PROMPTING is specified.
v During a restore operation where a LOGTARGET is specified, if any log file cannot be extracted, the restore operation will fail and return an error. If any of the log files being extracted from the backup image have the same name as an existing file in the LOGTARGET path, the restore operation will fail and an error will be returned. The restore database utility will not overwrite existing log files in the LOGTARGET directory.
v You can also restore only the saved log set from a backup image. To indicate that only the log files are to be restored, specify the LOGS option in addition to the LOGTARGET path.
Specifying the LOGS option without a LOGTARGET path will result in an error. If any problem occurs while restoring log files in this mode of operation, the restore operation will terminate immediately and an error will be returned.
v During an automatic incremental restore operation, only the log files included in the target image of the restore operation will be retrieved from the backup image. Any log files included in intermediate images referenced during the incremental restore process will not be extracted from those intermediate backup images. During a manual incremental restore operation, the LOGTARGET path should only be specified with the final restore command to be issued.
v A backup targeted to be restored to another operating system or another DB2 database version must be an offline backup, and cannot be a delta or an incremental backup image. The same is true for backups to be restored to a later DB2 database version.
Related concepts:
v Backup and restore operations between different operating systems and hardware platforms in Data Recovery and High Availability Guide and Reference
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference
Related tasks:
v Using restore in Data Recovery and High Availability Guide and Reference
Related reference:
v CREATE DATABASE on page 395
v db2move - Database movement tool on page 157
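The LOGTARGET extraction rules in the usage notes above — log files are written to the target path, and the restore fails rather than overwrite an existing log file — can be modeled with a short sketch (illustrative only; extract_logs and its arguments are not DB2 APIs):

```python
import os

def extract_logs(log_names, logtarget, existing):
    """Model of LOGTARGET extraction. existing is the set of file names
    already present in the logtarget directory (a stand-in for os.listdir)."""
    extracted = []
    for name in log_names:
        if name in existing:
            # Documented behavior: the restore fails with an error rather
            # than overwrite an existing log file in the LOGTARGET path.
            raise FileExistsError(os.path.join(logtarget, name))
        extracted.append(name)
    return extracted
```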
REWIND TAPE
Rewinds tapes for backup and restore operations to streaming tape devices. This command is only supported on Windows operating systems.
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
None.
Command syntax:
REWIND TAPE ON device
Command parameters:
ON device
Specifies a valid tape device name. The default value is \\.\TAPE0.
Related reference:
v INITIALIZE TAPE on page 511
v SET TAPE POSITION on page 723
v REWIND TAPE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
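For example, to rewind a tape in the default drive, one might issue the following from the CLP (a sketch; \\.\TAPE0 is the documented default device name):

```
db2 rewind tape on \\.\TAPE0
```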
ROLLFORWARD DATABASE
Recovers a database by applying transactions recorded in the database log files. Invoked after a database or a table space backup image has been restored, or if any table spaces have been taken offline by the database due to a media error. The database must be recoverable (that is, the logarchmeth1 or logarchmeth2 database configuration parameters must be set to a value other than OFF) before the database can be recovered with rollforward recovery.
Scope:
In a partitioned database environment, this command can only be invoked from the catalog partition. A database or table space rollforward operation to a specified point in time affects all database partitions that are listed in the db2nodes.cfg file. A database or table space rollforward operation to the end of logs affects the database partitions that are specified. If no database partitions are specified, it affects all database partitions that are listed in the db2nodes.cfg file; if rollforward recovery is not needed on a particular partition, that partition is ignored. For partitioned tables, you are also required to roll forward related table spaces to the same point in time. This applies to table spaces containing data partitions of a table. If a single table space contains a portion of a partitioned table, rolling forward to the end of the logs is still allowed.
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
None. This command establishes a database connection.
Command syntax:
ROLLFORWARD { DATABASE | DB } database-alias
  [ USER username [ USING password ] ]
  [ TO { isotime [ USING UTC TIME | USING LOCAL TIME ] [ ON ALL DBPARTITIONNUMS ]
       | END OF LOGS [ On Database Partition clause ] }
      [ AND { COMPLETE | STOP } ]
  | { COMPLETE | STOP } [ On Database Partition clause ]
  | CANCEL [ On Database Partition clause ]
  | QUERY STATUS [ USING UTC TIME | USING LOCAL TIME ] ]
  [ TABLESPACE ONLINE
  | TABLESPACE ( tablespace-name [ , tablespace-name ... ] ) [ ONLINE ] ]
  [ OVERFLOW LOG PATH ( log-directory [ , log-directory ON DBPARTITIONNUM db-partition-number ... ] ) ]
  [ NORETRIEVE ]
  [ RECOVER DROPPED TABLE drop-table-id TO export-directory ]

On Database Partition clause:
  ON { ALL DBPARTITIONNUMS [ EXCEPT { DBPARTITIONNUM | DBPARTITIONNUMS } ( db-partition-number1 [ TO db-partition-number2 ] , ... ) ]
     | { DBPARTITIONNUM | DBPARTITIONNUMS } ( db-partition-number1 [ TO db-partition-number2 ] , ... ) }
Command parameters:
DATABASE database-alias
The alias of the database that is to be rollforward recovered.
USER username
The user name under which the database is to be rollforward recovered.
USING password
The password used to authenticate the user name. If the password is omitted, you will be prompted to enter it.
TO isotime
The point in time to which all committed transactions are to be rolled forward (including the transaction committed precisely at that time, as well as all transactions committed previously). This value is specified as a time stamp, a 7-part character string that identifies a combined date and time. The format is yyyy-mm-dd-hh.mm.ss (year, month, day, hour, minutes, seconds), expressed in Coordinated Universal Time (UTC, formerly known as GMT). UTC helps to avoid having the same time stamp associated
with different logs (because of a change in time associated with daylight saving time, for example). The time stamp in a backup image is based on the local time at which the backup operation started. The CURRENT TIMEZONE special register specifies the difference between UTC and local time at the application server. The difference is represented by a time duration (a decimal number in which the first two digits represent the number of hours, the next two digits represent the number of minutes, and the last two digits represent the number of seconds). Subtracting CURRENT TIMEZONE from a local time converts that local time to UTC.
USING LOCAL TIME
Allows you to roll forward to a point in time that is the server's local time rather than UTC time.
Notes:
1. If you specify a local time for rollforward, all messages returned to you will also be in local time. All times are converted on the server, and in partitioned database environments, on the catalog database partition.
2. The timestamp string is converted to UTC on the server, so the time is local to the server's time zone, not the client's. If the client is in one time zone and the server in another, the server's local time should be used. This is different from the local time option in the Control Center, which is local to the client.
3. If the timestamp string is close to the time change of the clock due to daylight saving time, it is important to know whether the stop time is before or after the clock change, and to specify it correctly.
4. If this option is specified, subsequent ROLLFORWARD commands that do not specify the USING LOCAL TIME clause will still have all messages returned to you in local time.
END OF LOGS
Specifies that all committed transactions from all online archive log files listed in the database configuration parameter logpath are to be applied.
ALL DBPARTITIONNUMS
Specifies that transactions are to be rolled forward on all database partitions specified in the db2nodes.cfg file.
This is the default if a database partition clause is not specified.
EXCEPT
Specifies that transactions are to be rolled forward on all database partitions specified in the db2nodes.cfg file, except those specified in the database partition list.
ON DBPARTITIONNUM / ON DBPARTITIONNUMS
Roll the database forward on a set of database partitions.
db-partition-number1
Specifies a database partition number in the database partition list.
db-partition-number2
Specifies the second database partition number, so that all database partitions from db-partition-number1 up to and including db-partition-number2 are included in the database partition list.
COMPLETE / STOP
Stops the rolling forward of log records, and completes the rollforward recovery process by rolling back any incomplete transactions and turning off the rollforward pending state of the database. This allows access to the database or table spaces that are being rolled forward. These keywords are equivalent; specify one or the other, but not both. The keyword AND permits specification of multiple operations at once; for example, db2 rollforward db sample to end of logs and complete. When rolling table spaces forward to a point in time, the table spaces are placed in backup pending state.
CANCEL
Cancels the rollforward recovery operation. This puts the database or one or more table spaces on all database partitions on which forward recovery has been started in restore pending state:
v If a database rollforward operation is not in progress (that is, the database is in rollforward pending state), this option puts the database in restore pending state.
v If a table space rollforward operation is not in progress (that is, the table spaces are in rollforward pending state), a table space list must be specified. All table spaces in the list are put in restore pending state.
v If a table space rollforward operation is in progress (that is, at least one table space is in rollforward in progress state), all table spaces that are in rollforward in progress state are put in restore pending state. If a table space list is specified, it must include all table spaces that are in rollforward in progress state. All table spaces on the list are put in restore pending state.
v If rolling forward to a point in time, any table space name that is passed in is ignored, and all table spaces that are in rollforward in progress state are put in restore pending state.
v If rolling forward to the end of the logs with a table space list, only the table spaces listed are put in restore pending state.
This option cannot be used to cancel a rollforward operation that is actually running. It can only be used to cancel a rollforward operation that is in progress but not actually running at the time. A rollforward operation can be in progress but not running if:
v It terminated abnormally.
v The STOP option was not specified.
v An error caused it to fail.
Some errors, such as rolling forward through a non-recoverable load operation, can put a table space into restore pending state. Use this option with caution, and only if the rollforward operation that is in progress cannot be completed because some of the table spaces have been put in rollforward pending state or in restore pending state. When in doubt, use the LIST TABLESPACES command to identify the table spaces that are in rollforward in progress state, or in rollforward pending state.
QUERY STATUS
Lists the log files that the database manager has rolled forward, the next archive file required, and the time stamp (in UTC) of the last committed transaction since rollforward processing began. In a partitioned database environment, this status information is returned for each database partition. The information returned contains the following fields:
Database partition number
Rollforward status
Status can be: database or table space rollforward pending, database or table space rollforward in progress, database or table space rollforward processing STOP, or not pending.
Next log file to be read
A string containing the name of the next required log file. In a partitioned database environment, use this information if the rollforward utility fails with a return code indicating that a log file is missing or that a log information mismatch has occurred.
Log files processed
A string containing the names of processed log files that are no longer needed for recovery, and that can be removed from the directory. If, for example, the oldest uncommitted transaction starts in log file x, the range of obsolete log files will not include x; the range ends at x - 1.
Last committed transaction
A string containing a time stamp in ISO format (yyyy-mm-dd-hh.mm.ss) suffixed by either UTC or Local (see USING LOCAL TIME). This time stamp marks the last transaction committed after the completion of rollforward recovery. The time stamp applies to the database. For table space rollforward recovery, it is the time stamp of the last transaction committed to the database.
QUERY STATUS is the default value if the TO, STOP, COMPLETE, or CANCEL clauses are omitted. If TO, STOP, or COMPLETE was specified, status information is displayed if the command has completed successfully. If individual table spaces are specified, they are ignored; the status request does not apply only to specified table spaces.
TABLESPACE
This keyword is specified for table space-level rollforward recovery.
tablespace-name
Mandatory for table space-level rollforward recovery to a point in time. Allows a subset of table spaces to be specified for rollforward recovery to the end of the logs. In a partitioned database environment, each table space in the list does not have to exist at each database partition that is rolling forward.
If it does exist, it must be in the correct state. For partitioned tables, a point-in-time rollforward of a table space containing any piece of a partitioned table must also roll forward all of the other table spaces in which that table resides to the same point in time. Rollforward to the end of the logs for a single table space containing a piece of a partitioned table is still allowed. If a partitioned table has any attached or detached data partitions, then point-in-time rollforward must include all table spaces for these data partitions as well. To determine whether a partitioned table has any attached, detached, or dropped data partitions, query the Status field of the SYSDATAPARTITIONS catalog table. Because a partitioned table can reside in multiple table spaces, it will generally be necessary to roll forward multiple table spaces. Data that is recovered via dropped table recovery is written to the export directory specified in the ROLLFORWARD DATABASE command. It is possible to
roll forward all table spaces in one command, or to do repeated rollforward operations for subsets of the table spaces involved. If the ROLLFORWARD DATABASE command is done for one or a few table spaces, then all data from the table that resided in those table spaces will be recovered. A warning will be written to the notify log if the ROLLFORWARD DATABASE command did not specify the full set of the table spaces necessary to recover all the data for the table. Allowing rollforward of a subset of the table spaces makes it easier to deal with cases where there is more data to be recovered than can fit into a single export directory.
ONLINE
This keyword is specified to allow table space-level rollforward recovery to be done online. This means that other agents are allowed to connect while rollforward recovery is in progress.
OVERFLOW LOG PATH log-directory
Specifies an alternate log path to be searched for archived logs during recovery. Use this parameter if log files were moved to a location other than that specified by the logpath database configuration parameter. In a partitioned database environment, this is the (fully qualified) default overflow log path for all database partitions. A relative overflow log path can be specified for single-partition databases. The OVERFLOW LOG PATH command parameter will overwrite the value (if any) of the database configuration parameter overflowlogpath.
log-directory ON DBPARTITIONNUM
In a partitioned database environment, allows a different log path to override the default overflow log path for a specific database partition.
NORETRIEVE
Allows you to control which log files are to be rolled forward on the standby machine by disabling the retrieval of archived logs. The benefits of this are:
v By controlling the log files to be rolled forward, you can ensure that the standby machine is X hours behind the production machine, to avoid affecting both systems.
v If the standby system does not have access to the archive (for example, if TSM is the archive, it might allow only the original machine to retrieve the files), retrieval can be disabled.
v It might also be possible that while the production system is archiving a file, the standby system is retrieving the same file, and it might then get an incomplete log file. NORETRIEVE avoids this problem.
RECOVER DROPPED TABLE drop-table-id
Recovers a dropped table during the rollforward operation. The table ID can be obtained using the LIST HISTORY command. For partitioned tables, the drop-table-id identifies the table as a whole, so that all data partitions of the table can be recovered in a single rollforward command.
TO export-directory
Specifies a directory to which files containing the table data are to be written. The directory must be accessible to all database partitions.
Examples:
Example 1
The ROLLFORWARD DATABASE command permits specification of multiple operations at once, each separated by the keyword AND. For example, to roll forward to the end of logs, and complete, the separate commands:
db2 rollforward db sample to end of logs
db2 rollforward db sample complete
can be combined as follows:

db2 rollforward db sample to end of logs and complete

Although the two approaches are equivalent, it is recommended that such operations be done in two steps. It is important to verify that the rollforward operation has progressed as expected before stopping it, to avoid missing logs. This is especially important if a bad log is found during rollforward recovery and the bad log is interpreted to mean the end of logs. In such cases, an undamaged backup copy of that log could be used to continue the rollforward operation through more logs. However, if the AND STOP option is used and the rollforward encounters an error, the error is returned to you. In this case, the only way to force the rollforward to stop and come online despite the error (that is, to come online at the point in the logs before the error) is to issue the rollforward STOP command.

Example 2

Roll forward to the end of the logs (two table spaces have been restored):
db2 rollforward db sample to end of logs
db2 rollforward db sample to end of logs and stop
These two statements are equivalent. Neither AND STOP nor AND COMPLETE is needed for table space rollforward recovery to the end of the logs. Table space names are not required; if not specified, all table spaces requiring rollforward recovery will be included. If only a subset of these table spaces is to be rolled forward, their names must be specified.

Example 3

After three table spaces have been restored, roll one forward to the end of the logs, and the other two to a point in time, both to be done online:
db2 rollforward db sample to end of logs tablespace(TBS1) online
db2 rollforward db sample to 1998-04-03-14.21.56 and stop tablespace(TBS2, TBS3) online
Two rollforward operations cannot be run concurrently. The second command can only be invoked after the first rollforward operation completes successfully.

Example 4

After restoring the database, roll forward to a point in time, using OVERFLOW LOG PATH to specify the directory where the user exit saves archived logs:
db2 rollforward db sample to 1998-04-03-14.21.56 and stop overflow log path (/logs)
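The NORETRIEVE option described earlier takes no additional arguments; on a standby database it can be used as in the following sketch, which reuses the sample database from the examples above:

```
db2 rollforward db sample to end of logs noretrieve
```

This rolls forward only through the log files already present on the standby system, without retrieving further archived logs.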
Example 5 (partitioned database environments)

There are three database partitions: 0, 1, and 2. Table space TBS1 is defined on all database partitions, and table space TBS2 is defined on database partitions 0 and 2.
After restoring the database on database partition 1, and TBS1 on database partitions 0 and 2, roll the database forward on database partition 1:
db2 rollforward db sample to end of logs and stop
This returns warning SQL1271 (Database is recovered but one or more table spaces are off-line on database partition(s) 0 and 2.).
db2 rollforward db sample to end of logs
This rolls TBS1 forward on database partitions 0 and 2. The clause TABLESPACE(TBS1) is optional in this case.

Example 6 (partitioned database environments)

After restoring table space TBS1 on database partitions 0 and 2 only, roll TBS1 forward on database partitions 0 and 2:
db2 rollforward db sample to end of logs
This fails, because TBS1 is not ready for rollforward recovery on database partition 1; SQL4906N is returned.
db2 rollforward db sample to end of logs on dbpartitionnums (0, 2) tablespace(TBS1)
This also fails, because TBS1 is not ready for rollforward recovery on database partition 1; all pieces must be rolled forward together. With table space rollforward to a point in time, the database partition clause is not accepted: the rollforward operation must take place on all the database partitions on which the table space resides.

After restoring TBS1 on database partition 1:
db2 rollforward db sample to 1998-04-03-14.21.56 and stop tablespace(TBS1)
This completes successfully.

Example 7 (partitioned database environment)

After restoring a table space on all database partitions, roll forward to point in time 2, but do not specify AND STOP. The rollforward operation is still in progress. Cancel and roll forward to point in time 1:
db2 rollforward db sample to pit2 tablespace(TBS1)
db2 rollforward db sample cancel tablespace(TBS1)
** restore TBS1 on all database partitions **
db2 rollforward db sample to pit1 tablespace(TBS1)
db2 rollforward db sample stop tablespace(TBS1)
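The RECOVER DROPPED TABLE option described earlier follows the same pattern. In this sketch the drop-table-id and the export directory are hypothetical placeholders; obtain the real table ID with the LIST HISTORY command:

```
db2 rollforward db sample to end of logs and stop recover dropped table 000000000000e012 to /home/user/export
```

The recovered table data is written to files in the export directory, from which it can be reloaded.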
Example 8 (partitioned database environment)

Rollforward recover a table space that resides on eight database partitions (3 to 10) listed in the db2nodes.cfg file:
db2 rollforward database dwtest to end of logs tablespace (tssprodt)
This operation to the end of logs (not a point in time) completes successfully. The database partitions on which the table space resides do not have to be specified; the utility defaults to the db2nodes.cfg file.

Example 9 (partitioned database environment)

Rollforward recover six small table spaces that reside on a single-partition database partition group (on database partition 6):
db2 rollforward database dwtest to end of logs on dbpartitionnum (6) tablespace(tsstore, tssbuyer, tsstime, tsswhse, tsslscat, tssvendor)
This operation to the end of logs (not a point in time) completes successfully.

Usage notes:

If restoring from an image that was created during an online backup operation, the specified point in time for the rollforward operation must be later than the time at which the online backup operation completed. If the rollforward operation is stopped before it passes this point, the database is left in rollforward pending state. If a table space is in the process of being rolled forward, it is left in rollforward in progress state.

If one or more table spaces is being rolled forward to a point in time, the rollforward operation must continue at least to the minimum recovery time, which is the last update to the system catalogs for this table space or its tables. The minimum recovery time (in Coordinated Universal Time, or UTC) for a table space can be retrieved using the LIST TABLESPACES SHOW DETAIL command.

Rolling databases forward might require a load recovery using tape devices. If prompted for another tape, you can respond with one of the following:

c   Continue. Continue using the device that generated the warning message (for example, when a new tape has been mounted).
d   Device terminate. Stop using the device that generated the warning message (for example, when there are no more tapes).
t   Terminate. Take all affected table spaces offline, but continue rollforward processing.
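Before a point-in-time table space rollforward, the minimum recovery time mentioned above can be checked with the LIST TABLESPACES command; the database name here is assumed to be the sample database:

```
db2 connect to sample
db2 list tablespaces show detail
```

The minimum recovery time is reported, in UTC, in the detailed output for each table space.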
If the rollforward utility cannot find the next log that it needs, the log name is returned in the SQLCA, and rollforward recovery stops. If no more logs are available, use the STOP option to terminate rollforward recovery. Incomplete transactions are rolled back to ensure that the database or table space is left in a consistent state.

Compatibilities:

For compatibility with versions earlier than Version 8:
v The keyword NODE can be substituted for DBPARTITIONNUM.
v The keyword NODES can be substituted for DBPARTITIONNUMS.
v Point-in-time rollforward is not supported with pre-V9.1 clients, because of the Version 9.1 support for partitioned tables.
Related concepts:
v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference

Related tasks:
v Using rollforward in Data Recovery and High Availability Guide and Reference
RUNCMD
Executes a specified command from the CLP interactive mode command history.

Scope:

This command can only be run within CLP interactive mode. Specifically, it cannot be run from CLP command mode or CLP batch mode.

Authorization:

None

Required connection:

The required connection depends on the command being executed.

Command syntax:
   { RUNCMD | R } [num]

Command parameters:

num
   If num is positive, executes the command corresponding to num in the command history. If num is negative, executes the command corresponding to num, counting backward from the most recent command in the command history. Zero is not a valid value for num. If this parameter is not specified, the most recently run command is executed (this is equivalent to specifying a value of -1 for num).
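To illustrate, a short interactive session might look like the following; the commands in the history are arbitrary examples, and the numbering will differ on your system:

```
db2 => history
1    connect to sample
2    list tables
3    select count(*) from staff
db2 => runcmd 2
db2 => r -1
```

Here RUNCMD 2 re-executes LIST TABLES, and R -1 (R is the short form of RUNCMD) re-executes the most recent command in the history.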
Usage notes:

1. Typically, you would execute the HISTORY command to view a list of recently executed commands, and then execute RUNCMD to run a command from this list.
2. The RUNCMD command is not recorded in the command history, but the command executed by RUNCMD is recorded there.

Related reference:
v EDIT on page 432
v HISTORY on page 493
RUNSTATS
Updates statistics about the characteristics of a table, its associated indexes, or statistical views. These characteristics include the number of records, the number of pages, and the average record length. The optimizer uses these statistics when determining access paths to the data.

For a table, this utility should be called when the table has had many updates, or after reorganizing the table. For a statistical view, this utility should be called when changes to underlying tables have substantially affected the rows returned by the view. The view must have been previously enabled for use in query optimization using the ALTER VIEW statement.

Scope:

This command can be issued from any database partition in the db2nodes.cfg file. It can be used to update the catalogs on the catalog database partition.

For tables, this command collects statistics for a table on the database partition from which it is invoked. If the table does not exist on that database partition, the first database partition in the database partition group is selected. For views, this command collects statistics using data from tables on all participating database partitions.

Authorization:

For tables, one of the following:
v sysadm
v sysctrl
v sysmaint
v dbadm
v CONTROL privilege on the table
v LOAD authority

You do not need any explicit privilege to use this command on any declared global temporary table that exists within its connection.

For statistical views, one of the following:
v sysadm
v sysctrl
v sysmaint
v dbadm
v CONTROL privilege on the statistical view

In addition, you need appropriate privileges to access rows from the statistical view. Specifically, for each table, statistical view, or nickname referenced in the statistical view definition, the user must have one of the following privileges:
v sysadm or dbadm
v CONTROL
v SELECT
Required connection:

Database

Command syntax:
In the following summary, [ ] denotes optional items, { a | b } denotes alternatives, and ... denotes repetition.

   RUNSTATS ON TABLE object-name
      { USE PROFILE | [Statistics Options] }
      [ UTIL_IMPACT_PRIORITY [priority] ]

   Statistics Options:
      [ { FOR Index Clause | [Column Stats Clause] [AND Index Clause] } ]
      [ EXCLUDING XML COLUMNS ]
      [ { ALLOW WRITE ACCESS | ALLOW READ ACCESS } ]
      [ TABLESAMPLE { BERNOULLI | SYSTEM } ( numeric-literal )
           [ REPEATABLE ( integer-literal ) ] ]
      [ Profile Options ]

   Profile Options:
      { SET PROFILE NONE | SET PROFILE [ONLY] | UPDATE PROFILE [ONLY] }

   Index Clause:
      [ [SAMPLED] DETAILED ] { INDEXES ALL | { INDEXES | INDEX } index-name [, index-name]... }

   Column Stats Clause:
      [ On Cols Clause ] [ WITH DISTRIBUTION [On Dist Cols Clause] ]

   On Cols Clause and On Dist Cols Clause:
      ON { ALL COLUMNS
         | KEY COLUMNS
         | [ { ALL | KEY } COLUMNS AND ] COLUMNS ( Column Option [, Column Option]... ) }

   In the On Dist Cols Clause, each Column Option, and the DEFAULT keyword, can additionally take the Frequency Option (NUM_FREQVALUES integer) and the Quantile Option (NUM_QUANTILES integer).

   Column Option:
      { column-name | ( column-name [, column-name]... ) } [ LIKE STATISTICS ]
Command parameters:

object-name
   Identifies the table or statistical view on which statistics are to be collected. It must not be a hierarchy table. For typed tables, object-name must be the name of the root table of the table hierarchy. The fully qualified name or alias, in the form schema.object-name, must be used. The schema is the user name under which the table was created.

index-name
   Identifies an existing index defined on the table. The fully qualified name, in the form schema.index-name, must be used. This option cannot be used for views.

USE PROFILE
   Allows RUNSTATS to employ a previously stored statistics profile to gather statistics for a table or statistical view. The statistics profile is created using the SET PROFILE options and is updated using the UPDATE PROFILE options.

FOR INDEXES
   Collects and updates statistics for the indexes only. If no table statistics have previously been collected on the table, basic table statistics are also collected. These basic statistics do not include any distribution statistics. This option cannot be used for views.

AND INDEXES
   Collects and updates statistics for both the table and the indexes. This option cannot be used for views.

DETAILED
   Calculates extended index statistics: the CLUSTERFACTOR and PAGE_FETCH_PAIRS statistics that are gathered for relatively large indexes. This option cannot be used for views.

SAMPLED
   This option, when used with the DETAILED option, allows RUNSTATS to employ a CPU sampling technique when compiling the extended index statistics. If the option is not specified, every entry in the index is examined to compute the extended index statistics. This option cannot be used for views.

ON ALL COLUMNS
   Statistics collection can be done on some columns and not on others. Columns such as LONG VARCHAR or CLOB columns are ineligible. To collect statistics on all eligible columns, use the ON ALL COLUMNS clause. Columns can be specified either for basic statistics collection (on-cols-clause) or in conjunction with the WITH DISTRIBUTION clause (on-dist-cols-clause). The ON ALL COLUMNS specification is the default if neither column-specific clause is specified.
   If it is specified in the on-cols-clause, all columns will have only basic column statistics collected unless specific columns are chosen as part of the
WITH DISTRIBUTION clause. Those columns specified as part of the WITH DISTRIBUTION clause will have both basic and distribution statistics collected. If WITH DISTRIBUTION ON ALL COLUMNS is specified, both basic and distribution statistics are collected for all eligible columns; anything specified in the on-cols-clause is then redundant.

ON COLUMNS
   Allows you to specify a list of columns for which to collect statistics. If you specify a group of columns, the number of distinct values for the group is collected. When you run RUNSTATS on a table without gathering index statistics, and specify a subset of columns for which statistics are to be gathered, then:
   1. Statistics for columns not specified in the RUNSTATS command, but which are the first column in an index, are NOT reset.
   2. Statistics for all other columns not specified in the RUNSTATS command are reset.
   This clause can be used in the on-cols-clause and the on-dist-cols-clause. Collecting distribution statistics for a group of columns is not currently supported. If XML type columns are specified in a column group, the XML type columns are ignored for the purpose of collecting distinct values for the group; however, basic XML column statistics are collected for the XML type columns in the column group.

EXCLUDING XML COLUMNS
   Allows you to omit all XML type columns from statistics collection. This clause facilitates the collection of statistics on non-XML columns, because the inclusion of XML data can require greater system resources. The EXCLUDING XML COLUMNS clause takes precedence over other clauses that specify XML columns for statistics collection. For example, if you use the EXCLUDING XML COLUMNS clause, and you also specify XML type columns with the ON COLUMNS clause or use the ON ALL COLUMNS clause, all XML type columns will be ignored during statistics collection.

ON KEY COLUMNS
   Instead of listing specific columns, you can collect statistics on the columns that make up all the indexes defined on the table. The assumption is that critical columns in queries are also those used to create indexes on the table. If there are no indexes on the table, this is treated as an empty list and no column statistics are collected. This clause can be used in the on-cols-clause or the on-dist-cols-clause. It is redundant in the on-cols-clause if specified in both clauses, since the WITH DISTRIBUTION clause already specifies collection of both basic and distribution statistics. XML type columns are by definition not key columns and are not included in statistics collection by the ON KEY COLUMNS clause. This option cannot be used for views.

column-name
   Name of a column in the table or statistical view. If you specify the name of a column that is ineligible for statistics collection, such as a nonexistent column or a misspelled column name, error (-205) is returned. Two lists of columns can be specified: one without distribution and one with distribution. If the column is specified in the list that is not associated with the WITH DISTRIBUTION clause, only basic column statistics will be collected.
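To illustrate the column group syntax, here is a hedged sketch using a hypothetical table: distinct-value statistics are gathered for the (region, product) group, and basic statistics for the amount column:

```
db2 runstats on table db2user.sales on columns ((region, product), amount)
```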
If the column appears in both lists, distribution statistics will be collected (unless NUM_FREQVALUES and NUM_QUANTILES are set to zero).

NUM_FREQVALUES
   Defines the maximum number of frequency values to collect. It can be specified for an individual column in the ON COLUMNS clause. If the value is not specified for an individual column, the frequency limit value is picked up from that specified in the DEFAULT clause. If it is not specified there either, the maximum number of frequency values to be collected is the value of the num_freqvalues database configuration parameter.

NUM_QUANTILES
   Defines the maximum number of distribution quantile values to collect. It can be specified for an individual column in the ON COLUMNS clause. If the value is not specified for an individual column, the quantile limit value is picked up from that specified in the DEFAULT clause. If it is not specified there either, the maximum number of quantile values to be collected is the value of the num_quantiles database configuration parameter.

WITH DISTRIBUTION
   Specifies that both basic statistics and distribution statistics are to be collected on the columns. If the ON COLUMNS clause is not specified, distribution statistics are collected on all the columns of the table or statistical view (excluding ineligible columns such as CLOB and LONG VARCHAR). If the ON COLUMNS clause is specified, distribution statistics are collected only on the column list provided (excluding those ineligible for statistics collection). If the clause is not specified, only basic statistics are collected. Collection of distribution statistics on column groups is currently not supported; distribution statistics are not collected when column groups are specified in the WITH DISTRIBUTION ON COLUMNS clause.

DEFAULT
   If NUM_FREQVALUES or NUM_QUANTILES are specified here, these values determine the maximum number of frequency and quantile statistics to be collected for columns for which they are not specified individually in the ON COLUMNS clause. If the DEFAULT clause is not specified, the values in the corresponding database configuration parameters are used.

LIKE STATISTICS
   When this option is specified, additional column statistics are collected: the SUB_COUNT and SUB_DELIM_LENGTH statistics in SYSSTAT.COLUMNS. They are collected for string columns only, and are used by the query optimizer to improve the selectivity estimates for predicates of the type "column LIKE %xyz" and "column LIKE %xyz%".

ALLOW WRITE ACCESS
   Specifies that other users can read from and write to the table(s) while statistics are calculated. For statistical views, these are the base tables referenced in the view definition. The ALLOW WRITE ACCESS option is not recommended for tables that will have many concurrent inserts, updates, or deletes. The RUNSTATS command first collects table statistics and then index statistics; changes in the table's state between the time that the table and index statistics are collected might result in inconsistencies. Although having up-to-date statistics is important for the optimization of queries, it
is also important to have consistent statistics. Therefore, statistics should be collected at a time when inserts, updates, or deletes are at a minimum.

ALLOW READ ACCESS
   Specifies that other users can have read-only access to the table(s) while statistics are calculated. For statistical views, these are the base tables referenced in the view definition.

TABLESAMPLE BERNOULLI
   This option allows RUNSTATS to collect statistics on a sample of the rows from the table or statistical view. Bernoulli sampling considers each row individually, including that row with probability P/100 (where P is the value of numeric-literal) and excluding it with probability 1 - P/100. Thus, if the numeric-literal evaluated to 10, representing a 10 percent sample, each row would be included with probability 0.1 and excluded with probability 0.9. Unless the optional REPEATABLE clause is specified, each execution of RUNSTATS will usually yield a different sample of the table. All data pages are retrieved through a table scan, but only the percentage of rows specified through the numeric-literal parameter is used for the statistics collection.

TABLESAMPLE SYSTEM
   This option allows RUNSTATS to collect statistics on a sample of the data pages from the table(s). System sampling considers each page individually, including that page with probability P/100 (where P is the value of numeric-literal) and excluding it with probability 1 - P/100. Unless the optional REPEATABLE clause is specified, each execution of RUNSTATS will usually yield a different sample of the table. The size of the sample is controlled by the numeric-literal parameter in parentheses, representing an approximate percentage P of the table to be returned. Only the specified percentage of data pages is retrieved and used for the statistics collection. On statistical views, system sampling is restricted to a specific class of views: those that either access a single base table or nickname, or that access multiple base tables that are joined via referential-integrity relationships. In either case, there must not be any local predicates in the view definition. If system sampling is specified on a view that cannot support such sampling, error SQL20288N is returned.

REPEATABLE (integer-literal)
   Adding the REPEATABLE clause to the TABLESAMPLE clause ensures that repeated executions of RUNSTATS return the same sample. The integer-literal parameter is a non-negative integer representing the seed to be used in sampling; passing a negative seed results in error SQL1197N. The sample set might still vary between repeatable RUNSTATS invocations if activity against the table or statistical view has changed its data since the last time TABLESAMPLE REPEATABLE was run. The sampling method, as specified by the BERNOULLI or SYSTEM keyword, must also be the same to ensure consistent results.

numeric-literal
   Specifies the size of the sample to be obtained, as a percentage P. This value must be a positive number less than or equal to 100, and can be a fractional value between 0 and 1. For example, a value of 0.01 represents one one-hundredth of a percent, so that 1 row in 10,000 would be sampled, on average. A value of 0 or 100 is treated by the DB2 database system as if sampling was not specified, regardless of
whether TABLESAMPLE BERNOULLI or TABLESAMPLE SYSTEM is specified. A value greater than 100 or less than 0 is treated by DB2 as an error (SQL1197N).

SET PROFILE NONE
   Specifies that no statistics profile will be set for this RUNSTATS invocation.

SET PROFILE
   Allows RUNSTATS to generate and store a specific statistics profile in the system catalog tables, and executes the RUNSTATS command options to gather statistics.

SET PROFILE ONLY
   Allows RUNSTATS to generate and store a specific statistics profile in the system catalog tables without running the RUNSTATS command options.

UPDATE PROFILE
   Allows RUNSTATS to modify an existing statistics profile in the system catalog tables, and runs the RUNSTATS command options of the updated statistics profile to gather statistics.

UPDATE PROFILE ONLY
   Allows RUNSTATS to modify an existing statistics profile in the system catalog tables without running the RUNSTATS command options of the updated statistics profile.

UTIL_IMPACT_PRIORITY priority
   Specifies that RUNSTATS will be throttled at the level specified by priority, a number in the range 1 to 100, with 100 representing the highest priority and 1 the lowest. The priority specifies the amount of throttling to which the utility is subjected. All utilities at the same priority undergo the same amount of throttling, and utilities at lower priorities are throttled more than those at higher priorities. If priority is not specified, RUNSTATS has the default priority of 50. Omitting the UTIL_IMPACT_PRIORITY keyword invokes the RUNSTATS utility without throttling support. If the UTIL_IMPACT_PRIORITY keyword is specified but the util_impact_lim configuration parameter is set to 100, the utility runs unthrottled. This option cannot be used for views.

In a partitioned database, when used on tables, the RUNSTATS command collects the statistics on only a single database partition. If the database partition from which the RUNSTATS command is executed has a partition of the table, the command executes on that database partition. Otherwise, the command executes on the first database partition in the database partition group across which the table is partitioned.

Usage notes:

1. When there are detached partitions on a partitioned table, index keys that still belong to detached data partitions which require cleanup will not be counted as part of the keys in the statistics. These keys are not counted because they are invisible and no longer part of the table. They will eventually be removed from the index by asynchronous index cleanup. As a result, statistics collected before asynchronous index cleanup is run will be misleading. If the RUNSTATS command is issued before asynchronous index cleanup completes, it will likely generate a false alarm for index reorganization or index cleanup based on the inaccurate statistics. Once asynchronous index
cleanup is run, all the index keys that still belong to detached data partitions which require cleanup will be removed, and this may eliminate the need for index reorganization. For partitioned tables, you are encouraged to issue the RUNSTATS command after an asynchronous index cleanup has completed, in order to generate accurate index statistics in the presence of detached data partitions. To determine whether there are detached data partitions in the table, check the status field in the SYSDATAPARTITIONS table and look for the value I (index cleanup) or D (detached with dependent MQT).
2. It is recommended to run the RUNSTATS command:
v On tables that have been modified considerably (for example, if a large number of updates have been made, if a significant amount of data has been inserted or deleted, or if LOAD has been done without the STATISTICS option).
v On tables that have been reorganized (using REORG or REDISTRIBUTE DATABASE PARTITION GROUP).
v On tables that have been row compressed.
v When a new index has been created.
v Before binding applications whose performance is critical.
v When the prefetch quantity is changed.
v On statistical views whose underlying tables have been modified substantially so as to change the rows that are returned by the view.
v After LOAD has been executed with the STATISTICS option, use the RUNSTATS utility to collect statistics on XML columns. Statistics for XML columns are never collected during LOAD, even when LOAD is executed with the STATISTICS option. When RUNSTATS is used to collect statistics for XML columns only, existing statistics for non-XML columns that have been collected by LOAD or a previous execution of the RUNSTATS utility are retained. In the case where statistics on some XML columns have been collected previously, the previously collected statistics for an XML column will either be dropped, if no statistics on that XML column are collected by the current command, or be replaced, if statistics on that XML column are collected by the current command.
3. The options chosen must depend on the specific table and the application. In general:
v If the table is a very critical table in critical queries, is relatively small, or does not change too much, and there is not too much activity on the system itself, it might be worth collecting statistics in as much detail as possible.
v If the time to collect statistics is limited, if the table is relatively large, or if the table is updated frequently, it might be beneficial to execute RUNSTATS limited to the set of columns that are used in predicates. This way, you will be able to execute the RUNSTATS command more often.
v If time to collect statistics is very limited and the effort to tailor the RUNSTATS command on a table-by-table basis is a major issue, consider collecting statistics for the KEY columns only. It is assumed that the index contains the set of columns that are critical to the table and are most likely to appear in predicates.
v If time to collect statistics is very limited and table statistics are to be gathered, consider using the TABLESAMPLE option to collect statistics on a subset of the table data.
v If there are many indexes on the table and DETAILED (extended) information on the indexes might improve access plans, consider the SAMPLED option to reduce the time it takes to collect statistics. Regardless of whether you use the SAMPLED option, collecting detailed statistics on indexes is time consuming. Do not collect these statistics unless you are sure that they will be useful for your queries.
v If there is skew in certain columns and predicates of the type "column = constant", it might be beneficial to specify a larger NUM_FREQVALUES value for those columns.
v Collect distribution statistics for all columns that are used in equality predicates and for which the distribution of values might be skewed.
v For columns that have range predicates (for example, "column >= constant" or "column BETWEEN constant1 AND constant2"), or predicates of the type "column LIKE %xyz", it might be beneficial to specify a larger NUM_QUANTILES value.
v If storage space is a concern and you cannot afford much time for collecting statistics, do not specify high NUM_FREQVALUES or NUM_QUANTILES values for columns that are not used in predicates.
v If index statistics are requested, and statistics have never been run on the table containing the index, statistics on both the table and indexes are calculated.
v If statistics for XML columns in the table are not required, the EXCLUDING XML COLUMNS option can be used to exclude all XML columns. This option takes precedence over all other clauses that specify XML columns for statistics collection.
4. After the command is run, note the following:
v A COMMIT should be issued to release the locks.
v To allow new access plans to be generated, the packages that reference the target table must be rebound.
v Executing the command on portions of the table could result in inconsistencies as a result of activity on the table since the command was last issued. In this case a warning message is returned. Issuing RUNSTATS on the table only might make table and index level statistics inconsistent. For example, you might collect index level statistics on a table and later delete a significant number of rows from the table. If you then issue RUNSTATS on the table only, the table cardinality might be less than FIRSTKEYCARD, which is an inconsistency. In the same way, if you collect statistics on a new index when you create it, the table level statistics might be inconsistent.
5. The RUNSTATS command will drop previously collected distribution statistics if table statistics are requested. For example, RUNSTATS ON TABLE, or RUNSTATS ON TABLE ... AND INDEXES ALL, will cause previously collected distribution statistics to be dropped. If the command is run on indexes only, previously collected distribution statistics are retained. For example, RUNSTATS ON TABLE ... FOR INDEXES ALL will cause the previously collected distribution statistics to be retained. If the RUNSTATS command is run on XML columns only, previously collected basic column statistics and distribution statistics are retained. In the case where statistics on some XML columns have been collected previously, the previously collected statistics for an XML column will either be dropped, if no statistics on that XML column are collected by the current command, or be replaced, if statistics on that XML column are collected by the current command.
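The TABLESAMPLE behavior described in the notes above can be illustrated with a hedged sketch (the table name is hypothetical): the following collects distribution statistics from a repeatable 10 percent Bernoulli row sample:

```
db2 runstats on table db2user.sales with distribution tablesample bernoulli (10) repeatable (42)
```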
6. For range-clustered tables, there is a special system-generated index in the catalog tables that represents the range ordering property of range-clustered tables. When statistics are collected on this type of table, if the table is to be included as part of the statistics collection, statistics are also collected for the system-generated index. The statistics reflect the fast access of range lookups by representing the index as a two-level index with as many pages as the base data table, and having the base data clustered perfectly along the index order.
7. In the on-dist-cols clause of the command syntax, the Frequency Option and Quantile Option parameters are currently not supported for column GROUPS. These options are supported for single columns.
8. There are three prefetch statistics that cannot be computed when working in DMS mode. When looking at the index statistics in the index catalogs, you will see a -1 value for the following statistics:
v AVERAGE_SEQUENCE_FETCH_PAGES
v AVERAGE_SEQUENCE_FETCH_GAP
v AVERAGE_RANDOM_FETCH_PAGES
9. RUNSTATS sampling through TABLESAMPLE occurs only on table data pages, not on index pages. When index statistics as well as sampling are requested, all the index pages are scanned for statistics collection; TABLESAMPLE applies only to the collection of table statistics. However, a more efficient collection of detailed index statistics is available through the SAMPLED DETAILED option. This is a different method of sampling than that employed by TABLESAMPLE and applies only to the detailed set of index statistics.
10. A statistics profile can be set or updated for the table or statistical view specified in the RUNSTATS command, by using the SET PROFILE or UPDATE PROFILE options. The statistics profile is stored in a visible string format, which represents the RUNSTATS command, in the STATISTICS_PROFILE column of the SYSIBM.SYSTABLES system catalog table.
11. Statistics collection on XML type columns is governed by two DB2 database system registry values: DB2_XML_RUNSTATS_PATHID_K and DB2_XML_RUNSTATS_PATHVALUE_K. These two parameters are similar to the NUM_FREQVALUES parameter in that they specify the number of frequency values to collect. If not set, a default of 200 is used for both parameters.
12. RUNSTATS acquires an IX table lock on SYSTABLES and a U lock on the row for the table on which statistics are being gathered at the beginning of RUNSTATS. Operations can still read from SYSTABLES, including the row with the U lock. Write operations are also possible, provided they do not occur against the row with the U lock. However, another reader or writer will not be able to acquire an S lock on SYSTABLES because of the RUNSTATS IX lock.
Examples:
1. Collect statistics on the table only, on all columns without distribution statistics:
RUNSTATS ON TABLE db2user.employee
2. Collect statistics on the table only, on columns empid and empname with distribution statistics:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION ON COLUMNS (empid, empname)
3. Collect statistics on the table only, on all columns, with distribution statistics, using a specified NUM_FREQVALUES limit for the table while picking the NUM_QUANTILES value from the configuration setting:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 50
6. Collect basic statistics on the table and all indexes using sampling for the detailed index statistics collection:
RUNSTATS ON TABLE db2user.employee AND SAMPLED DETAILED INDEXES ALL
7. Collect statistics on table, with distribution statistics on columns empid, empname and empdept and the two indexes Xempid and Xempname. Distribution statistics limits are set individually for empdept, while the other two columns use a common default:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION ON COLUMNS (empid, empname, empdept NUM_FREQVALUES 50 NUM_QUANTILES 100) DEFAULT NUM_FREQVALUES 5 NUM_QUANTILES 10 AND INDEXES db2user.Xempid, db2user.Xempname
9. Collect statistics on all indexes and all columns, with distribution statistics on only one column. Consider T1 containing columns c1, c2, ..., c8. Either of the following commands can be used:
RUNSTATS ON TABLE db2user.T1 WITH DISTRIBUTION ON COLUMNS (c1, c2, c3 NUM_FREQVALUES 20 NUM_QUANTILES 40, c4, c5, c6, c7, c8) DEFAULT NUM_FREQVALUES 0 NUM_QUANTILES 0 AND INDEXES ALL
RUNSTATS ON TABLE db2user.T1 WITH DISTRIBUTION ON COLUMNS (c3 NUM_FREQVALUES 20 NUM_QUANTILES 40) AND INDEXES ALL
10. Collect statistics on table T1 for the individual columns c1 and c5 as well as on the column combinations (c2, c3) and (c2, c4). Multi-column cardinality is very useful to the query optimizer when it estimates filter factors for predicates on columns in which the data is correlated.
RUNSTATS ON TABLE db2user.T1 ON COLUMNS (c1, (c2, c3), (c2, c4), c5)
11. Collect statistics on table T1 for the individual columns c1 and c2. For column c1 also collect the LIKE predicate statistics.
RUNSTATS ON TABLE db2user.T1 ON COLUMNS (c1 LIKE STATISTICS, c2)
12. Register a statistics profile to collect statistics on the table only, on all columns with distribution statistics, using a specified NUM_FREQVALUES limit for the table while picking the NUM_QUANTILES value from the configuration setting. The command also updates the statistics as specified.
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 50 SET PROFILE
13. Register a statistics profile to collect statistics on the table only, on all columns with distribution statistics, using a specified NUM_FREQVALUES limit for the table while picking the NUM_QUANTILES value from the configuration setting. Statistics are not collected.
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 50 SET PROFILE ONLY
14. Modify the previously registered statistics profile by changing the NUM_FREQVALUES value from 50 to 30. The command also updates the statistics as specified.
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 30 UPDATE PROFILE
15. Modify the previously registered statistics profile by changing the NUM_FREQVALUES value from 50 to 30. Statistics are not collected.
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 30 UPDATE PROFILE ONLY
16. Modify the previously registered statistics profile by adding column empl_address and column group (empl_title, empl_salary) options. The command also updates the statistics as specified.
RUNSTATS ON TABLE db2user.employee ON COLUMNS (empl_address, (empl_title, empl_salary)) UPDATE PROFILE
17. Modify the previously registered statistics profile by adding column empl_address and column group (empl_title, empl_salary) options. Statistics are not collected.
RUNSTATS ON TABLE db2user.employee ON COLUMNS (empl_address, (empl_title, empl_salary)) UPDATE PROFILE ONLY
18. Collect statistics on a table using the options recorded in the statistics profile for that table:
RUNSTATS ON TABLE db2user.employee USE PROFILE
19. Query the RUNSTATS command options corresponding to the previously registered statistics profile stored in the catalogs of the table:
SELECT STATISTICS_PROFILE FROM SYSIBM.SYSTABLES WHERE NAME = 'EMPLOYEE'
21. To control the sample set on which statistics will be collected, and to be able to use the same sample set repeatedly, specify the REPEATABLE option as follows:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION TABLESAMPLE BERNOULLI(30) REPEATABLE(4196)
Issuing the same statement as above will result in the same set of statistics as long as the data has not changed in the interim.
22. Collect index statistics as well as table statistics on 1.5 percent of the data pages. Only table data pages, not index pages, are sampled. In this example 1.5 percent of table data pages are used for the collection of table statistics, while for index statistics all the index pages are used:
RUNSTATS ON TABLE db2user.employee AND INDEXES ALL TABLESAMPLE SYSTEM(1.5)
23. Collect statistics for a statistical view, on all columns, without distribution statistics:
RUNSTATS ON TABLE salesdb.product_sales_view
24. Collect statistics for a statistical view, with distribution statistics on the columns category, type and product_key. Distribution statistics limits are set for the category column, while the other columns use a common default:
RUNSTATS ON TABLE salesdb.product_sales_view WITH DISTRIBUTION ON COLUMNS (category NUM_FREQVALUES 100 NUM_QUANTILES 100, type, product_key) DEFAULT NUM_FREQVALUES 50 NUM_QUANTILES 50
25. Collect statistics, including distribution statistics, on 10 percent of the rows using row level sampling:
RUNSTATS ON TABLE db2user.daily_sales WITH DISTRIBUTION TABLESAMPLE BERNOULLI (10)
26. Collect statistics, including distribution statistics, on 2.5 percent of the rows using data page level sampling. Additionally, specify the repeated use of the same sample set. For this command to succeed, the query must be such that the DB2 database system can successfully push data page sampling down to one or more tables. Otherwise, an error (SQL20288N) is raised.
RUNSTATS ON TABLE db2user.daily_sales WITH DISTRIBUTION TABLESAMPLE SYSTEM (2.5)
27. Register a statistics profile to collect statistics on the view and on all columns with distribution statistics as specified:
RUNSTATS ON TABLE salesdb.product_sales_view WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 50 NUM_QUANTILES 50 SET PROFILE
28. Modify the previously registered statistics profile. This command also updates the statistics as specified:
RUNSTATS ON TABLE salesdb.product_sales_view WITH DISTRIBUTION DEFAULT NUM_FREQVALUES 25 NUM_QUANTILES 25 UPDATE PROFILE
Related concepts:
v Automatic statistics collection in Performance Guide
v Collecting statistics on a sample of the table data in Performance Guide
v Collecting statistics using a statistics profile in Performance Guide
Related tasks:
v Collecting catalog statistics in Performance Guide
Related reference:
v ADMIN_COPY_SCHEMA procedure - Copy a specific schema and its objects in Administrative SQL Routines and Views
v db2Runstats API - Update statistics for tables and indexes in Administrative API Reference
v RUNSTATS command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
SET CLIENT
Specifies connection settings for the back-end process.
Authorization:
None
Required connection:
None
Command syntax:
SET CLIENT CONNECT 1 2 DISCONNECT EXPLICIT CONDITIONAL AUTOMATIC
MAX_NETBIOS_CONNECTIONS value
SQLRULES
DB2 STD
SYNCPOINT
CONNECT_DBPARTITIONNUM
db-partition-number CATALOG_DBPARTITIONNUM
ATTACH_DBPARTITIONNUM db-partition-number
Command parameters:
CONNECT 1
Specifies that a CONNECT statement is to be processed as a type 1 CONNECT.
CONNECT 2
Specifies that a CONNECT statement is to be processed as a type 2 CONNECT.
DISCONNECT
EXPLICIT
Specifies that only database connections that have been explicitly marked for release by the RELEASE statement are to be disconnected at commit.
CONDITIONAL
Specifies that the database connections that have been marked RELEASE or have no open WITH HOLD cursors are to be disconnected at commit.
AUTOMATIC
Specifies that all database connections are to be disconnected at commit.
MAX_NETBIOS_CONNECTIONS value
Specifies the maximum number of concurrent connections that can be made in an application using a NetBIOS adapter. The maximum value is 254. This parameter must be set before the first NetBIOS connection is made. Changes subsequent to the first connection are ignored.
SQLRULES
DB2
Specifies that a type 2 CONNECT is to be processed according to the DB2 rules.
STD
Specifies that a type 2 CONNECT is to be processed according to the Standard (STD) rules based on ISO/ANS SQL92.
SYNCPOINT
Specifies how commits or rollbacks are to be coordinated among multiple database connections. This command parameter is ignored and is only included here for backward compatibility.
ONEPHASE
Specifies that no transaction manager (TM) is to be used to perform a two-phase commit. A one-phase commit is to be used to commit the work done by each database in multiple database transactions.
TWOPHASE
Specifies that the TM is required to coordinate two-phase commits among those databases that support this protocol.
NONE
Specifies that no TM is to be used to perform a two-phase commit, and does not enforce single updater, multiple reader. A COMMIT is sent to each participating database. The application is responsible for recovery if any of the commits fail.
CONNECT_DBPARTITIONNUM db-partition-number (partitioned database environment only)
Specifies the database partition to which a connect is to be made. A value between zero and 999, inclusive. Overrides the value of the environment variable DB2NODE.
CATALOG_DBPARTITIONNUM
Specifying this value permits the client to connect to the catalog database partition of the database without knowing the identity of that database partition in advance.
ATTACH_DBPARTITIONNUM db-partition-number (partitioned database environment only)
Specifies the database partition to which an attach is to be made. A value between zero and 999, inclusive. Overrides the value of the environment variable DB2NODE. For example, if database partitions 1, 2, and 3 are defined, the client only needs to be able to access one of these database partitions. If only database partition 1 containing databases has been cataloged, and this parameter is set to 3, then the next attach attempt will result in an attachment at database partition 3, after an initial attachment at database partition 1.
Examples:
To set specific values:
db2 set client connect 2 disconnect automatic sqlrules std syncpoint twophase
The connection settings revert to default values after the TERMINATE command is issued.
Usage notes:
SET CLIENT cannot be issued if one or more connections are active. If SET CLIENT is successful, the connections in the subsequent units of work will use the connection settings specified. If SET CLIENT is unsuccessful, the connection settings of the back-end process are unchanged.
Compatibilities:
For compatibility with versions earlier than Version 8:
v The keyword CONNECT_NODE can be substituted for CONNECT_DBPARTITIONNUM.
v The keyword CATALOG_NODE can be substituted for CATALOG_DBPARTITIONNUM.
v The keyword ATTACH_NODE can be substituted for ATTACH_DBPARTITIONNUM.
Related reference:
v sqlesetc API - Set client connection settings in Administrative API Reference
v TERMINATE on page 744
v QUERY CLIENT on page 611
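To verify the connection settings currently in effect for the back-end process, the QUERY CLIENT command listed under Related reference can be used:
db2 query client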
Command parameters:
FOR ALL
The specified degree will apply to all applications.
application-handle
Specifies the agent to which the new degree applies. List the values using the LIST APPLICATIONS command.
TO degree
The maximum run time degree of intra-partition parallelism.
Examples:
The following example sets the maximum run time degree of parallelism for two users, with application-handle values of 41408 and 55458, to 4:
db2 SET RUNTIME DEGREE FOR ( 41408, 55458 ) TO 4
Usage notes: This command provides a mechanism to modify the maximum degree of parallelism for active applications. It can be used to override the value that was determined at SQL statement compilation time. The run time degree of intra-partition parallelism specifies the maximum number of parallel operations that will be used when the statement is executed. The degree of intra-partition parallelism for an SQL statement can be specified at statement
Chapter 3. CLP Commands
container-string
number-of-pages
Command parameters:
FOR tablespace-id
An integer that uniquely represents a table space used by the database being restored.
REPLAY ROLLFORWARD CONTAINER OPERATIONS
Specifies that any ALTER TABLESPACE operation issued against this table space since the database was backed up is to be redone during a subsequent roll forward of the database.
IGNORE ROLLFORWARD CONTAINER OPERATIONS
Specifies that ALTER TABLESPACE operations in the log are to be ignored when performing a roll forward.
USING PATH container-string
For an SMS table space, identifies one or more containers that will belong to the table space and into which the table space data will be stored. It is
Command parameters:
ON device
Specifies a valid tape device name. The default value is \\.\TAPE0.
TO position
Specifies the mark at which the tape is to be positioned. DB2 for Windows writes a tape mark after every backup image. A value of 1 specifies the first position, 2 specifies the second position, and so on. If the tape is positioned at tape mark 1, for example, archive 2 is positioned to be restored.
Related reference:
v ADMIN_CMD procedure - Run administrative commands in Administrative SQL Routines and Views
v SET TAPE POSITION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
v INITIALIZE TAPE on page 511
v REWIND TAPE on page 690
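For example, to position the tape in the default device at the second tape mark, consistent with the parameter descriptions above:
db2 set tape position to 2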
SET UTIL_IMPACT_PRIORITY
Changes the impact setting for a running utility. Using this command, you can:
v throttle a utility that was invoked in unthrottled mode
v unthrottle a throttled utility (disable throttling)
v reprioritize a throttled utility (useful if running multiple simultaneous throttled utilities)
Scope:
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
Instance.
Command syntax:
SET UTIL_IMPACT_PRIORITY FOR utility-id TO priority
Command parameters:
utility-id
ID of the utility whose impact setting will be updated. IDs of running utilities can be obtained with the LIST UTILITIES command.
TO priority
Specifies an instance-level limit on the impact associated with running a utility. A value of 100 represents the highest priority and 1 represents the lowest priority. Setting priority to 0 will force a throttled utility to continue unthrottled. Setting priority to a non-zero value will force an unthrottled utility to continue in throttled mode.
Examples:
The following example unthrottles the utility with ID 2.
SET UTIL_IMPACT_PRIORITY FOR 2 TO 0
The following example throttles the utility with ID 3 to priority 10. If the priority was 0 before the change then a previously unthrottled utility is now throttled. If the utility was previously throttled (priority had been set to a value greater than zero), then the utility has been reprioritized.
SET UTIL_IMPACT_PRIORITY FOR 3 TO 10
Usage notes:
Throttling requires having an impact policy defined by setting the util_impact_lim configuration parameter.
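For example, an impact policy can be defined by setting util_impact_lim in the database manager configuration (the value 10 is only an illustration):
db2 update dbm cfg using util_impact_lim 10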
Related reference:
v LIST UTILITIES on page 555
v util_impact_lim - Instance impact policy configuration parameter in Performance Guide
SET WRITE
The SET WRITE command allows a user to suspend I/O writes or to resume I/O writes for a database. Typical use of this command is for splitting a mirrored database. This type of mirroring is achieved through a disk storage system. This new state, SUSPEND_WRITE, is visible from the Snapshot Monitor. All table spaces must be in a NORMAL state for the command to execute successfully. If any one table space is in a state other than NORMAL, the command fails.
Scope:
This command only affects the database partition on which it is executed.
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
Database
Command syntax:
SET WRITE SUSPEND RESUME FOR DATABASE DB
Command parameters:
SUSPEND
Suspending I/O writes puts all table spaces into a new state, SUSPEND_WRITE. Writes to the logs are also suspended by this command. All database operations, apart from online backup and restore, should function normally while database writes are suspended. However, some operations can wait while attempting to flush dirty pages from the buffer pool or log buffers to the logs. These operations will resume normally once the database writes are resumed.
RESUME
Resuming I/O writes removes the SUSPEND_WRITE state from all of the table spaces and makes the table spaces available for update.
Usage notes:
It is suggested that I/O writes be resumed from the same connection from which they were suspended. Ensuring that this connection is available to resume I/O writes involves not performing any operations from this connection until database writes are resumed. Otherwise, some operations can wait for I/O writes to be resumed if dirty pages must be flushed from the buffer pool or from log buffers to the logs. Furthermore, subsequent connection attempts might hang if they require flushing dirty pages from the buffer pool to disk. Subsequent connections will complete successfully once database I/O resumes. If your connection attempts are
hanging, and it has become impossible to resume I/O from the connection that you used to suspend I/O, then you will have to run the RESTART DATABASE command with the WRITE RESUME option. When used in this circumstance, the RESTART DATABASE command will resume I/O writes without performing crash recovery. The RESTART DATABASE command with the WRITE RESUME option will only perform crash recovery when you use it after a database crash.
Related concepts:
v High availability through online split mirror and suspended I/O support in Data Recovery and High Availability Guide and Reference
Related tasks:
v Using a split mirror as a backup image in Data Recovery and High Availability Guide and Reference
v Using a split mirror as a standby database in Data Recovery and High Availability Guide and Reference
v Using a split mirror to clone a database in Data Recovery and High Availability Guide and Reference
Related reference:
v db2SetWriteForDB API - Suspend or resume I/O writes for database in Administrative API Reference
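A minimal suspend and resume sequence, issued from the same database connection as suggested in the usage notes above:
db2 set write suspend for database
db2 set write resume for database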
db2start
REMOTE INSTANCE
instancename
remote options
PROFILE profile
DBPARTITIONNUM db-partition-number
start options
start options:
USER username
PASSWORD password
NETNAME netname
db-partition-number
restart options:
HOSTNAME hostname
PORT logical-port
COMPUTER computername
USER username
PASSWORD password
NETNAME netname
Command parameters:
REMOTE [INSTANCE] instancename
Specifies the name of the remote instance you wish to start.
ADMINNODE nodename
With REMOTE, or REMOTE INSTANCE, specifies the name of the administration node.
HOSTNAME hostname
With REMOTE, or REMOTE INSTANCE, specifies the name of the host node.
USER username
With REMOTE, or REMOTE INSTANCE, specifies the name of the user.
USING password
With REMOTE, or REMOTE INSTANCE, and the USER, specifies the password of the user.
ADMIN MODE
Starts the instance in quiesced mode for administration purposes. This is equivalent to the QUIESCE INSTANCE command except in this case the instance is not already up, and therefore there is no need to force the connections off.
Usage notes:
It is not necessary to issue this command on a client node. It is provided for compatibility with older clients, but it has no effect on the database manager.
Once started, the database manager instance runs until the user stops it, even if all application programs that were using it have ended.
If the database manager starts successfully, a successful completion message is sent to the standard output device. If an error occurs, processing stops, and an error message is sent to the standard output device. In a partitioned database environment, messages are returned on the database partition that issued the START DATABASE MANAGER command.
If no parameters are specified in a partitioned database environment, the database manager is started on all parallel nodes using the parameters specified in the database partition configuration file.
If a START DATABASE MANAGER command is in progress, ensure that the applicable database partitions have started before issuing a request to the database.
The db2cshrc file is not supported and cannot be used to define the environment.
or
db2start admin mode user username
or
db2start admin mode group groupname
When adding a new database partition, START DATABASE MANAGER must determine whether or not each database in the instance is enabled for automatic storage. This is done by communicating with the catalog partition for each database. If automatic storage is enabled then the storage path definitions are retrieved as part of that communication. Likewise, if system temporary table spaces are to be created with the database partitions, START DATABASE MANAGER might have to communicate with another database partition server to retrieve the table space definitions for the database partitions that reside on that server. The start_stop_time database manager configuration parameter is used to specify the time, in minutes, by which the other database partition server must respond with the automatic storage and table space definitions. If this time is exceeded, the command fails. If this situation occurs, increase the value of start_stop_time, and reissue the command.
On UNIX platforms, the START DATABASE MANAGER command supports the SIGINT signal. It is issued if CTRL+C is pressed. If this signal occurs, all in-progress startups are interrupted and a message (SQL1044N) is returned from each interrupted database partition to the $HOME/sqllib/log/db2start.timestamp.log error log file. Database partitions that are already started are not affected. If CTRL+C is pressed on a database partition that is starting, db2stop must be issued on that database partition before an attempt is made to start it again.
On Windows operating systems, neither the db2start command nor the NET START command returns warnings if any communication subsystem failed to start. The database manager in a Windows environment is implemented as a service, and does not return an error if the service is started successfully. Be sure to examine the Event Log or the DB2DIAG.LOG file for any errors that might have occurred during the running of db2start.
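If the start_stop_time limit described above is exceeded, the value can be increased and the command reissued (the value of 20 minutes is only an illustration):
db2 update dbm cfg using start_stop_time 20
db2start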
Compatibilities:
For compatibility with versions earlier than Version 8:
v The keywords LIKE NODE can be substituted for LIKE DBPARTITIONNUM.
v The keyword ADDNODE can be substituted for ADD DBPARTITIONNUM.
v The keyword NODENUM can be substituted for DBPARTITIONNUM.
Related reference:
v STOP DATABASE MANAGER on page 736
v ADD DBPARTITIONNUM on page 336
v db2InstanceStart API - Start instance in Administrative API Reference
START HADR
Starts HADR operations for a database.
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
Instance. The command establishes a database connection if one does not exist, and closes the database connection when the command completes.
Command syntax:
START HADR ON DATABASE DB database-alias
Command parameters:
DATABASE database-alias
Identifies the database on which HADR operations are to start.
USER user-name
Identifies the user name under which the HADR operations are to be started.
USING password
The password used to authenticate user-name.
AS PRIMARY
Specifies that HADR primary operations are to be started on the database.
BY FORCE
Specifies that the HADR primary database will not wait for the standby database to connect to it. After a start BY FORCE, the primary database will still accept valid connections from the standby database whenever the standby later becomes available. When BY FORCE is used, the database will perform crash recovery if necessary, regardless of the value of the database configuration parameter AUTORESTART. Other methods of starting a primary database (such as a non-forced START HADR command, the ACTIVATE DATABASE command, or a client connection) will respect the AUTORESTART setting.
Caution: Use the START HADR command with the AS PRIMARY BY FORCE option with caution. If the standby database has been changed to a primary and the original primary database is restarted by issuing the START HADR command with the AS PRIMARY BY FORCE option, both
copies of your database will be operating independently as primaries. (This is sometimes referred to as split brain or dual primary.) In this case, each primary database can accept connections and perform transactions, and neither receives and replays the updates made by the other. As a result, the two copies of the database will become inconsistent with each other.
AS STANDBY
Specifies that HADR standby operations are to be started on the database. The standby database will attempt to connect to the HADR primary database until a connection is successfully established, or until the connection attempt is explicitly rejected by the primary. (The connection might be rejected by the primary database if an HADR configuration parameter is set incorrectly or if the database copies are inconsistent, both conditions for which continuing to retry the connection is not appropriate.)
Usage notes:
The following table shows database behavior in various conditions:
Behavior upon START HADR command with the AS STANDBY option, by database status:
v Inactive standard database: Database starts as a standby database if it is in rollforward-pending mode (which can be the result of a restore or a split mirror) or in rollforward in-progress mode. Otherwise, an error is returned.
v Active standard database: Error message returned.
v Inactive primary database: After a failover, this reintegrates the failed primary into the HADR pair as the new standby database. Some restrictions apply.
v Active primary database: Error message returned.
v Inactive standby database: Starts the database as the standby database.
v Active standby database: Warning message issued.
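For example, to establish an HADR pair for a database (the alias hadrdb is only an illustration), issue the first command on the standby system and then the second on the primary system:
db2 start hadr on db hadrdb as standby
db2 start hadr on db hadrdb as primary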
When issuing the START HADR command, the corresponding error codes might be generated: SQL01767N, SQL01769N, or SQL01770N with a reason code of 98. The reason code indicates that there is no installed license for HADR on the server where the command was issued. To correct the problem, install a valid HADR license using the db2licm command, or install a version of the server that contains a valid HADR license as part of its distribution.
Related tasks:
v Initializing high availability disaster recovery (HADR) in Data Recovery and High Availability Guide and Reference
PROFILE profile
db2stop
Command parameters:
PROFILE profile
Partitioned database systems only. Specifies the name of the profile file that was executed at startup to define the DB2 environment for those database partitions that were started. If a profile for the START DATABASE
Usage notes:
It is not necessary to issue this command on a client node. It is provided for compatibility with older clients, but it has no effect on the database manager.
Once started, the database manager instance runs until the user stops it, even if all application programs that were using it have ended.
If the database manager is stopped, a successful completion message is sent to the standard output device. If an error occurs, processing stops, and an error message is sent to the standard output device.
If the database manager cannot be stopped because application programs are still connected to databases, use the FORCE APPLICATION command to disconnect all users first, or reissue the STOP DATABASE MANAGER command with the FORCE option.
STOP HADR
Stops HADR operations for a database.
Authorization:
One of the following:
v sysadm
v sysctrl
v sysmaint
Required connection:
Instance. The command establishes a database connection if one does not exist, and closes the database connection when the command completes.
Command syntax:
STOP HADR ON DATABASE DB database-alias
Command parameters:
DATABASE database-alias
Identifies the database on which HADR operations are to stop.
USER user-name
Identifies the user name under which the HADR operations are to be stopped.
USING password
The password used to authenticate user-name.
Usage notes:
The following table shows database behavior in various conditions:
Inactive standard database
   Error message returned.

Active standard database
   Error message returned.

Inactive primary database
   Database role changes to standard. Database configuration parameter hadr_db_role is updated to STANDARD. Database remains offline. At the next restart, enters standard role.

Active primary database
   Stops shipping logs to the HADR standby database and shuts down all HADR EDUs on the HADR primary database. Database role changes to standard and the database remains online. Database remains in standard role until an explicit START HADR command with the AS PRIMARY option is issued. Open sessions and transactions are not affected by the STOP HADR command. You can repeatedly issue STOP HADR and START HADR commands while the database remains online. These commands take effect dynamically.

Inactive standby database
   Database role changes to standard. Database configuration parameter hadr_db_role is updated to STANDARD. Database remains offline. Database is put into rollforward pending mode.

Active standby database
   Error message returned: Deactivate the standby database before attempting to convert it to a standard database.
When the STOP HADR command is issued, one of the following error codes might be generated: SQL1767N, SQL1769N, or SQL1770N with a reason code of 98. The reason code indicates that there is no license installed for HADR on the server where the command was issued. To correct the problem, install a valid HADR license using the db2licm command, or install a version of the server that contains a valid HADR license as part of its distribution. Related tasks: v Stopping high availability disaster recovery (HADR) in Data Recovery and High Availability Guide and Reference
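For example, assuming a primary database with the hypothetical alias SAMPLE, HADR can be stopped and the database converted to a standard database with:

   db2 stop hadr on database sample

On a standby, the database must be deactivated first, because STOP HADR returns an error on an active standby database:

   db2 deactivate database sample
   db2 stop hadr on database sample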
TAKEOVER HADR
Instructs an HADR standby database to take over as the new HADR primary database for the HADR pair. Authorization: One of the following: v sysadm v sysctrl v sysmaint Required connection: Instance. The command establishes a database connection if one does not exist, and closes the database connection when the command completes. Command syntax:
TAKEOVER HADR ON {DATABASE | DB} database-alias [USER user-name [USING password]] [BY FORCE]
Command parameters: DATABASE database-alias Identifies the current HADR standby database that should take over as the HADR primary database. USER user-name Identifies the user name under which the takeover operation is to be started. USING password The password used to authenticate user-name. BY FORCE Specifies that the database will not wait for confirmation that the original HADR primary database has been shut down. This option is required if the HADR pair is not in peer state. Usage notes: The following table shows the behavior of the TAKEOVER HADR command when issued on an active standby for each possible state and option combination. An error message is returned if this command is issued on an inactive standby database.
Standby state: Local catchup or remote catchup
   BY FORCE option used: No
      Error message returned.
   BY FORCE option used: Yes
      Error message returned.

Standby state: Peer
   BY FORCE option used: No
      Primary database and standby database switch roles. If no failure is encountered during takeover, there will be no data loss. However, if failures are encountered during takeover, data loss might occur and the roles of the primary and standby might or might not have been changed. The following is a guideline for handling failures during a takeover in which the primary and standby switch roles:
      1. If a failure occurs during a takeover operation, the roles of the HADR databases might or might not have been changed. If possible, make sure both databases are online. Check the HADR role of the available database or databases using the Snapshot Monitor, or by checking the value of the database configuration parameter hadr_db_role.
      2. If the intended new primary is still in standby role, and takeover is still desired, reissue the TAKEOVER HADR command (see the next guideline regarding the BY FORCE option).
      3. It is possible to end up with both databases in standby role. In that case, the TAKEOVER HADR command with the BY FORCE option can be issued at whichever node should now become the primary. The BY FORCE option is required in this case because the two standbys cannot establish the usual HADR primary-standby connection.
   BY FORCE option used: Yes
      The standby notifies the primary to shut itself (the primary) down. The standby stops receiving logs from the primary, finishes replaying the logs it has already received, and then becomes a primary. The standby does not wait for any acknowledgement from the primary to confirm that it has received the takeover notification or that it has shut down. Because of this, if the primary is processing transactions at the time of the takeover, it is unlikely that it will be able to be later restarted as a standby. It is recommended that you shut down the primary database first before issuing a TAKEOVER HADR command with the BY FORCE option.

Standby state: Remote catchup pending
   BY FORCE option used: No
      Error message returned.
   BY FORCE option used: Yes
      The standby database becomes the primary database.
When the TAKEOVER HADR command is issued, one of the following error codes might be generated: SQL1767N, SQL1769N, or SQL1770N with a reason code of 98. The reason code indicates that there is no license installed for HADR on the server where the command was issued. To correct the problem, install a valid HADR license using the db2licm command, or install a version of the server that contains a valid HADR license as part of its distribution. Related tasks:
v Switching database roles in high availability disaster recovery (HADR) in Data Recovery and High Availability Guide and Reference
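For example, assuming the standby database has the hypothetical alias SAMPLE and the pair is in peer state, a role switch can be initiated from the standby with:

   db2 takeover hadr on database sample

If the primary has failed and the pair is not in peer state, the takeover must be forced:

   db2 takeover hadr on database sample by force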
TERMINATE
Explicitly terminates the command line processor's back-end process. Authorization: None Required connection: None Command syntax:
TERMINATE
Command parameters: None Usage notes: If an application is connected to a database, or a process is in the middle of a unit of work, TERMINATE causes the database connection to be lost. An internal commit is then performed. Although TERMINATE and CONNECT RESET both break the connection to a database, only TERMINATE results in termination of the back-end process. It is recommended that TERMINATE be issued prior to executing the db2stop command. This prevents the back-end process from maintaining an attachment to a database manager instance that is no longer available. Back-end processes in MPP systems must also be terminated when the DB2NODE environment variable is updated in the session. This environment variable is used to specify the coordinator database partition number within an MPP multiple logical node configuration. Related reference: v db2stop - Stop DB2 on page 272
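For example, the following sequence ends the CLP back-end process before stopping the database manager, as recommended above:

   db2 terminate
   db2stop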
UNCATALOG DATABASE
Deletes a database entry from the system database directory. Authorization: One of the following: v sysadm v sysctrl Required connection: None. Directory operations affect the local directory only. Command syntax:
UNCATALOG {DATABASE | DB} database-alias
Command parameters: DATABASE database-alias Specifies the alias of the database to uncatalog. Usage notes: Only entries in the system database directory can be uncataloged. Entries in the local database directory can be deleted using the DROP DATABASE command. To recatalog the database on the instance, use the UNCATALOG DATABASE and CATALOG DATABASE commands. To list the databases that are cataloged on a node, use the LIST DATABASE DIRECTORY command. The authentication type of a database, used when communicating with a down-level server, can be changed by first uncataloging the database, and then cataloging it again with a different type. If directory caching is enabled, database, node, and DCS directory files are cached in memory. See the information for the configuration parameter dir_cache in the GET DATABASE MANAGER CONFIGURATION command. An application's directory cache is created during its first directory lookup. Because the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application. Related reference: v sqleuncd API - Uncatalog a database from the system database directory in Administrative API Reference v CATALOG DATABASE on page 372 v DROP DATABASE on page 426
Chapter 3. CLP Commands
v GET DATABASE MANAGER CONFIGURATION on page 463 v LIST DATABASE DIRECTORY on page 523 v TERMINATE on page 744
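For example, assuming a database cataloged under the hypothetical alias SAMPLE, the entry can be removed, the database recataloged under a new alias, and the CLP directory cache refreshed as follows:

   db2 uncatalog database sample
   db2 catalog database sample as mysamp
   db2 terminate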
Command parameters: DATABASE database-alias Specifies the alias of the DCS database to uncatalog. Usage notes: DCS databases are also cataloged in the system database directory as remote databases and can be uncataloged using the UNCATALOG DATABASE command. To recatalog a database in the DCS directory, use the UNCATALOG DCS DATABASE and CATALOG DCS DATABASE commands. To list the DCS databases that are cataloged on a node, use the LIST DCS DIRECTORY command. If directory caching is enabled, database, node, and DCS directory files are cached in memory. See the information provided for the configuration parameter dir_cache in the output of the GET DATABASE MANAGER CONFIGURATION command. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use the TERMINATE command. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application. Related reference: v sqlegdel API - Uncatalog a database from the database connection services (DCS) directory in Administrative API Reference v CATALOG DCS DATABASE on page 375 v GET DATABASE MANAGER CONFIGURATION on page 463 v TERMINATE on page 744 v UNCATALOG DATABASE on page 745
Command parameters: DATABASE dbalias Specifies the alias of the LDAP database to uncatalog. USER username Specifies the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to delete the object from the LDAP directory. If the user's LDAP DN is not specified, the credentials of the current logon user will be used. PASSWORD password Account password. Usage notes: When a database is dropped, the database object is removed from LDAP. The database is also automatically deregistered from LDAP when the database server that manages the database is deregistered from LDAP. It might, however, be necessary to manually uncatalog the database from LDAP if: v The database server does not support LDAP. The administrator must manually uncatalog each database from LDAP after the database is dropped. v During DROP DATABASE the database object cannot be removed from LDAP (because LDAP cannot be accessed). In this case, the database is still removed from the local machine, but the existing entry in LDAP is not deleted. Related concepts: v Lightweight Directory Access Protocol (LDAP) overview in Administration Guide: Implementation Related tasks: v Deregistering the database from the LDAP directory in Administration Guide: Implementation
Command parameters: NODE nodename Specifies the name of the node to uncatalog. USER username Specifies the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to delete the object from the LDAP directory. If the user's LDAP DN is not specified, the credentials of the current logon user will be used. PASSWORD password Account password. Usage notes: The LDAP node is automatically uncataloged when the DB2 server is deregistered from LDAP. Related concepts: v Lightweight Directory Access Protocol (LDAP) overview in Administration Guide: Implementation Related reference: v db2LdapUncatalogNode API - Delete alias for node name from LDAP server in Administrative API Reference v CATALOG LDAP DATABASE on page 377 v UNCATALOG LDAP DATABASE on page 749 v CATALOG LDAP NODE on page 381
UNCATALOG NODE
Deletes an entry from the node directory. Authorization: One of the following: v sysadm v sysctrl Required connection: None. Directory operations affect the local directory only. Command syntax:
UNCATALOG NODE nodename
Command parameters: NODE nodename Specifies the node entry being uncataloged. Usage notes: UNCATALOG NODE can be executed on any type of node, but only the local directory is affected, even if there is an attachment to a remote instance, or a different local instance. If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during its first directory lookup. Since the cache is only refreshed when the application modifies any of the directory files, directory changes made by other applications might not be effective until the application has restarted. To refresh the CLP's directory cache, use TERMINATE. To refresh DB2's shared cache, stop (db2stop) and then restart (db2start) the database manager. To refresh the directory cache for another application, stop and then restart that application. Related reference: v sqleuncn API - Uncatalog an entry from the node directory in Administrative API Reference v CATALOG LOCAL NODE on page 382 v CATALOG NAMED PIPE NODE on page 384 v CATALOG TCPIP/TCPIP4/TCPIP6 NODE on page 387 v GET DATABASE MANAGER CONFIGURATION on page 463 v TERMINATE on page 744
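For example, assuming a node entry with the hypothetical name mynode, the entry can be removed and the CLP directory cache refreshed with:

   db2 uncatalog node mynode
   db2 terminate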
Command parameters: USER Uncatalog a user data source. This is the default if no keyword is specified. SYSTEM Uncatalog a system data source. ODBC DATA SOURCE data-source-name Specifies the name of the data source to be uncataloged. Maximum length is 32 characters. Related reference: v CATALOG ODBC DATA SOURCE on page 386 v LIST ODBC DATA SOURCES on page 544
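For example, assuming a system data source with the hypothetical name salesdsn, the entry can be removed with:

   db2 uncatalog system odbc data source salesdsn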
UNQUIESCE
Restores user access to instances or databases which have been quiesced for maintenance or other reasons. UNQUIESCE restores user access without necessitating a shutdown and database restart. Unless specifically designated, no user except those with sysadm, sysmaint, or sysctrl has access to a database while it is quiesced. Therefore an UNQUIESCE is required to restore general access to a quiesced database. Scope: UNQUIESCE DB restores user access to all objects in the quiesced database. UNQUIESCE INSTANCE instance-name restores user access to the instance and the databases in the instance instance-name. To stop the instance and unquiesce it and all its databases, issue the db2stop command. Stopping and restarting DB2 will unquiesce all instances and databases. Authorization: One of the following: For database level unquiesce: v sysadm v dbadm For instance level unquiesce: v sysadm v sysctrl Command syntax:
UNQUIESCE {DB | INSTANCE instance-name}
Required connection: Database (Database connection is not required for an instance unquiesce.) Command parameters: DB Unquiesce the database. User access will be restored to all objects in the database.
INSTANCE instance-name Access is restored to the instance instance-name and the databases in the instance. Examples: Unquiescing a Database
db2 unquiesce db
This command will unquiesce the database that had previously been quiesced. Related reference: v QUIESCE on page 612 v db2DatabaseUnquiesce API - Unquiesce database in Administrative API Reference v db2InstanceUnquiesce API - Unquiesce instance in Administrative API Reference v UNQUIESCE DATABASE command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
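An instance-level unquiesce works the same way. For example, for an instance with the hypothetical name db2inst1:

   db2 unquiesce instance db2inst1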
The following DAS configuration parameters can be specified originally and then later changed while the DAS is online: v DAS Discovery Mode - discover v SMTP Server - smtp_server v Java Development Kit Installation Path DAS - jdk_path v Location of Contact List - contact_host v DAS Code Page - das_codepage v DAS Territory - das_territory For more information about these parameters, see individual parameter descriptions. Scope: Issue this command from each administration node to specify or change parameter settings for that node. Authorization: dasadm Required connection: Node. To update the DAS configuration for a remote system, use the FOR NODE option with the administrator node name. Command syntax:
UPDATE ADMIN {CONFIGURATION | CONFIG | CFG} USING config-keyword value [config-keyword value ...] [FOR NODE node-name [USER username USING password]]
Command parameters: USING config-keyword value Specifies the admin configuration parameter to be updated. FOR NODE Enter the name of an administration node to update the DAS configuration parameters there. USER username USING password If connection to the administration node requires user name and password authorization, enter this information. Usage notes: To view or print a list of the DAS configuration parameters, use GET ADMIN CONFIGURATION. To reset the DAS configuration parameters to the recommended DAS defaults, use RESET ADMIN CONFIGURATION. When configuration parameters take effect depends on whether you change a standard configuration parameter or one of the parameters that can be reset online. Standard configuration parameter values take effect only after the DAS is stopped and started again with the db2admin command. If an error occurs, the DAS configuration file is not changed. In order to update the DAS configuration using UPDATE ADMIN CONFIGURATION, you must use the command line processor from an instance that is at the same installed level as the DAS. The DAS configuration file cannot be updated if the checksum is invalid. This might occur if you change the DAS configuration file manually, without using the appropriate command. If this happens, you must drop and re-create the DAS to reset its configuration file. Related reference: v GET ADMIN CONFIGURATION on page 441 v RESET ADMIN CONFIGURATION on page 663 v Configuration parameters summary in Performance Guide
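For example, the SMTP server used by the DAS for notifications can be set as follows (smtphost1 is a hypothetical server name):

   db2 update admin configuration using smtp_server smtphost1

Because smtp_server is one of the parameters that can be changed while the DAS is online, the change takes effect without restarting the DAS.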
... ON database-alias
   [, SET parameter-name value ...]
   [, UPDATE ACTION {SCRIPT pathname | TASK name} ON {WARNING | ALARM | ALLALERT | ATTENTION state} SET parameter-name value ...]
   [, DELETE ACTION {SCRIPT pathname | TASK name} ON {WARNING | ALARM | ALLALERT | ATTENTION state} ...]
   [, ADD ACTION {SCRIPT pathname <Add Script Details> | TASK name} ON {WARNING | ALARM | ALLALERT | ATTENTION state} [USER username USING password] [ON hostname] ...]
Command Parameters: DATABASE MANAGER Updates alert settings for the database manager. DATABASES Updates alert settings for all databases managed by the database manager.
SET parameter-name value Updates the alert configuration element, parameter-name, of the health indicator to the specified value. parameter-name must be one of the following: v ALARM: the value is a health indicator unit. v WARNING: the value is a health indicator unit. v SENSITIVITY: the value is in seconds. v ACTIONSENABLED: the value can be either YES or NO. v THRESHOLDSCHECKED: the value can be either YES or NO. UPDATE ACTION SCRIPT pathname ON [WARNING | ALARM | ALLALERT | ATTENTION state] Specifies that the script attributes of the predefined script with absolute pathname pathname will be updated according to the following clause:
Related tasks: v Configuring health indicators using a client application in System Monitor Guide and Reference Related reference: v ADMIN_CMD procedure Run administrative commands in Administrative SQL Routines and Views v db2UpdateAlertCfg API - Update the alert configuration settings for health indicators in Administrative API Reference v UPDATE ALERT CONFIGURATION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
Command parameters: DATABASE database-alias Specifies the alias of the database where the alternate server is to be updated. HOSTNAME hostname Specifies a fully qualified host name or the IP address of the node where the alternate server for the database resides. PORT port-number Specifies the port number of the alternate server of the database manager instance. Examples: The following example updates the alternate server for the SAMPLE database using host name montero and port 20396:
db2 update alternate server for database sample using hostname montero port 20396
The following two examples reset the alternate server for the SAMPLE database:
db2 update alternate server for database sample using hostname NULL port NULL
or
db2 update alternate server for database sample using hostname "" port NULL
Usage notes: v This command is only applied to the system database directory. v This command should only be used on a server instance. If it is issued on a client instance, it is ignored and message SQL1889W is returned.
UPDATE ALTERNATE SERVER FOR LDAP DATABASE database-alias USING [NODE node] [GWNODE gwnode] [USER username [PASSWORD password]]
Command parameters: DATABASE database-alias Specifies the alias of the database to be updated. NODE node Specifies the node name where the alternate server for the database resides. GWNODE gwnode Specifies the node name where the alternate gateway for the database resides. USER username Specifies the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to create the object in the LDAP directory. If the user's LDAP DN is not specified, the credentials of the current logon user will be used. If the user's LDAP DN and password have been specified using db2ldcfg, the user name and password do not have to be specified here. PASSWORD password Account password. If the user's LDAP DN and password have been specified using db2ldcfg, the user name and password do not have to be specified here. Related tasks: v Rerouting LDAP clients to another server in Administration Guide: Implementation Related reference: v CATALOG LDAP DATABASE on page 377 v db2ldcfg - Configure LDAP environment on page 140
UPDATE CLI {CONFIGURATION | CONFIG | CFG} [AT {GLOBAL | USER} LEVEL] FOR SECTION section-name USING keyword value [keyword value ...]
Command parameters: FOR SECTION section-name Name of the section whose keywords are to be updated. If the specified section does not exist, a new section is created. AT GLOBAL LEVEL Specifies that the CLI configuration parameter is to be updated at the global level. This parameter is only applicable when LDAP support is enabled. AT USER LEVEL Specifies that the CLI configuration parameter is to be updated at the user level. If LDAP support is enabled, this setting will be consistent when logging on to different machines with the same LDAP user ID. If LDAP support is disabled, this setting will be consistent only when logging on to the same machine with the same operating system user ID. USING keyword value Specifies the CLI/ODBC parameter to be updated. Usage notes: The section name and the keywords specified on this command are not case sensitive. However, the keyword values are case sensitive. If a keyword value is a string containing single quotation marks or embedded blanks, the entire string must be delimited by double quotation marks. For example:

   db2 update cli cfg for section tstcli1x
      using TableType "'TABLE','VIEW','SYSTEM TABLE'"
When the AT USER LEVEL keywords are specified, the CLI configuration parameters for the specified section are updated only for the current user; otherwise, they are updated for all users on the local machine. The CLI configuration at the user level is maintained in the LDAP directory and cached on the local machine. When reading the CLI configuration, DB2 always reads from the cache. The cache is refreshed when: v The user updates the CLI configuration. v The user explicitly forces a refresh of the CLI configuration using the REFRESH LDAP command. In an LDAP environment, users can configure a set of default CLI settings for a database catalogued in the LDAP directory. When an LDAP cataloged database is added as a DSN (Data Source Name), either by using the CA (Configuration Assistant) or the ODBC configuration utility, any default CLI settings, if they exist in the LDAP directory, will be configured for that DSN on the local machine. The AT GLOBAL LEVEL clause must be specified to configure a CLI parameter as a default setting. Related reference: v GET CLI CONFIGURATION on page 451 v REFRESH LDAP on page 636
UPDATE COMMAND OPTIONS USING option-letter {ON [value] | OFF} [option-letter {ON [value] | OFF} ...]

Command parameters: USING option-letter The following option-letters can be set:
a   Display SQLCA
c   Auto-commit SQL statements
d   Display the XML declarations of XML data
e   Display SQLCODE/SQLSTATE
i   Display XQuery results with proper indentation
l   Log commands in a history file
m   Display the number of rows affected by INSERT, DELETE, UPDATE, or MERGE statements
n   Remove new line character
o   Display to standard output
p   Display DB2 interactive prompt
q   Preserve whitespace and line feeds in strings delimited with single or double quotation marks
r   Save output report to a file
s   Stop execution on command error
v   Echo current command
w   Show SQL statement warning messages
z   Redirect all output to a file

ON value The e, l, r, and z options require a value if they are turned on. For the e option, value can be c to display the SQLCODE, or s to display the SQLSTATE.
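For example, the following turns off auto-commit and turns on command logging to a hypothetical history file:

   db2 update command options using c off l on db2cmds.log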
UPDATE CONTACT
Updates the attributes of a contact that is defined on the local system. A contact is a user to whom the Scheduler and Health Monitor send messages. To create a contact, use the ADD CONTACT command. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None. Required connection: None. Local execution only: this command cannot be used with a remote connection. Command syntax:
UPDATE CONTACT name USING keyword value [, keyword value ...]
Command parameters: CONTACT name The name of the contact that will be updated. USING keyword value Specifies the contact parameter to be updated (keyword) and the value to which it will be set (value). The valid set of keywords is: ADDRESS The email address that is used by the SMTP server to send the notification. TYPE Whether the address is for an email address or a pager.
MAXPAGELEN The maximum number of characters that the pager can accept. DESCRIPTION A textual description of the contact. This has a maximum length of 128 characters. Related reference: v db2UpdateContact API - Update the attributes of a contact in Administrative API Reference v UPDATE CONTACT command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
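For example, the email address of a contact with the hypothetical name dbadmin can be updated as follows:

   db2 update contact dbadmin using address dbadmin@example.com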
UPDATE CONTACTGROUP
Updates the attributes of a contact group that is defined on the local system. A contact group is a list of users who should be notified by the Scheduler and the Health Monitor. The setting of the Database Administration Server (DAS) contact_host configuration parameter determines whether the list is local or global. Authorization: None Required Connection: None Command Syntax:
UPDATE CONTACTGROUP name {ADD | DROP} {CONTACT | GROUP} name [, {ADD | DROP} {CONTACT | GROUP} name ...] [DESCRIPTION new-description]
Command Parameters: CONTACTGROUP name Name of the contact group which will be updated. ADD CONTACT name Specifies the name of the new contact to be added to the group. A contact can be defined with the ADD CONTACT command after it has been added to a group. DROP CONTACT name Specifies the name of a contact in the group that will be dropped from the group. ADD GROUP name Specifies the name of the new contact group to be added to the group. DROP GROUP name Specifies the name of a contact group that will be dropped from the group. DESCRIPTION new description Optional. A new textual description for the contact group. Related reference: v db2UpdateContactGroup API - Update the attributes of a contact group in Administrative API Reference v UPDATE CONTACTGROUP command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
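For example, for a group with the hypothetical name dbagroup, one contact can be added and another dropped in a single command:

   db2 update contactgroup dbagroup add contact oncalldba drop contact formerdba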
UPDATE {DATABASE | DB} {CONFIGURATION | CONFIG | CFG} [FOR database-alias] USING config-keyword value [config-keyword value ...] [IMMEDIATE | DEFERRED]
Command parameters: DEFERRED Make the changes only in the configuration file, so that the changes take effect the next time you reactivate the database. FOR database-alias Specifies the alias of the database whose configuration is to be updated. Specifying the database alias is not required when a database connection has already been established. You can update the configuration file for another database residing under the same database instance. For example, if you are connected only to database db11, and issue update db config for db22 using .... immediate: v If there is no active connection on db22, the update will be successful because only the configuration file needs to be updated. A new connection (which will activate the database) will see the new change in memory.
When changing configuration parameters in a multiple-partitioned database environment, the db2_all command should be used. Using the db2_all command results in the update being issued against all database partitions. For example, to change the logging mode to archival logging in a multiple-partitioned database environment containing a database called zellmart, use:
db2_all ";db2 update db cfg for zellmart using logretain recovery"
To check that the logretain configuration parameter has changed on all database partitions, use:
db2_all ";db2 get db cfg for zellmart"
Optionally, you can leverage the SYSIBMADM.DBCFG view to get data from all partitions without having to use db2_all. If you are working on a UNIX operating system, and you have the grep command, you can use the following command to view only the logretain values:
db2_all ";db2 get db cfg for zellmart | grep -i logretain"
For more information about DB2 configuration parameters and the values available for each type of database node, see the individual configuration parameter descriptions. The values of these parameters differ for each type of database node configured (server, client, or server with remote clients). Not all parameters can be updated.
If an error occurs, the database configuration file does not change. The database configuration file cannot be updated if the checksum is invalid. This might occur if the database configuration file is changed without using the appropriate command. If this happens, the database must be restored to reset the database configuration file. Related concepts: v rah and db2_all commands overview in Administration Guide: Implementation Related tasks: v Configuring DB2 with configuration parameters in Performance Guide Related reference: v db2CfgSet API - Set the database manager or database configuration parameters in Administrative API Reference v Configuration parameters summary in Performance Guide v GET DATABASE CONFIGURATION on page 457 v RESET DATABASE CONFIGURATION on page 667 v DBCFG administrative view Retrieve database configuration parameter information in Administrative SQL Routines and Views v UPDATE DATABASE CONFIGURATION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
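In a single-partition environment, for example, a configuration parameter can be updated for a database with the hypothetical alias SAMPLE, applying the change immediately where the parameter supports dynamic update:

   db2 update db cfg for sample using locklist 100 immediate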
Command parameters: DEFERRED Make the changes only in the configuration file, so that the changes take effect when the instance is restarted. IMMEDIATE Make the changes right now, dynamically, while the instance is running. IMMEDIATE is the default, but it requires an instance attachment to be effective. USING config-keyword value Specifies the database manager configuration parameter to be updated. For a list of configuration parameters, refer to the configuration parameters summary. Usage notes: To view or print a list of the database manager configuration parameters, use the GET DATABASE MANAGER CONFIGURATION command. To reset the database manager configuration parameters to the recommended database manager defaults, use the RESET DATABASE MANAGER CONFIGURATION command. For more information about database manager configuration parameters and the values of these parameters appropriate for each type of database node configured (server, client, or server with remote clients), see individual configuration parameter descriptions. Not all parameters can be updated.
Chapter 3. CLP Commands
775
If an error occurs, the database manager configuration file does not change. The database manager configuration file cannot be updated if the checksum is invalid. This can occur if you edit the database manager configuration file without using the appropriate command. If the checksum is invalid, you must reinstall the database manager to reset the database manager configuration file. When you update the SVCENAME or TPNAME database manager configuration parameters for the current instance, if LDAP support is enabled and there is an LDAP server registered for this instance, the LDAP server is updated with the new value or values.

Related tasks: v Configuring DB2 with configuration parameters in Performance Guide

Related reference: v db2CfgSet API - Set the database manager or database configuration parameters in Administrative API Reference v Configuration parameters summary in Performance Guide v GET DATABASE MANAGER CONFIGURATION on page 463 v RESET DATABASE MANAGER CONFIGURATION on page 669 v TERMINATE on page 744 v DBMCFG administrative view - Retrieve database manager configuration parameter information in Administrative SQL Routines and Views v UPDATE DATABASE MANAGER CONFIGURATION command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
Command parameters: ADD GROUP name Adds a new contact group that will be notified of the health of the instance. ADD CONTACT name Adds a new contact that will be notified of the health of the instance. DROP GROUP name Removes the contact group from the list of contacts that will be notified of the health of the instance. DROP CONTACT name Removes the contact from the list of contacts that will be notified of the health of the instance.

Related tasks: v Enabling health alert notification in System Monitor Guide and Reference

Related reference: v ADMIN_CMD procedure - Run administrative commands in Administrative SQL Routines and Views v db2UpdateHealthNotificationList API - Update the list of contacts to whom health alert notifications can be sent in Administrative API Reference v UPDATE HEALTH NOTIFICATION CONTACT LIST command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
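For example, the notification list for the current instance might be adjusted as follows (the contact and group names are hypothetical):

```shell
# Add an individual contact and a contact group to the notification list
db2 update health notification contact list add contact jsmith add group dba_team
# Remove a contact that should no longer receive health alerts
db2 update health notification contact list drop contact old_admin
```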
UPDATE HISTORY
Updates the location, device type, comment, or status in a history file entry. Authorization: One of the following: v sysadm v sysctrl v sysmaint v dbadm Required connection: Database Command syntax:
UPDATE HISTORY {FOR object-part | EID eid} WITH
  {LOCATION new-location DEVICE TYPE new-device-type | COMMENT new-comment | STATUS new-status}
Command parameters: FOR object-part Specifies the identifier for the history entry to be updated. It is a time stamp with an optional sequence number from 001 to 999. This parameter cannot be used to update the entry status. To update the entry status, specify an EID instead. EID eid Specifies the history entry ID. LOCATION new-location Specifies the new physical location of a backup image. The interpretation of this parameter depends on the device type. DEVICE TYPE new-device-type Specifies a new device type for storing the backup image. Valid device types are:
D    Disk
K    Diskette
T    Tape
A    TSM
U    User exit
P    Pipe
N    Null device
X    XBSA
Q    SQL statement
O Other
COMMENT new-comment Specifies a new comment to describe the entry. STATUS new-status Specifies a new status for an entry. Only backup entries can have their status updated. Valid values are:
A    Active. Most entries are active.
I    Inactive. Backup images that are no longer on the active log chain become inactive.
E    Expired. Backup images that are no longer required, because there are more than NUM_DB_BACKUPS active images, are flagged as expired.
D    Deleted. Backup images that are no longer available for recovery should be marked as having been deleted.
Example: To update the history file entry for a full database backup taken on April 13, 1997 at 10:00 a.m., enter:
db2 update history for 19970413100000001 with location /backup/dbbackup.1 device type d
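To change the status of an entry, the entry must be identified by its EID rather than by a time stamp; for example (the EID value is hypothetical):

```shell
db2 update history eid 10 with status expired
```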
Usage notes: The primary purpose of the database history file is to record information, but the data contained in the history is used directly by automatic restore operations. During any restore where the AUTOMATIC option is specified, the history of backup images and their locations will be referenced and used by the restore utility to fulfill the automatic restore request. If the automatic restore function is to be used and backup images have been relocated since they were created, it is recommended that the database history record for those images be updated to reflect the current location. If the backup image location in the database history is not updated, automatic restore will not be able to locate the backup images, but manual restore commands can still be used successfully. Related concepts: v Developing a backup and recovery strategy in Data Recovery and High Availability Guide and Reference Related reference: v ADMIN_CMD procedure Run administrative commands in Administrative SQL Routines and Views v UPDATE HISTORY command using the ADMIN_CMD procedure in Administrative SQL Routines and Views
SVCENAME svcename
WITH comments
Command parameters: NODE nodename Specifies the node name when updating a remote DB2 server. The node name is the value specified when registering the DB2 server in LDAP. HOSTNAME hostname or IP address Specifies the TCP/IP host name or IP address. v If it is a TCPIP node, the host name will be resolved to an IPv4 or IPv6 address. v If it is a TCPIP4 node, the host name will be resolved to an IPv4 address only. v If it is a TCPIP6 node, the host name will be resolved to an IPv6 address only. SVCENAME svcename Specifies the TCP/IP service name or port number. WITH comments Describes the DB2 server. Any comment that helps to describe the server registered in the network directory can be entered. The maximum length is 30 characters. A carriage return or a line feed character is not permitted. The comment text must be enclosed by double quotation marks. USER username Specifies the user's LDAP distinguished name (DN). The LDAP user DN must have sufficient authority to create and update the object in the LDAP directory. If the user's LDAP DN is not specified, the credentials of the current logon user will be used.
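Putting these parameters together, the LDAP entry for a remote server might be updated as follows (node name, host name, and port are hypothetical):

```shell
db2 update ldap node db2srv hostname db2srv.example.com svcename 50000 with "Production DB2 server"
```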
switch-name
ON OFF
AT DBPARTITIONNUM GLOBAL
db-partition-number
Command parameters: USING switch-name The following switch names are available:
BUFFERPOOL
    Buffer pool activity information
LOCK
    Lock information
SORT
    Sorting information
AT DBPARTITIONNUM db-partition-number Specifies the database partition for which the monitor switches are to be updated. GLOBAL Sets the monitor switches on all database partitions in a partitioned database system. Usage notes: Information is collected by the database manager only after a switch is turned on. The switches remain set until db2stop is issued, or the application that issued the UPDATE MONITOR SWITCHES command terminates. To clear the information related to a particular switch, set the switch off, then on again. Updating switches in one application does not affect other applications. To view the switch settings, use the GET MONITOR SWITCHES command. Compatibilities: For compatibility with versions earlier than Version 8: v The keyword NODE can be substituted for DBPARTITIONNUM. Related concepts: v System monitor switches in System Monitor Guide and Reference Related tasks: v Setting monitor switches from a client application in System Monitor Guide and Reference Related reference: v GET SNAPSHOT on page 487 v GET MONITOR SWITCHES on page 478 v db2MonitorSwitches API - Get or update the monitor switch settings in Administrative API Reference
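For example, lock and sort monitoring could be enabled for the current session as follows:

```shell
db2 update monitor switches using lock on sort on
# Verify the new settings
db2 get monitor switches
```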
CLOSE
CLOSE cursor-name
CONNECT
CONNECT TO server-name lock-block RESET (1) authorization authorization
lock-block:
IN SHARE MODE IN EXCLUSIVE MODE ON SINGLE NODE
DECLARE CURSOR
DECLARE cursor-name CURSOR WITH HOLD
DATABASE dbname USER user USING password FOR select-statement XQUERY xquery-statement
FETCH
FETCH FROM cursor-name
FOR LOB
OPEN
OPEN cursor-name
Notes: 1. When CALL is issued: v An expression must be used for each IN or INOUT parameter of the procedure. For an INOUT parameter, the expression must be a single literal value. The INOUT XML parameters must be either NULL (if nullable) or in the following format: XMLPARSE(DOCUMENT <string>). Note that the
To call the procedure PROC4 from the command line processor, issue a CALL statement:
CALL PROC4(XMLPARSE(DOCUMENT '<a>111</a>'), XMLPARSE(DOCUMENT '<a>222</a>'), ?)
2. The CLP version of CONNECT permits the user to change the password, using the following parameters: NEW password Specifies the new password that is to be assigned to the user name. Passwords can be up to 18 characters in length. The system on which the password will be changed depends on how user authentication has been set up. CONFIRM password A string that must be identical to the new password. This parameter is used to catch entry errors. CHANGE PASSWORD If this option is specified, the user is prompted for the current password, a new password, and for confirmation of the new password. Passwords are not displayed at entry. 3. The DATABASE clause in the DECLARE CURSOR statement is only applicable when the cursor is being used for a subsequent load from cursor operation. 4. To use the DECLARE CURSOR statement with an XQuery statement, users must explicitly prefix the XQuery statement with the keyword XQUERY. 5. When FETCH is issued through the command line processor, decimal and floating-point numbers are displayed with the territory's decimal delimiter, that is, a period (.) in the U.S., Canada, and the U.K.; a comma (,) in most
Chapter 4. Using command line SQL statements and XQuery statements
7. A new LOB option has been added to FETCH. If the LOB clause is specified, only the next row is fetched: v When SELECT is issued through the command line processor to query tables containing LOB columns, all columns are truncated to 8KB in the output. v Each LOB column value is fetched into a file with the name filename.xxx, where filename is specified in the LOB clause, and xxx is a file extension from 001 to 999 (001 is the first LOB column in the select list of the corresponding DECLARE CURSOR statement, 002 is the second LOB column, and 999 is the 999th column). The maximum number of LOB columns that can be fetched into files is 999. v Names of the files containing the data are displayed in the LOB columns. 8. The command line processor displays BLOB columns in hexadecimal representation. 9. SQL statements that contain references to structured type columns cannot be issued if an appropriate transform function is not available. 10. A CLP imposed limit of 64K for SQL statements and for CLP commands that contain SQL statement components has now been removed. 11. XML data, retrieved via SELECT, CALL or XQuery, is truncated to 4000 bytes in the output. To change the way that the CLP displays data (when querying databases using SQL statements through the CLP), rebind the CLP bind files against the database being queried. For example, to display date and time in ISO format, do the following: 1. Create a text file containing the names of the CLP bind files. This file is used as the list file for binding multiple files with one BIND command. In this example the file is named clp.lst, and its contents are:
db2clpcs.bnd +
db2clprr.bnd +
db2clpur.bnd +
db2clprs.bnd +
db2clpns.bnd
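The list file can then be bound against the target database. A sketch of the remaining steps, assuming a database named sample:

```shell
db2 connect to sample
# Rebind the CLP packages, specifying the ISO date/time format
db2 bind @clp.lst collection nullid datetime iso
db2 connect reset
```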
[Table: SQL statements and the contexts in which each can be executed — as dynamic SQL, through the DB2 CLI (with the equivalent CLI function where one exists, such as SQLEndTran(), SQLTransact(), SQLSetConnection(), and SQLSetConnectAttr()), and within SQL procedures. The statements covered include ALLOCATE CURSOR, assignment statement, ASSOCIATE LOCATORS, ALTER (BUFFERPOOL, NICKNAME, NODEGROUP, SERVER, TABLE, TABLESPACE, USER MAPPING, TYPE, VIEW), BEGIN DECLARE SECTION, CALL, CASE statement, CLOSE, COMMENT ON, COMMIT, compound SQL (embedded), compound statement, CONNECT (Type 1), CONNECT (Type 2), CREATE (ALIAS, BUFFERPOOL, DISTINCT TYPE, EVENT MONITOR, FUNCTION, FUNCTION MAPPING, INDEX, INDEX EXTENSION, METHOD, NICKNAME, NODEGROUP, PROCEDURE, SCHEMA, SERVER, TABLE, TABLESPACE, TRANSFORM, TYPE MAPPING, TRIGGER, USER MAPPING, TYPE, VIEW, WRAPPER), DECLARE CURSOR, DECLARE GLOBAL TEMPORARY TABLE, DELETE, DESCRIBE, EXECUTE, EXECUTE IMMEDIATE, EXPLAIN, FETCH, FLUSH EVENT MONITOR, FOR statement, FREE LOCATOR, GET DIAGNOSTICS, GOTO statement, GRANT, IF statement, INCLUDE, INSERT, ITERATE, LEAVE statement, LOCK TABLE, LOOP statement, OPEN, PREPARE, REFRESH TABLE, RELEASE, RELEASE SAVEPOINT, RENAME TABLE, RENAME TABLESPACE, REPEAT statement, RESIGNAL statement, RETURN statement, REVOKE, ROLLBACK, SAVEPOINT, select-statement, SELECT INTO, SET CONNECTION, the SET CURRENT special-register statements, SET EVENT MONITOR STATE, SET INTEGRITY, SET PASSTHRU, SET PATH, SET SCHEMA, SET SERVER OPTION, SET transition-variable, SIGNAL statement, SIGNAL SQLSTATE, UPDATE, VALUES INTO, WHENEVER, and WHILE statement. The per-statement column markings did not survive extraction; the notes below give the column semantics.]
Notes: 1. You can code all statements in this list as static SQL, but only those marked with X as dynamic SQL. 2. You cannot execute this statement. 3. An X indicates that you can execute this statement using either SQLExecDirect() or SQLPrepare() and SQLExecute(). If there is an equivalent DB2 CLI function, the function name is listed. 4. Although this statement is not dynamic, with DB2 CLI you can specify this statement when calling either SQLExecDirect(), or SQLPrepare() and SQLExecute(). 5. You can only use this within CREATE TRIGGER statements. 6. You can only use the SQL DESCRIBE statement to describe output, whereas with DB2 CLI you can also describe input (using the SQLDescribeParam() function). 7. You can only use the SQL FETCH statement to fetch one row at a time in one direction, whereas with the DB2 CLI SQLExtendedFetch() and SQLFetchScroll() functions, you can fetch into arrays. Furthermore, you can fetch in any direction, and at any position in the result set. 8. The DESCRIBE SQL statement has a different syntax than that of the CLP DESCRIBE command. 9. Statement is supported only for federated database servers. 10. SQL procedures can only issue CREATE and DROP statements for indexes, tables, and views.
Related reference: v CONNECT (Type 2) statement in SQL Reference, Volume 2 v CLOSE statement in SQL Reference, Volume 2 v CONNECT (Type 1) statement in SQL Reference, Volume 2 v DECLARE CURSOR statement in SQL Reference, Volume 2 v FETCH statement in SQL Reference, Volume 2 v OPEN statement in SQL Reference, Volume 2 v SELECT statement in SQL Reference, Volume 2
The ──► symbol indicates that the syntax is continued on the next line. The ►── symbol indicates that the syntax is continued from the previous line. The ──►◄ symbol indicates the end of a syntax diagram.

Syntax fragments start with the │── symbol and end with the ──│ symbol. Required items appear on the horizontal line (the main path).
required_item
If an optional item appears above the main path, that item has no effect on execution, and is used only for readability.
optional_item required_item
If you can choose from two or more items, they appear in a stack. If you must choose one of the items, one item of the stack appears on the main path.
required_item required_choice1 required_choice2
If choosing one of the items is optional, the entire stack appears below the main path.
required_item optional_choice1 optional_choice2
An arrow returning to the left, above the main line, indicates an item that can be repeated. In this case, repeated items must be separated by one or more blanks.
required_item
repeatable_item
If the repeat arrow contains a comma, you must separate repeated items with a comma.
, required_item repeatable_item
A repeat arrow above a stack indicates that you can make more than one choice from the stacked items or repeat a single choice. Keywords appear in uppercase (for example, FROM). They must be spelled exactly as shown. Variables appear in lowercase (for example, column-name). They represent user-supplied names or values in the syntax. If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, you must enter them as part of the syntax. Sometimes a single variable represents a larger fragment of the syntax. For example, in the following diagram, the variable parameter-block represents the whole syntax fragment that is labeled parameter-block:
required_item parameter-block
parameter-block:
parameter1 parameter2
parameter3 parameter4
Adjacent segments occurring between large bullets (*) may be specified in any sequence.
required_item item1 * item2 * item3 * item4
[Table: Valid file type modifiers for the load utility. Only the modifier names survived extraction — generatedignore, generatedmissing, generatedoverride, identityoverride, indexfreespace=x, seclabelchar, usedefaults, implieddecimal, nullindchar=x, decptx, delprioritychar — their descriptions did not.]
Notes: 1. Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following: a-z, A-Z, and 0-9. The field separator should not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries. For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute) "M:M" (Which is which?) "M:YYYY:M" (Both are interpreted as month.) "S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation will fail. Following are some unambiguous time stamp formats:
"M:YYYY" (Month) "S:M" (Minute) "M:YYYY:S:M" (Month....Minute) "M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \). 2. The character must be specified in the code page of the source data. The character code point (instead of the character symbol) can be specified using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following:
... modified by coldel# ... ... modified by coldel0x23 ... ... modified by coldelX23 ...
3. Delimiter restrictions for moving data lists restrictions that apply to the characters that can be used as delimiter overrides.
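The hexadecimal value in code-point forms such as coldel0x23 is simply the code point of the delimiter character. A small Python sketch (the helper name is hypothetical, not part of any DB2 utility):

```python
def coldel_modifier(ch):
    """Return the code-point form of a coldel delimiter override for one character."""
    return "coldel0x%02x" % ord(ch)

print(coldel_modifier("#"))  # coldel0x23
print(coldel_modifier(";"))  # coldel0x3b
```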
Related reference: v db2Load API - Load data into a table in Administrative API Reference v Delimiter restrictions for moving data on page 826 v LOAD on page 557
[Table: Valid file type modifiers for the import utility. Only the modifier names survived extraction — generatedmissing, identityignore, identitymissing, lobsinfile, nodefaults, nochardel — their descriptions did not.]
Table 26. Valid file type modifiers for the import utility: IXF file format
forcein
    Directs the utility to accept data despite code page mismatches, and to suppress translation between code pages. Fixed-length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to import each row.
indexixf
    Directs the utility to drop all indexes currently defined on the existing table, and to create new ones from the index definitions in the PC/IXF file. This option can only be used when the contents of a table are being replaced. It cannot be used with a view, or when an insert-column is specified.
[The descriptions of the nochecklengths and forcecreate modifiers in this table did not survive extraction.]
Table 27. IMPORT behavior when using codepage and usegraphiccodepage
codepage=N absent, usegraphiccodepage absent:
    All data in the file is assumed to be in the application code page.
codepage=N present, usegraphiccodepage absent:
    All data in the file is assumed to be in code page N. Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page.
codepage=N absent, usegraphiccodepage present:
    Character data in the file is assumed to be in the application code page. Graphic data is assumed to be in the code page of the application graphic data. If the application code page is single-byte, then all data is assumed to be in the application code page. Warning: If the application code page is single-byte, graphic data will be corrupted when imported into the database, even if the database contains graphic columns.
codepage=N present, usegraphiccodepage present:
    Character data is assumed to be in code page N. Graphic data is assumed to be in the graphic code page of N. If N is a single-byte or double-byte code page, then all data is assumed to be in code page N. Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page.
Notes: 1. The import utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the import operation fails, and an error code is returned. 2. Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following: a-z, A-Z, and 0-9. The field separator should not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.
Appendix C. File type modifiers
819
In ambiguous cases, the utility will report an error message, and the operation will fail. Following are some unambiguous time stamp formats:
"M:YYYY" (Month) "S:M" (Minute) "M:YYYY:S:M" (Month....Minute) "M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \). 3. The character must be specified in the code page of the source data. The character code point (instead of the character symbol) can be specified using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following:
... modified by coldel# ... ... modified by coldel0x23 ... ... modified by coldelX23 ...
4. Delimiter restrictions for moving data lists restrictions that apply to the characters that can be used as delimiter overrides. 5. The following file type modifiers are not allowed when importing into a nickname: v indexixf v indexschema v dldel v nodefaults v usedefaults v no_type_id v generatedignore v generatedmissing v identityignore v identitymissing v lobsinfile 6. The WSF file format is not supported for XML columns. 7. The CREATE mode is not supported for XML columns. 8. All XML data must reside in XML files that are separate from the main data file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML column in the main data file. 9. XML documents are assumed to be in Unicode format or to contain a declaration tag that includes an encoding attribute, unless the XMLCHAR or XMLGRAPHIC file type modifier is specified. 10. Rows containing documents that are not well-formed will be rejected.
[The descriptions of the xmlchar and xmlgraphic import modifiers, and of the decptx and nochardel export modifiers, did not survive extraction.]
Notes: 1. The export utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the export operation fails, and an error code is returned. 2. Delimiter restrictions for moving data lists restrictions that apply to the characters that can be used as delimiter overrides. 3. The export utility normally writes: v date data in YYYYMMDD format v char(date) data in YYYY-MM-DD format v time data in HH.MM.SS format v time stamp data in YYYY-MM-DD-HH.MM.SS.uuuuuu format Data contained in any datetime columns specified in the SELECT statement for the export operation will also be in these formats. 4. For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute) "M:M" (Which is which?) "M:YYYY:M" (Both are interpreted as month.) "S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation will fail. Following are some unambiguous time stamp formats:
"M:YYYY" (Month) "S:M" (Minute) "M:YYYY:S:M" (Month....Minute) "M:H:YYYY:M:D" (Minute....Month)
5. These files can also be directed to a specific product by specifying an L for Lotus 1-2-3, or an S for Symphony in the filetype-mod parameter string. Only one value or product designator can be specified.
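The adjacency rule for the M descriptor in notes above can be expressed as a small check. The sketch below is a simplified illustration (the element sets and function are hypothetical, not part of any DB2 utility): an M next to only date elements is a month, next to only time elements a minute, and otherwise ambiguous.

```python
DATE = {"YYYY", "D"}   # date elements (simplified)
TIME = {"H", "S"}      # time elements (simplified)

def classify_m(fields, i):
    """Classify the 'M' at position i as month, minute, or ambiguous,
    based on the adjacent format elements."""
    near = {fields[j] for j in (i - 1, i + 1) if 0 <= j < len(fields)}
    is_date = bool(near & DATE)
    is_time = bool(near & TIME)
    if is_date and not is_time:
        return "month"
    if is_time and not is_date:
        return "minute"
    return "ambiguous"

print(classify_m("M:YYYY".split(":"), 0))    # month
print(classify_m("S:M".split(":"), 1))       # minute
print(classify_m("S:M:YYYY".split(":"), 1))  # ambiguous
```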
The LOBSINFILE file type modifier must be specified in order to have LOB files generated. 11. The export utility appends a numeric identifier to each LOB file or XML file. The identifier is a 3-digit, zero-padded sequence value, starting at .001. After the 999th LOB file or XML file, the identifier is no longer padded with zeroes (for example, the 1000th LOB file or XML file will have an extension of .1000). Following the numeric identifier is a three-character type identifier representing the data type, either .lob or .xml. For example, a generated LOB file would have a name in the format myfile.del.001.lob. 12. It is possible to have the export utility export QDM instances that are not well-formed documents by specifying an XQuery. However, you will not be
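The extension sequence described in note 11 can be sketched as follows (a hypothetical helper for illustration, not part of the export utility):

```python
def lob_file_name(base, seq, kind="lob"):
    """Build the name of the nth generated LOB or XML file for an export."""
    # 3-digit, zero-padded up to 999; unpadded from 1000 onward
    ident = "%03d" % seq if seq <= 999 else str(seq)
    return "%s.%s.%s" % (base, ident, kind)

print(lob_file_name("myfile.del", 1))            # myfile.del.001.lob
print(lob_file_name("myfile.del", 1000, "xml"))  # myfile.del.1000.xml
```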
Default delimiters for data files in EBCDIC SBCS code pages are:
" (0x7F, double quotation mark; string delimiter) , (0x6B, comma; column delimiter)
The default decimal point for ASCII data files is 0x2e (period). The default decimal point for EBCDIC data files is 0x4B (period). If the code page of the server is different from the code page of the client, it is recommended that the hex representation of non-default delimiters be specified. For example,
db2 load from ... modified by chardel0x0C coldelX1e ...
The following information about support for double character delimiter recognition in DEL files applies to the export, import, and load utilities: v Character delimiters are permitted within the character-based fields of a DEL file. This applies to fields of type CHAR, VARCHAR, LONG VARCHAR, or CLOB (except when lobsinfile is specified). Any pair of character delimiters found between the enclosing character delimiters is imported or loaded into the database. For example,
"What a ""nice"" day!"
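Python's csv module uses the same doubled-delimiter convention, so it can illustrate how such a field is read back:

```python
import csv
import io

# A DEL-style record in which the " string delimiter is doubled inside the field
record = io.StringIO('"What a ""nice"" day!"\n')
row = next(csv.reader(record))
print(row)  # ['What a "nice" day!']
```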
v In a DBCS environment, the pipe (|) character delimiter is not supported. Related reference: v File type modifiers for the export utility on page 821 v File type modifiers for the import utility on page 810 v File type modifiers for the load utility on page 799
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for how we can improve the DB2 documentation, send an e-mail to [email protected]. The DB2 documentation team reads all of your feedback, but cannot respond to you directly. Provide specific examples wherever possible so that we can better understand your concerns. If you are providing feedback on a specific topic or help file, include the topic title and URL. Do not use this e-mail address to contact DB2 Customer Support. If you have a DB2 technical issue that the documentation does not resolve, contact your local IBM service center for assistance.
Copyright IBM Corp. 1993, 2006
Related concepts: v Features of the DB2 Information Center in Online DB2 Information Center v Sample files in Samples Topics Related tasks: v Invoking command help from the command line processor on page 326 v Invoking message help from the command line processor on page 326 v Updating the DB2 Information Center installed on your computer or intranet server on page 835 Related reference: v DB2 technical library in hardcopy or PDF format on page 830
Table 32. DB2 technical information (Name — Form Number):
Administrative SQL Routines and Views — SC10-4293
Call Level Interface Guide and Reference, Volume 1 — SC10-4224
Call Level Interface Guide and Reference, Volume 2 — SC10-4225
Command Reference — SC10-4226
Data Movement Utilities Guide and Reference — SC10-4227
Data Recovery and High Availability Guide and Reference — SC10-4228
Developing ADO.NET and OLE DB Applications — SC10-4230
Developing Embedded SQL Applications — SC10-4232
Developing SQL and External Routines — SC10-4373
Table 32. DB2 technical information (continued):
Developing Java Applications — SC10-4233
Developing Perl and PHP Applications — SC10-4234
Getting Started with Database Application Development — SC10-4252
Getting started with DB2 installation and administration on Linux and Windows — GC10-4247
Message Reference Volume 1 — SC10-4238
Message Reference Volume 2 — SC10-4239
Migration Guide — GC10-4237
Net Search Extender Administration and User's Guide — SH12-6842 (Note: HTML for this document is not installed from the HTML documentation CD.)
Performance Guide, Query Patroller Administration and User's Guide, Quick Beginnings for DB2 Clients, and Quick Beginnings for DB2 Servers (form numbers did not survive extraction)
Spatial Extender and Geodetic Data Management Feature User's Guide and Reference — SC18-9749
SQL Guide — SC10-4248
SQL Reference, Volume 1 — SC10-4249
SQL Reference, Volume 2 — SC10-4250
System Monitor Guide and Reference — SC10-4251
Troubleshooting Guide — GC10-4240
Visual Explain Tutorial — SC10-4319
What's New — SC10-4253
XML Extender Administration and Programming — SC18-9750
XML Guide — SC10-4254
XQuery Reference — SC18-9796
(The "Available in print" column for this table did not survive extraction.)

Table 33. DB2 Connect-specific technical information:
DB2 Connect User's Guide — SC10-4229 (available in print)
Quick Beginnings for DB2 Connect Personal Edition — GC10-4244 (available in print)
Table 33. DB2 Connect-specific technical information (continued):
Quick Beginnings for DB2 Connect Servers — GC10-4243 (available in print)

Table 34. WebSphere Information Integration technical information:
WebSphere Information Integration: Administration Guide for Federated Systems — SC19-1020 (available in print)
WebSphere Information Integration: ASNCLP Program Reference for Replication and Event Publishing — SC19-1018 (available in print)
WebSphere Information Integration: Configuration Guide for Federated Data Sources — SC19-1034 (not available in print)
WebSphere Information Integration: SQL Replication Guide and Reference — SC19-1030 (available in print)
Note: The DB2 Release Notes provide additional information specific to your product's release and fix pack level. For more information, see the related links. Related concepts: v Overview of the DB2 technical information on page 829 v About the Release Notes in Release notes Related tasks: v Ordering printed DB2 books on page 832
To order printed DB2 books: v To find out whether you can order printed DB2 books online in your country or region, check the IBM Publications Center at http://www.ibm.com/shop/publications/order. You must select a country, region, or language to access publication ordering information and then follow the ordering instructions for your location. v To order printed DB2 books from your local IBM representative: Locate the contact information for your local representative from one of the following Web sites: - The IBM directory of worldwide contacts at www.ibm.com/planetwide - The IBM Publications Web site at http://www.ibm.com/shop/publications/order. You will need to select your country, region, or language to access the appropriate publications home page for your location. From this page, follow the About this site link. When you call, specify that you want to order a DB2 publication. Provide your representative with the titles and form numbers of the books that you want to order. Related concepts: v Overview of the DB2 technical information on page 829 Related reference: v DB2 technical library in hardcopy or PDF format on page 830
where sqlstate represents a valid five-digit SQL state and class code represents the first two digits of the SQL state. For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help for the 08 class code.
Related tasks:
v Invoking command help from the command line processor on page 326
v Invoking message help from the command line processor on page 326
Related tasks: v Setting up access to DB2 contextual help and documentation in Administration Guide: Implementation
Updating the DB2 Information Center installed on your computer or intranet server
If you have a locally installed DB2 Information Center, updated topics can be available for download. The 'Last updated' value found at the bottom of most topics indicates the current level for that topic. To determine if there is an update available for the entire DB2 Information Center, look for the 'Last updated' value on the Information Center home page. Compare the value in your locally installed home page to the date of the most recent downloadable update at http://www.ibm.com/software/data/db2/udb/support/icupdate.html. You can then update your locally installed Information Center if a more recent downloadable update is available.

Updating your locally installed DB2 Information Center requires that you:
1. Stop the DB2 Information Center on your computer, and restart the Information Center in stand-alone mode. Running the Information Center in stand-alone mode prevents other users on your network from accessing the Information Center, and allows you to download and apply updates.
2. Use the Update feature to determine if update packages are available from IBM.
   Note: Updates are also available on CD. For details on how to configure your Information Center to install updates from CD, see the related links.
   If update packages are available, use the Update feature to download the packages. (The Update feature is only available in stand-alone mode.)
3. Stop the stand-alone Information Center, and restart the DB2 Information Center service on your computer.

Procedure:

To update the DB2 Information Center installed on your computer or intranet server:
1. Stop the DB2 Information Center service.
   v On Windows, click Start -> Control Panel -> Administrative Tools -> Services. Then right-click the DB2 Information Center service and select Stop.
   v On Linux, enter the following command:
/etc/init.d/db2icdv9 stop
2. Start the Information Center in stand-alone mode.
   v On Windows:
     a. Open a command window.
     b. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the C:\Program Files\IBM\DB2 Information Center\Version 9 directory.
     c. Run the help_start.bat file using the fully qualified path for the DB2 Information Center:
<DB2 Information Center dir>\doc\bin\help_start.bat
   v On Linux:
     a. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the /opt/ibm/db2ic/V9 directory.
Appendix D. DB2 Database technical information
b. Run the help_start script using the fully qualified path for the DB2 Information Center:
<DB2 Information Center dir>/doc/bin/help_start
The system's default Web browser launches to display the stand-alone Information Center.
3. Click the Update button. On the right-hand panel of the Information Center, click Find Updates. A list of updates for existing documentation displays.
4. To initiate the download process, check the selections you want to download, then click Install Updates.
5. After the download and installation process has completed, click Finish.
6. Stop the stand-alone Information Center.
   v On Windows, run the help_end.bat file using the fully qualified path for the DB2 Information Center:
<DB2 Information Center dir>\doc\bin\help_end.bat
Note: The help_end batch file contains the commands required to safely terminate the processes that were started with the help_start batch file. Do not use Ctrl-C or any other method to terminate help_start.bat.
   v On Linux, run the help_end script using the fully qualified path for the DB2 Information Center:
<DB2 Information Center dir>/doc/bin/help_end
Note: The help_end script contains the commands required to safely terminate the processes that were started with the help_start script. Do not use any other method to terminate the help_start script.
7. Restart the DB2 Information Center service.
   v On Windows, click Start -> Control Panel -> Administrative Tools -> Services. Then right-click the DB2 Information Center service and select Start.
   v On Linux, enter the following command:
/etc/init.d/db2icdv9 start
The updated DB2 Information Center displays the new and updated topics.
Related concepts:
v DB2 Information Center installation options in Quick Beginnings for DB2 Servers
Related tasks:
v Installing the DB2 Information Center using the DB2 Setup wizard (Linux) in Quick Beginnings for DB2 Servers
v Installing the DB2 Information Center using the DB2 Setup wizard (Windows) in Quick Beginnings for DB2 Servers
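For a locally installed Information Center on Linux, the stop/update/restart sequence above can be sketched as a small shell script. The service script and install directory are the documented defaults; the DB2IC_DIR variable and the DRY_RUN guard are illustrative additions so the sequence can be previewed without touching a live service, not part of the product.

```shell
#!/bin/sh
# Sketch of the Linux update sequence described above.
DB2IC_DIR="${DB2IC_DIR:-/opt/ibm/db2ic/V9}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually run the commands

run() {
    # Echo the command in dry-run mode; execute it otherwise.
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run /etc/init.d/db2icdv9 stop         # step 1: stop the Information Center service
run "$DB2IC_DIR/doc/bin/help_start"   # step 2: start it in stand-alone mode
# steps 3-5: apply updates interactively (Find Updates / Install Updates)
run "$DB2IC_DIR/doc/bin/help_end"     # step 6: stop the stand-alone instance
run /etc/init.d/db2icdv9 start        # step 7: restart the service
```

Running the script as-is only prints the commands it would issue, which makes it safe to review before setting DRY_RUN=0 on a machine where the Information Center is actually installed.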
DB2 tutorials
The DB2 tutorials help you learn about various aspects of DB2 products. Lessons provide step-by-step instructions. Before you begin:
You can view the XHTML version of the tutorial from the Information Center at http://publib.boulder.ibm.com/infocenter/db2help/. Some lessons use sample data or code. See the tutorial for a description of any prerequisites for its specific tasks.

DB2 tutorials:
To view the tutorial, click on the title.
Native XML data store
    Set up a DB2 database to store XML data and to perform basic operations with the native XML data store.
Visual Explain Tutorial
    Analyze, optimize, and tune SQL statements for better performance using Visual Explain.
Related concepts:
v Visual Explain overview in Administration Guide: Implementation
distribute, display or make derivative work of these Publications, or any portion thereof, without the express consent of IBM. Commercial use: You may reproduce, distribute and display these Publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these Publications, or reproduce, distribute or display these Publications or any portion thereof outside your enterprise, without the express consent of IBM. Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the Publications or any information, data, software or other intellectual property contained therein. IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the Publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed. You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations. IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED AS-IS AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
Appendix E. Notices
IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country/region or send inquiries, in writing, to:

IBM World Trade Asia Corporation Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country/region where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information that has been exchanged, should contact:

IBM Canada Limited
Office of the Lab Director
8200 Warden Avenue
Markham, Ontario
L6G 1C7
CANADA

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information may contain examples of data and reports used in daily business operations.
To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information may contain sample application programs, in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights reserved.
Trademarks
Company, product, or service names identified in the documents of the DB2 Version 9 documentation library may be trademarks or service marks of International Business Machines Corporation or other companies. Information on the trademarks of IBM Corporation in the United States, other countries, or both is located at http://www.ibm.com/legal/copytrade.shtml. The following terms are trademarks or registered trademarks of other companies and have been used in at least one of the documents in the DB2 documentation library: Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Itanium, Pentium, and Xeon are trademarks of Intel Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
A
abnormal termination restart command 673 access path optimizing 702 action precompile/bind option 355, 583 ACTIVATE DATABASE command 330 ADD CONTACT command 333 ADD CONTACTGROUP command 335 Add Database Partition Server to an Instance command 178 ADD DBPARTITIONNUM command 336 ADD XMLSCHEMA DOCUMENT command syntax 339 admin configuration file 441 network parameter values 756 resetting to default 663 sample 441 administration server configuration 441 creating 13 dropping 13 advisors db2advis 23 Design Advisor 23 anyorder file type modifier 557 APPC (Advanced Program-to-Program Communication) node uncataloging 752 ARCHIVE LOG command 341 ASC import file type 494 ATTACH command 344 Audit Facility Administrator Tool command 29 Auto-start instance command 118 AUTOCONFIGURE command 346 Autostart DAS command 3
B
BACKUP DATABASE command 349 Backup Services APIs (XBSA) 349 Benchmark Tool command 34 binary files 286 binarynumerics file type modifier 557 BIND command syntax 355 Bind File Description Tool command 43 bindfile precompile option 583 binding errors 395 implicitly created schema 355, 583
C
CALL statement run through the CLP 785 case sensitivity commands 321 in naming conventions 797 CATALOG DATABASE command syntax 372 CATALOG DCS DATABASE command 375 CATALOG LDAP DATABASE command 377 CATALOG LDAP NODE command 381 CATALOG LOCAL NODE command 382 CATALOG NAMED PIPE NODE command 384 CATALOG ODBC DATA SOURCE command 386 CATALOG TCP/IP NODE command 387 cataloging databases 372 host database 375 CCSIDG precompile/bind option 355, 583 CCSIDM precompile/bind option 355, 583 CCSIDS precompile/bind option 355, 583 CHANGE DATABASE COMMENT command 390 change database partition server configuration command 176 CHANGE ISOLATION LEVEL command 392 Change search path command 56 chardel file type modifier export 433 import 494 load 557 charsub precompile/bind option 355, 583 check backup command 57 check incremental restore image sequence command 63 CLI (call level interface) configuration 451 CLI/ODBC static package binding tool command 45 CLIPKG precompile/bind option 355 CLOSE statement run through the CLP 785 CLP (command line processor) command syntax 311 quitting 618 terminating 744
commands (continued) dasmigr 6 dasupdt 8 db2 311 DB2 JDBC Package Binder Utility 138 DB2 SQLJ Profile Binder 252 DB2 SQLJ Translator 307 db2_deinstall 10 db2_install 11 db2_recon_aid 232 db2admin 13 db2adutl 15 db2advis 23 db2audit 29 db2batch 34 db2bfd 43 db2cap 45 db2cc 49 db2cfexp 51 db2cfimp 53 db2chgpath 56 db2ckbkp 57 db2ckmig 61 db2ckrst 63 db2cli 65 db2cmd 66 db2dart 67 db2daslevel 71 db2dclgn 72 db2diag 75 db2drdat 86 db2empfa 90 db2eva 91 db2evmon 93 db2evtbl 95 db2exmig 99 db2expln 100 db2extsec 105 db2flsn 106 db2fm 108 db2fs 110 db2gcf 111 db2gov 113 db2govlg 115 db2gpmap 116 db2hc 117 db2iauto 118 db2iclus 119 db2icrt 122 db2idrop 125 db2ilist 127 db2imigr 128 db2inidb 130 db2inspf 132 db2isetup 133 db2iupdt 135 db2jdbcbind 138 db2ldcfg 140 db2level 141 db2licm 142 db2listvolumes 144 db2logsforrfwd 145 db2look 146 db2ls 155 db2move 157 db2mqlsn 165
commands (continued) db2mscs 169 db2mtrk 173 db2nchg 176 db2ncrt 178 db2ndrop 180 db2osconf 181 db2pd 184 db2pdcfg 223 db2perfc 226 db2perfi 228 db2perfr 229 db2rbind 230 db2relocatedb 235 db2sampl 242 db2set 245 db2setup 248 db2sql92 249 db2sqljbind 252 db2sqljcustomize 259 db2sqljprint 270 db2start 271 db2stop 272 db2support 273 db2swtch 278 db2sync 279 db2systray 280 db2tapemgr 282 db2tbst 285 db2trc 286 db2uiddl 290 db2undgp 291 db2unins 292 db2untag 294 db2xprt 295 DEACTIVATE DATABASE 411 DECOMPOSE XML DOCUMENT 413 DEREGISTER 415 DESCRIBE 417 DETACH 423 disable_MQFunctions 299 doce_deinstall 296 doce_install 297 DROP CONTACT 424 DROP CONTACTGROUP 425 DROP DATABASE 426 DROP DBPARTITIONNUM VERIFY 428 DROP TOOLS CATALOG 429 ECHO 431 EDIT 432 enable_MQFunctions 301 EXPORT 433 FORCE APPLICATION 439 GET ADMIN CONFIGURATION 441 GET ALERT CONFIGURATION 443 GET AUTHORIZATIONS 449 GET CLI CONFIGURATION 451 GET CONNECTION STATE 453 GET CONTACTGROUP 454 GET CONTACTGROUPS 455 GET CONTACTS 456 GET DATABASE CONFIGURATION 457 GET DATABASE MANAGER CONFIGURATION 463
commands (continued) GET DATABASE MANAGER MONITOR SWITCHES 468 GET DESCRIPTION FOR HEALTH INDICATOR 471 GET HEALTH NOTIFICATION CONTACT LIST 473 GET HEALTH SNAPSHOT 474 GET INSTANCE 477 GET MONITOR SWITCHES 478 GET RECOMMENDATIONS 481 GET ROUTINE 485 GET SNAPSHOT 487 HELP 492 HISTORY 493 IMPORT 494 INITIALIZE TAPE 511 INSPECT 512 installFixPack 303 invoking help 326 LIST ACTIVE DATABASES 518 LIST APPLICATIONS 520 LIST COMMAND OPTIONS 522 LIST DATABASE DIRECTORY 523 LIST DATABASE PARTITION GROUPS 526 LIST DBPARTITIONNUMS 528 LIST DCS APPLICATIONS 529 LIST DCS DIRECTORY 531 LIST DRDA INDOUBT TRANSACTIONS 533 LIST HISTORY 535 LIST INDOUBT TRANSACTIONS 538 LIST NODE DIRECTORY 541 LIST ODBC DATA SOURCES 544 LIST PACKAGES/TABLES 545 LIST TABLESPACE CONTAINERS 548 LIST TABLESPACES 550 LIST UTILITIES 555 LOAD 557 LOAD QUERY 576 Microsoft Cluster Server 119 MIGRATE DATABASE 579 MQ Listener 165 PING 581 PRECOMPILE 583 PRUNE HISTORY/LOGFILE 607 PUT ROUTINE 609 QUERY CLIENT 611 QUIESCE 612 QUIESCE TABLESPACES FOR TABLE 615 QUIT 618 REBIND 619 RECONCILE 623 RECOVER DATABASE 627 redirecting output 321 REDISTRIBUTE DATABASE PARTITION GROUP 633 REFRESH LDAP 636 REGISTER 637 REORG INDEXES/TABLE 644 REORGCHK 654 RESET ADMIN CONFIGURATION 663
commands (continued) RESET ALERT CONFIGURATION 665 RESET DATABASE CONFIGURATION 667 RESET DATABASE MANAGER CONFIGURATION 669 RESET MONITOR 671 RESTART DATABASE 673 RESTORE DATABASE 675 REWIND TAPE 690 ROLLFORWARD DATABASE 691 RUNCMD 701 RUNSTATS 702 SET CLIENT 716 SET RUNTIME DEGREE 719 SET TABLESPACE CONTAINERS 721 SET TAPE POSITION 723 SET UTIL_IMPACT_PRIORITY 724 SET WRITE 726 setup 305 sqlj 307 START DATABASE MANAGER 728 START HADR 734 STOP DATABASE MANAGER 736 STOP HADR 739 TAKEOVER HADR 741 TERMINATE 744 UNCATALOG DATABASE 745 UNCATALOG DCS DATABASE 747 UNCATALOG LDAP DATABASE 749 UNCATALOG LDAP NODE 751 UNCATALOG NODE 752 UNCATALOG ODBC DATA SOURCE 753 UNQUIESCE 754 UPDATE ADMIN CONFIGURATION 756 UPDATE ALERT CONFIGURATION 758 UPDATE ALTERNATE SERVER FOR DATABASE 762 UPDATE ALTERNATE SERVER FOR LDAP DATABASE 764 UPDATE CLI CONFIGURATION 766 UPDATE COMMAND OPTIONS 768 UPDATE CONTACT 770 UPDATE CONTACTGROUP 771 UPDATE DATABASE CONFIGURATION 772 UPDATE DATABASE MANAGER CONFIGURATION 775 UPDATE HEALTH NOTIFICATION CONTACT LIST 777 UPDATE HISTORY FILE 778 UPDATE LDAP NODE 780 UPDATE MONITOR SWITCHES 782 COMPLETE XMLSCHEMA command syntax 394 compound file type modifier 494 configurations administration resetting to default 663 sample 441
configurations (continued) CLI, sample 451 database resetting to default 667 sample 457 updating 772 database manager, sample 463 Configure DB2 database for problem determination behavior command 223 configure LDAP environment command 140 connect precompile option 583 CONNECT statement run through the CLP 785 connectivity configuration export tool command 51 connectivity configuration import tool command 53 contacting IBM 843 continuation character command line processor (CLP) 321 Control Center starting 49 Control DB2 instance command 111 CREATE DATABASE command description 395 create instance command 122 create sample database command 242 CREATE TOOLS CATALOG command 408 cursor stability (CS) changing 392
D
DAS (DB2 Administration Server) configuration 441 creating 13 dropping 13 dasauto command 3 dascrt command 4 dasdrop command 5 dasmigr command 6 dasupdt command 8 data fragmentation, eliminating, by table reorganization 644 data integrity maintaining, with isolation levels 392 data skew redistributing data in database partition group 633 database analysis and reporting tool command 67 database configuration network parameter values 772 resetting to default 667 sample 457 updating 772 Database Connection Services (DCS) directory removing entries 747 database directories changing comments 390 description 523 sample content 523
database manager accessing from command prompt 1 instances 477 monitor switches 468, 478 starting 728 statistics 487 stopping 736 system commands 1 database manager configuration GET DATABASE MANAGER CONFIGURATION command 463 sample file 463 database monitor description 782 database movement tool command 157 database premigration tool command 61 database system monitor GET DATABASE MANAGER MONITOR SWITCHES command 468 GET MONITOR SWITCHES command 478 GET SNAPSHOT 487 RESET MONITOR command 671 UPDATE MONITOR SWITCHES command 782 databases backup history file 607 cataloging 372 changing comments in directory 390 checking authorizations 449 deleting, ensuring recovery with log files 426 dropping 426 exporting table to a file 433 home directory entry 523 importing file to table 494 indirect directory entry 523 information 487 loading file to table 557 migrating 579 monitor resetting 671 recovering 691 remote directory entry 523 removing entries (uncataloging) 745 removing host DCS entries 747 reorganizing 654 restarting 673 restoring (rebuilding) 675 rollforward recovery 691 statistics 702 dateformat file type modifier 494, 557 datesiso file type modifier 433, 494, 557 DATETIME precompile/bind option 355, 583 db2 CMD description 311 DB2 Administration Server (DAS) creating 13 dropping 13 DB2 administration server command creating 4 DB2 Administration Server command 13 db2 command 311 DB2 Connect supported connections 375
DB2 database drive map command 88 DB2 Fault Monitor command 108 DB2 Governor command 113 DB2 Governor Log Query command 115 DB2 Index Advisor 23 DB2 Information Center updating 835 versions 833 viewing in different languages 834 DB2 Interactive CLI command 65 DB2 JDBC Package Binder Utility command 138 DB2 Profile Registry command 245 DB2 SQLJ Profile Binder command 252 DB2 SQLJ Profile Customizer command 259 DB2 SQLJ Profile Printer command 270 DB2 SQLJ Translator command 307 DB2 Statistics and DDL Extraction Tool command 146 db2_deinstall command 10 db2_install command 11 db2_recon_aid command 232 db2admin command 13 db2adutl command 15 db2advis 23, 481 db2audit command 29 db2batch command 34 db2bfd command 43 db2cap command 45 db2cc command 49 db2cfexp command 51 db2cfimp command 53 db2chgpath command 56 db2ckbkp command 57 db2ckmig command 61 db2ckrst command 63 db2cli command 65 db2cmd command 66 db2dart command 67 db2daslevel command 71 db2dclgn declaration generator syntax 72 db2diag command 75 db2diag.log analysis tool command 75 db2drdat command 86 db2drvmp command 88 db2empfa command 90 db2eva command 91 db2evmon command 93 db2evtbl command 95 db2exfmt tool 97 db2exmig command 99 db2expln command 100 db2extsec command 105 db2flsn command 106 db2fm command 108 db2fs command 110 db2gcf command 111 db2gov command 113 db2govlg command 115 db2gpmap command 116 db2hc command 117 db2iauto command 118 db2iclus command 119 db2icrt command 122 db2idrop command 125
db2ilist command 127 db2imigr command 128 db2inidb command 130 db2inspf command 132 db2isetup command 133 db2iupdt command 135 db2jdbcbind command 138 db2ldcfg command 140 db2level command 141 db2licm command 142 db2listvolumes command 144 db2logsforrfwd command 145 db2look command 146 db2ls command 155 db2move command 157 db2mqlsn command 165 db2mscs command 169 db2mtrk command 173 db2nchg command 176 db2ncrt command 178 db2ndrop command 180 DB2OPTIONS 312 db2osconf command 181 db2pd command 184 db2pdcfg command 223 db2perfc command 226 db2perfi command 228 db2perfr command 229 db2rbind command 230 db2relocatedb command 235 db2rfpen command 240 db2rspgn response file generator 241 db2sampl command 242 db2set command 245 db2setup command 248 db2sql92 command 249 db2sqljbind command 252 db2sqljcustomize command 259 db2sqljprint command 270 db2start command 271, 728 db2stop command 272, 736 db2support command 273 db2swtch command 278 db2sync command 279 db2systray command 280 db2tapemgr command 282 db2tbst command 285 db2trc command 286 db2uiddl command 290 db2undgp command 291 db2unins command 292 db2untag command 294 db2xprt command 295 DCLGEN command db2dclgn declaration generator 72 DEACTIVATE DATABASE command 411 dec precompile/bind option 355, 583 decdel precompile/bind option 355, 583 Declaration Generator command 72 DECLARE CURSOR statement run through the CLP 785 DECOMPOSE XML DOCUMENT command description 413 decplusblank file type modifier 433, 494, 557
decpt file type modifier 433, 494, 557 default configuration admin, resetting to 663 database, resetting to 667 deferred_prepare precompile option 583 degree precompile/bind option 355, 583 delimiter restrictions moving data 826 delprioritychar file type modifier 494, 557 DEREGISTER command 415 DESCRIBE command 417 Design Advisor 23, 481 DETACH command 423 directories database changing comments 390 Database Connection Services (DCS), uncataloging entries 747 deleting entries 752 node removing entries 752 system database, removing 745 uncataloging 745 disable_MQFunctions command 299 disconnect precompile option 583 disconnecting command line processor front-end and back-end processes 744 Display GUIDs for all disk volumes command 144 dldel file type modifier 433, 494, 557 doce_deinstall command 296 doce_install command 297 documentation 829, 830 terms and conditions of use 837 DRDA trace command 86 DROP CONTACT command 424 DROP CONTACTGROUP command 425 DROP DATABASE command syntax 426 Drop Database Partition Server from an Instance command 180 DROP DBPARTITIONNUM VERIFY command 428 DROP TOOLS CATALOG command 429 dumpfile file type modifier 557 dumping a trace to file 286 DYNAMICRULES precompile/bind option BIND command 355 PRECOMPILE command 583
E
ECHO command 431 EDIT command 432 Enable Multipage File Allocation command 90 enable_MQFunctions command 301 environment variables DB2OPTIONS 312 error messages database configuration file 457 dropping remote databases 426
error messages (continued) invalid checksum database configuration file 667, 772 database manager configuration file 663 Event Analyzer command 91 Event Monitor Productivity Tool command 93 exit codes (CLP) list 320 explain bind option 355, 583 explain tables formatting tool for data in 97 explsnap precompile/bind option 355, 583 EXPORT command 433 export utility file type modifiers 821 exporting database tables files 433 exporting data file type modifiers for 433
F
fastparse file type modifier 557 federated precompile/bind option 355, 583 FETCH statement run through the CLP 785 file formats exporting table to file 433 importing file to table 494 file type modifiers export utility 821 EXPORT utility 433 IMPORT command 494 import utility 810 LOAD command 557 load utility 799 Find Log Sequence Number command 106 First Steps 110 FORCE APPLICATION command 439 forcein file type modifier 494, 557 Format inspect results command 132 Format trap file command 295 funcpath precompile/bind option 355, 583
G
Generate Event Monitor Target Table Definitions command 95 generatedignore file type modifier 494, 557 generatedmissing file type modifier 494, 557 generatedoverride file type modifier 557 generic precompile/bind option 355, 583 GET ADMIN CONFIGURATION command 441 GET ALERT CONFIGURATION command 443 GET AUTHORIZATIONS command 449
GET CLI CONFIGURATION command 451 GET CONNECTION STATE command 453 GET CONTACTGROUP command 454 GET CONTACTGROUPS command 455 GET CONTACTS command 456 GET DATABASE CONFIGURATION command 457 GET DATABASE MANAGER CONFIGURATION command 463 GET DATABASE MANAGER MONITOR SWITCHES command 468 GET DESCRIPTION FOR HEALTH INDICATOR command 471 Get distribution map command 116 GET HEALTH NOTIFICATION CONTACT LIST command 473 GET HEALTH SNAPSHOT command 474 GET INSTANCE command 477 GET MONITOR SWITCHES command 478 GET RECOMMENDATIONS command 481 GET ROUTINE command 485 GET SNAPSHOT command 487 effect on UPDATE MONITOR SWITCHES 782 Get Tablespace State command 285 grant bind option 355 grantgroup bind option 355 grantuser bind option 355
H
help displaying 834 for commands 326 for messages 326 for SQL statements 833 HELP command 492 HISTORY command 493 host systems cataloging databases 375 connections supported by DB2 Connect 375 removing DCS catalog entries 747
I
identityignore 494 identityignore file type modifier 557 identitymissing file type modifier 494, 557 identityoverride file type modifier 557 implicit connection 321 implieddecimal file type modifier 494, 557 IMPORT command 494 import utility file type modifiers 810 importing data 494 indexes REORGCHK command 654
indexes (continued) statistics RUNSTATS command 702 indexfreespace file type modifier 557 indexixf file type modifier 494 indexschema file type modifier 494 indoubt transaction field 538 Information Center updating 835 versions 833 viewing in different languages 834 Initialize a Mirrored Database command 130 INITIALIZE TAPE command 511 insert precompile/bind option 355, 583 INSPECT command 512 install DB2 command 248 Install DB2 command 305 Install DB2 Information Center command 297 Install DB2 product command 11 installFixPack command 303 invoking 326 command help 326 IPX/SPX node uncataloging 752 isolation levels CHANGE ISOLATION LEVEL command 392 isolation precompile/bind option 355, 583
J
JDBC Package Binder Utility command 138
K
keepblanks file type modifier 494, 557
L
LANGLEVEL precompile option 583 SQL92E 583 level precompile option 583 License Management Tool command 142 line continuation character command line processor (CLP) 321 LIST ACTIVE DATABASES command 518 LIST APPLICATIONS command 520 LIST COMMAND OPTIONS command 522 LIST DATABASE DIRECTORY command 523 LIST DATABASE PARTITION GROUPS command 526 LIST DBPARTITIONNUMS command 528 LIST DCS APPLICATIONS command 529 LIST DCS DIRECTORY command 531 LIST DRDA INDOUBT TRANSACTIONS command 533 LIST HISTORY command 535
849
LIST INDOUBT TRANSACTIONS command 538 List installed DB2 products and features command 155 List Instances command 127 List Logs Required for Rollforward Recovery command 145 LIST NODE DIRECTORY command 541 LIST ODBC DATA SOURCES command 544 LIST PACKAGES command 545 LIST PACKAGES/TABLES command 545 LIST TABLES command 545 LIST TABLESPACE CONTAINERS command 548 LIST TABLESPACES command 550 LIST UTILITIES command 555 LOAD command 557 LOAD QUERY command 576 load utility file type modifiers 799 temporary files 557 loading data file to database table 557 file type modifiers for 557 lobsinfile file type modifier 433, 494, 557 locks resetting maximum to default 667 logs listing during roll forward 691 longerror precompile option 583
M
Manage log files on tape command 282 Memory Tracker command 173 message help invoking 326 messages accessing help 311 messages precompile/bind option 355, 583 Microsoft Cluster Server command 119 MIGRATE DATABASE command 579 Migrate explain tables command 99 Migrate Instance command 128 Migrate the DB2 Administration Server command 6 modifiers file type EXPORT command 433 export utility 821 IMPORT command 494 import utility 810 LOAD command 557 load utility 799 Monitor and troubleshoot DB2 database command 184 monitoring databases 468, 478 moving data between databases 494 delimiter restrictions 826 MQ Listener command 165
N
naming conventions database manager objects 797 NetBIOS nodes uncataloging 752 no commit (NC) 392 nochecklengths file type modifier 494, 557 node directories, removing entries 752 nodefaults file type modifier 494 nodes SOCKS 387 nodoubledel file type modifier 433, 494, 557 noeofchar file type modifier 494, 557 noheader file type modifier 557 NOLINEMACRO precompile option 583 norowwarnings file type modifier 557 notices 839 notypeid file type modifier 494 NULL string 321 NULL value command line processor representation 321 nullindchar file type modifier 494, 557
O
Open DB2 Command Window command 66 OPEN statement run through the CLP 785 optimization REORG INDEXES/TABLE command 644 optlevel precompile option 583 ordering DB2 books 832 output precompile option 583 owner precompile/bind option 355, 583
P
packages recreating 619 packages precompile option 583 packeddecimal file type modifier 557 pagefreespace file type modifier 557 passwords changing through CONNECT 785 changing with ATTACH command 344 performance tuning by reorganizing tables 644 REORGCHK command 654 Performance Counters Registration Utility command 228 Performance Monitor Registration Tool command 229 phantom quiesce 615 PING command 581 PRECOMPILE command 583 PREP command 583 Prepare Unique Index Conversion to V5 Semantics command 290 preprocessor precompile option 583 printed books ordering 832 privileges database granted when creating 395 direct 449 indirect 449 report 449 Problem Analysis and Environment Collection Tool command 273 problem determination online information 837 tutorials 837 PRUNE HISTORY/LOGFILE command 607 PUT ROUTINE command 609
Q
qualifier precompile/bind option 355, 583 QUERY CLIENT command 611 queryopt precompile/bind option BIND command 355 PRECOMPILE command 583 QUIESCE command 612 QUIESCE TABLESPACES FOR TABLE command 615 quiesce, phantom 615 QUIT command 618
R
read stability (RS) changing 392 Rebind all Packages command 230 REBIND command 619 reclen file type modifier 494 loading 557 RECONCILE command syntax 623 Reconcile Multiple Tables command 232 RECOVER DATABASE command 627 recovery database 675 with roll forward 691 without roll forward 675 REDISTRIBUTE DATABASE PARTITION GROUP command 633 REFRESH LDAP command 636 REGISTER command 637 REGISTER XMLSCHEMA command syntax 640 REGISTER XSROBJECT command syntax 642 Release Container Tag command 294 release precompile/bind option 355, 583 Relocate Database command 235 Remove a DB2 Administration Server command 5 Remove Instance command 125 REORG TABLE command 644 REORGCHK command 654 repeatable read (RR) changing 392
RESET ADMIN CONFIGURATION command 663 RESET ALERT CONFIGURATION command 665 RESET DATABASE CONFIGURATION command 667 RESET DATABASE MANAGER CONFIGURATION command 669 Reset Database Performance Values command 226 RESET MONITOR command 671 Reset rollforward pending state command 240 response files generator db2rspgn 241 RESTART DATABASE command 673 RESTORE DATABASE command 675 restoring earlier versions of DB2 databases 675 RESTRICTIVE clause of CREATE DATABASE statement 395 return codes command line processor (CLP) 320 Revoke Execute Privilege command 291 REWIND TAPE command 690 ROLLFORWARD DATABASE command 691 RUNCMD command 701 RUNSTATS command syntax 702
S
schemas in new databases 395 SELECT statement run through the CLP 785 SELECT statements in EXPORT command 433 SET CLIENT command 716 Set permissions for DB2 objects command 105 SET RUNTIME DEGREE command 719 SET TABLESPACE CONTAINERS command 721 SET TAPE POSITION command 723 Set Up Windows Failover utility command 169 SET UTIL_IMPACT_PRIORITY command 724 SET WRITE command 726 setup command 305 show current DAS level command 71 Show DB2 Service Level command 141 SIGALRM signal starting database manager 728 SIGINT signal starting database manager 728 SOCKS node parameter 387 special characters in commands 321 SQL and XQuery Explain Command 100 SQL statements accessing help 311 displaying help 833
SQL statements (continued) using command line with 785 SQL92-compliant SQL statement processor command 249 sqlca precompile option 583 sqlerror precompile/bind option 355, 583 sqlflag precompile option 583 sqlj command 307 SQLJ Profile Binder command 252 SQLJ Translator command 307 sqlrules precompile option 583 sqlwarn precompile/bind option 355, 583 Start Control Center command 49 START DATABASE MANAGER command 728 Start DB2 command 271 Start DB2 Synchronizer command 279 Start DB2 system tray command 280 START HADR command 734 Start Health Center command 117 Start Instance Creation Interface command 133 starting DB2 db2start command 271 statistics database 702 database manager 487 reorganizing indexes 654 REORGCHK 654 STOP DATABASE MANAGER command 736 Stop DB2 command 272 STOP HADR command 739 stopping DB2 db2stop command 272 storage physical 644 strdel precompile/bind option 355, 583 striptblanks file type modifier 494, 557 striptnulls file type modifier 494, 557 subtableconvert file type modifier 557 Switch default DB2 copy command 278 syncpoint precompile option 583 syntax description 793 for command line processor SQL statements 785 system commands overview 1 system database directory uncataloging 745
T
tables exporting to files 433 importing files 494 loading files to 557 reorganization determining if required 654 REORG INDEXES/TABLE command 644 statistics description 702 TAKEOVER HADR command 741 tape backup 349 target precompile option 583 TCP/IP node uncataloging 752 temporary files LOAD command 557 TERMINATE command 744 termination abnormal 673 command line processor back-end process 744 normal 736 terms and conditions use of publications 837 text precompile/bind option 355, 583 timeformat file type modifier 494, 557 timestampformat file type modifier 494, 557 totalfreespace file type modifier 557 Trace command 286 traces activating 286 transform group precompile/bind option 355, 583 troubleshooting online information 837 tutorials 837 true type font requirement for command line processor 321 TSM archived images 15 tutorials troubleshooting and problem determination 837 Visual Explain 836
U
UNCATALOG DATABASE command 745 UNCATALOG DCS DATABASE command 747 UNCATALOG LDAP DATABASE command 749 UNCATALOG LDAP NODE command 751 UNCATALOG NODE command 752 UNCATALOG ODBC DATA SOURCE command 753 uncataloging database entries 745 host DCS database entries 747 system database directory 745 uncommitted reads (UR) changing 392 Uninstall DB2 Information Center command 296 Uninstall DB2 products command 292 Uninstall DB2 products or features command 10 UNQUIESCE command 754 UPDATE ADMIN CONFIGURATION command 756 UPDATE ALERT CONFIGURATION command 758 UPDATE ALTERNATE SERVER FOR DATABASE command 762 UPDATE ALTERNATE SERVER FOR LDAP DATABASE command 764 UPDATE CLI CONFIGURATION command 766 UPDATE COMMAND OPTIONS command 768 UPDATE CONTACT command 770 UPDATE CONTACTGROUP command 771 Update DAS command 8 UPDATE DATABASE CONFIGURATION command 772 UPDATE DATABASE MANAGER CONFIGURATION command 775 UPDATE HEALTH NOTIFICATION CONTACT LIST command 777 UPDATE HISTORY FILE command 778 Update installed DB2 products command 303 Update Instances command 135 UPDATE LDAP NODE command 780 UPDATE MONITOR SWITCHES command 782 updates DB2 Information Center 835 Information Center 835 usedefaults file type modifier 494, 557 user IDs authorization 449 Utility for Kernel Parameter Values command 181
V
validate precompile/bind option 355, 583 version precompile option 583 Visual Explain tutorial 836
W
WCHARTYPE precompiler option with Precompile command 583 wizards db2advis 23 Design Advisor 23 Work with TSM Archived Images command 15 workstations remote cataloging databases 372 removing catalog entries for databases from 745 uncataloging from local workstation 752
X
XBSA (Backup Services APIs) 349 XML schema repository ADD XMLSCHEMA DOCUMENT command 339 COMPLETE XMLSCHEMA command 394 REGISTER XMLSCHEMA command 640 REGISTER XSROBJECT command 642
Z
zoneddecimal file type modifier 557
Printed in USA
SC10-4226-00