DB2 Administration Guide
Version 8
SC18-7413-09
Note: Before using this information and the product it supports, be sure to read the general information under "Notices" on page 1437.
Tenth Edition, Softcopy Only (June 2009)

This edition applies to Version 8 of IBM DB2 Universal Database for z/OS (DB2 UDB for z/OS), product number 5625-DB2, and to any subsequent releases until otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product.

This softcopy version is based on the printed edition of the book and includes the changes indicated in the printed version by vertical bars. Additional changes made to this softcopy version of the book since the hardcopy book was published are indicated by the hash (#) symbol in the left-hand margin. Editorial changes that have no technical significance are not noted.

This and other books in the DB2 UDB for z/OS library are periodically updated with technical changes. These updates are made available to licensees of the product on CD-ROM and on the Web (currently at www.ibm.com/software/data/db2/zos/library.html). Check these resources to ensure that you are using the most current information.

© Copyright International Business Machines Corporation 1982, 2009. US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About this book . . . xxv
  Who should read this book . . . xxv
  Terminology and citations . . . xxvi
  How to read the syntax diagrams . . . xxvi
  Accessibility . . . xxviii
  How to send your comments . . . xxviii
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. System planning concepts . . . . . . . . . . . . . . . . . . . . . . . 3
The structure of DB2 . . . 3
  Data structures . . . 3
  System structures . . . 7
  More information about data structures . . . 10
Control and maintenance of DB2 . . . 11
  Commands . . . 11
  Utilities . . . 11
  High availability . . . 12
  More information about control and maintenance of DB2 . . . 13
The DB2 environment . . . 13
  Address spaces . . . 13
  DB2 lock manager . . . 14
  DB2 attachment facilities . . . 14
  DB2 and distributed data . . . 18
  DB2 and z/OS . . . 19
  DB2 and the Parallel Sysplex . . . 19
  DB2 and the Security Server for z/OS . . . 19
  DB2 and DFSMS . . . 20
  More information about the z/OS environment . . . 20
Altering user-defined functions
Moving from index-controlled to table-controlled partitioning
Changing the high-level qualifier for DB2 data sets
  Defining a new integrated catalog alias
  Changing the qualifier for system data sets
  Changing qualifiers for other databases and user data sets
Moving DB2 data
  Tools for moving DB2 data
  Moving a DB2 data set
  Copying a relational database
  Copying an entire DB2 subsystem
Example of roles and authorizations for a routine . . . 157
Which IDs can exercise which privileges . . . 161
  Authorization for dynamic SQL statements . . . 164
  Composite privileges . . . 171
  Multiple actions in one statement . . . 171
Matching job titles with privileges . . . 171
Examples of granting and revoking privileges . . . 173
  Examples using the GRANT statement . . . 174
  Examples with secondary IDs . . . 176
  Examples using the REVOKE statement . . . 180
Finding catalog information about privileges . . . 187
  Retrieving information in the catalog . . . 187
  Creating views of the DB2 catalog tables . . . 191
Multilevel security . . . 192
  Introduction to multilevel security . . . 192
  Implementing multilevel security with DB2 . . . 195
  Working with data in a multilevel-secure environment . . . 199
  Implementing multilevel security in a distributed environment . . . 206
Data encryption through built-in functions . . . 207
  Defining columns for encrypted data . . . 208
  Defining encryption at the column level . . . 208
  Defining encryption at the value level . . . 210
  Ensuring accurate predicate evaluation for encrypted data . . . 211
  Encrypting non-character values . . . 211
  Performance recommendations for data encryption . . . 212
Translating outbound IDs . . . 260
Sending passwords . . . 262
Establishing RACF protection for DB2 . . . 264
  Defining DB2 resources to RACF . . . 265
  Permitting RACF access . . . 267
  Issuing DB2 commands . . . 274
  Establishing RACF protection for stored procedures . . . 274
  Establishing RACF protection for TCP/IP . . . 278
Establishing Kerberos authentication through RACF . . . 278
Other methods of controlling access . . . 279
  Auditing payroll operations and payroll management
Securing administrator, owner, and other access
  Securing access by IDs with database administrator authority
  Securing access by IDs with system administrator authority
  Securing access by owners with implicit privileges on objects
  Securing access by other users
Chapter 17. Monitoring and controlling DB2 and its connections . . . . . . . . . . 367
Controlling DB2 databases and buffer pools . . . 367
  Starting databases . . . 368
  Monitoring databases . . . 369
  Stopping databases . . . 375
  Altering buffer pools . . . 376
  Monitoring buffer pools . . . 377
Controlling user-defined functions . . . 377
  Starting user-defined functions . . . 377
  Monitoring user-defined functions . . . 378
  Stopping user-defined functions . . . 378
Controlling DB2 utilities . . . 379
  Starting online utilities . . . 379
  Monitoring online utilities . . . 379
  Stand-alone utilities . . . 380
Controlling the IRLM . . . 380
  Starting the IRLM . . . 381
  Modifying the IRLM . . . 381
  Monitoring the IRLM connection . . . 382
  Stopping the IRLM . . . 382
Monitoring threads . . . 383
Controlling TSO connections . . . 384
  Connecting to DB2 from TSO . . . 385
  Monitoring TSO and CAF connections . . . 385
  Disconnecting from DB2 while under TSO . . . 387
Controlling CICS connections . . . 387
  Connecting from CICS . . . 388
  Controlling CICS connections . . . 389
  Disconnecting from CICS . . . 391
Controlling IMS connections . . . 392
  Connecting to the IMS control region . . . 392
  Controlling IMS dependent region connections . . . 397
  Disconnecting from IMS . . . 400
Controlling RRS connections . . . 401
  Connecting to RRS using RRSAF . . . 401
  Monitoring RRSAF connections . . . 403
Controlling connections to remote systems . . . 404
  Starting the DDF . . . 405
  Suspending and resuming DDF server activity . . . 405
  Monitoring connections to other systems . . . 406
  Monitoring and controlling stored procedures . . . 417
  Using NetView to monitor errors . . . 420
  Stopping the DDF . . . 421
Controlling traces . . . 422
  Controlling the DB2 trace . . . 423
  Diagnostic traces for attachment facilities . . . 424
  Diagnostic trace for the IRLM . . . 424
Controlling the resource limit facility (governor) . . . 425
Changing subsystem parameter values . . . 425
Chapter 18. Managing the log and the bootstrap data set . . . . . . . . . . . . . 427
How database changes are made . . . 427
  Units of recovery . . . 427
  Rolling back work . . . 428
Establishing the logging environment . . . 429
  Creation of log records . . . 429
  Retrieval of log records . . . 429
  Writing the active log . . . 430
  Writing the archive log (offloading) . . . 430
Controlling the log . . . 435
  Archiving the log . . . 435
  Dynamically changing the checkpoint frequency . . . 437
  Monitoring the system checkpoint . . . 438
  Setting limits for archive log tape units . . . 438
  Displaying log information . . . 438
Resetting the log RBA . . . 438
  Log RBA range . . . 439
  Resetting the log RBA value in a data sharing environment
  Resetting the log RBA value in a non-data sharing environment
Managing the bootstrap data set (BSDS)
  BSDS copies with archive log data sets
  Changing the BSDS log inventory
Discarding archive log records
  Deleting archive logs automatically
  Locating archive log data sets
  Backing up with DFSMS . . . 495
  Backing up with RVA storage control or Enterprise Storage Server . . . 495
Recovering page sets and data sets . . . 495
  Recovering the work file database . . . 497
  Problems with the user-defined work file data set . . . 497
  Problems with DB2-managed work file data sets . . . 497
  Recovering error ranges for a work file table space . . . 497
Recovering the catalog and directory . . . 498
Recovering data to a prior point of consistency . . . 499
  Considerations for recovering to a prior point of consistency . . . 499
  Using RECOVER to restore data to a previous point-in-time . . . 504
  Recovery of dropped objects . . . 509
  Discarding SYSCOPY and SYSLGRNX records . . . 514
  System-level point-in-time recovery . . . 515
Description of the recovery environment
Communication failure recovery
Making a heuristic decision
Recovery from an IMS outage that results in an IMS cold start
Recovery from a DB2 outage at a requester that results in a DB2 cold start
Recovery from a DB2 outage at a server that results in a DB2 cold start
Correcting a heuristic decision
Chapter 23. Recovery from BSDS or log failure during restart . . . . . . . . . . . 597
Log initialization or current status rebuild failure recovery . . . 599
  Description of failure during log initialization . . . 600
  Description of failure during current status rebuild . . . 601
  Restart by truncating the log . . . 601
Failure during forward log recovery . . . 608
  Understanding forward log recovery failure . . . 609
  Starting DB2 by limiting restart processing . . . 609
Failure during backward log recovery . . . 613
  Understanding backward log recovery failure . . . 614
  Bypassing backout before restarting . . . 614
Failure during a log RBA read request . . . 615
Unresolvable BSDS or log data set problem during restart . . . 616
  Preparing for recovery or restart . . . 617
  Performing fall back to a prior shutdown point . . . 617
Failure resulting from total or excessive loss of log data . . . 619
  Total loss of the log . . . 619
  Excessive loss of data in the active log . . . 620
Resolving inconsistencies resulting from a conditional restart . . . 622
  Inconsistencies in a distributed environment . . . 622
  Procedures for resolving inconsistencies . . . 622
  Method 1. Recover to a prior point of consistency . . . 623
  Method 2. Re-create the table space . . . 624
  Method 3. Use the REPAIR utility on the data . . . 624
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools . . . . . . . . . . . . . 671
Tuning database buffer pools . . . 671
  Terminology: Types of buffer pool pages . . . 672
  Read operations . . . 672
  Write operations . . . 672
  Assigning a table space or index to a buffer pool . . . 673
  Buffer pool thresholds . . . 673
  Determining size and number of buffer pools . . . 677
  Choosing a page-stealing algorithm . . . 680
  Long-term page fix option for buffer pools . . . 681
  Monitoring and tuning buffer pools using online commands . . . 681
  Using OMEGAMON to monitor buffer pool statistics . . . 683
Tuning EDM storage . . . 685
  EDM storage space handling . . . 686
  Tips for managing EDM storage . . . 688
Increasing RID pool size . . . 689
Controlling sort pool size and sort processing . . . 690
  Estimating the maximum size of the sort pool . . . 690
  How sort work files are allocated . . . 690
  Improving the performance of sort processing . . . 691
  Determining z/OS workload management velocity goals . . . 717
  How DB2 assigns I/O priorities . . . 718
  IBM System z9 Integrated Information Processor usage monitoring . . . 719
Controlling resource usage . . . 720
  Prioritizing resources . . . 721
  Limiting resources for each job . . . 721
  Limiting resources for TSO sessions . . . 721
  Limiting resources for IMS and CICS . . . 722
  Limiting resources for a stored procedure . . . 722
Resource limit facility (governor) . . . 722
  Using resource limit tables (RLSTs) . . . 723
  Governing dynamic queries . . . 728
  Restricting bind operations . . . 732
  Restricting parallelism modes . . . 733
Rewriting queries to influence access path selection . . . 782
Writing efficient subqueries . . . 785
  Correlated subqueries . . . 785
  Noncorrelated subqueries . . . 786
  Conditions for DB2 to transform a subquery into a join . . . 788
  Subquery tuning . . . 790
Using scrollable cursors efficiently . . . 790
Writing efficient queries on tables with data-partitioned secondary indexes . . . 792
Special techniques to influence access path selection . . . 794
  Obtaining information about access paths . . . 794
  Fetching a limited number of rows: FETCH FIRST n ROWS ONLY . . . 795
  Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS . . . 795
  Favoring index access . . . 798
  Using a subsystem parameter to control outer join processing . . . 798
  Using the CARDINALITY clause to improve the performance of queries with user-defined table function references . . . 798
  Reducing the number of matching columns . . . 799
  Creating indexes for efficient star-join processing . . . 801
  Rearranging the order of tables in a FROM clause . . . 803
  Updating catalog statistics . . . 804
  Using a subsystem parameter . . . 805
  Giving optimization hints to DB2 . . . 806
Compatibility of utilities . . . 871
Concurrency during REORG . . . 872
Utility operations with nonpartitioned indexes . . . 873
Monitoring of DB2 locking . . . 873
  Using EXPLAIN to tell which locks DB2 chooses . . . 874
  Using the statistics and accounting traces to monitor locking . . . 875
  Scenario for analyzing concurrency . . . 876
Deadlock detection scenarios . . . 881
  Scenario 1: Two-way deadlock with two resources . . . 881
  Scenario 2: Three-way deadlock with three resources . . . 883
  Is access through an index? (ACCESSTYPE is I, I1, N or MX) . . . 944
  Is access through more than one index? (ACCESSTYPE=M) . . . 944
  How many columns of the index are used in matching? (MATCHCOLS=n) . . . 945
  Is the query satisfied using only the index? (INDEXONLY=Y) . . . 946
  Is direct row access possible? (PRIMARY_ACCESSTYPE = D) . . . 946
  Is a view or nested table expression materialized? . . . 948
  Was a scan limited to certain partitions? (PAGE_RANGE=Y) . . . 948
  What kind of prefetching is expected? (PREFETCH = L, S, D, or blank) . . . 949
  Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or X) . . . 949
  Are sorts performed? . . . 949
  Is a subquery transformed into a join? . . . 950
  When are aggregate functions evaluated? (COLUMN_FN_EVAL) . . . 950
  How many index screening columns are used? . . . 950
  Is a complex trigger WHEN clause used? (QBLOCKTYPE=TRIGGR) . . . 950
Interpreting access to a single table . . . 951
  Table space scans (ACCESSTYPE=R PREFETCH=S) . . . 951
  Overview of index access . . . 952
  Index access paths . . . 954
  UPDATE using an index . . . 958
Interpreting access to two or more tables (join) . . . 959
  Definitions and examples of join operations . . . 959
  Nested loop join (METHOD=1) . . . 961
  Merge scan join (METHOD=2) . . . 963
  Hybrid join (METHOD=4) . . . 965
  Star join (JOIN_TYPE=S) . . . 967
Interpreting data prefetch . . . 973
  Sequential prefetch (PREFETCH=S) . . . 973
  Dynamic prefetch (PREFETCH=D) . . . 974
  List prefetch (PREFETCH=L) . . . 974
  Sequential detection at execution time . . . 976
Determining sort activity . . . 977
  Sorts of data . . . 977
  Sorts of RIDs . . . 978
  The effect of sorts on OPEN CURSOR . . . 978
Processing for views and nested table expressions . . . 979
  Merge . . . 979
  Materialization . . . 980
  Using EXPLAIN to determine when materialization occurs . . . 982
  Using EXPLAIN to determine UNION activity and query rewrite . . . 983
  Performance of merge versus materialization . . . 985
Estimating a statement's cost . . . 985
  Creating a statement table . . . 986
  Populating and maintaining a statement table . . . 988
  Retrieving rows from a statement table . . . 988
  The implications of cost categories . . . 989
Using DISPLAY THREAD
Using DB2 trace
Tuning parallel processing
Disabling query parallelism
Chapter 37. Monitoring and tuning stored procedures and user-defined functions . . . 1023
Controlling address space storage . . . 1023
Assigning procedures and functions to WLM application environments . . . 1024
Providing DB2 cost information for accessing user-defined table functions . . . 1025
Monitoring stored procedures with the accounting trace . . . 1026
Accounting for nested activities . . . 1028
Comparing the types of stored procedure address spaces . . . 1029
Debugging connection and sign-on routines . . . 1063
Session variables in connection and sign-on routines . . . 1064
Access control authorization exit routine . . . 1065
  Specifying the access control authorization routine . . . 1066
  The default access control authorization routine . . . 1067
  When the access control authorization routine is taken . . . 1067
  Considerations for the access control authorization routine . . . 1067
  Parameter list for the access control authorization routine . . . 1070
  Expected output for the access control authorization routine . . . 1077
  Debugging the access control authorization routine . . . 1080
  Determining whether the access control authorization routine is active . . . 1080
Edit routines . . . 1081
  Specifying the edit routine . . . 1082
  When the edit routine is taken . . . 1082
  Parameter lists for the edit routine . . . 1082
  Processing requirements for edit routines . . . 1083
  Incomplete rows and edit routines . . . 1083
  Expected output for edit routines . . . 1084
Validation routines . . . 1084
  Specifying the validation routine . . . 1085
  When the validation routine is taken . . . 1085
  Parameter lists for the validation routine . . . 1085
  Processing requirements for validation routines . . . 1086
  Incomplete rows and validation routines . . . 1086
  Expected output for validation routines . . . 1086
Date and time routines . . . 1087
  Specifying the date and time routine . . . 1088
  When date and time routines are taken . . . 1088
  Parameter lists for date and time routines . . . 1089
  Expected output for date and time routines . . . 1089
Conversion procedures . . . 1090
  Specifying a conversion procedure . . . 1090
  When conversion procedures are taken . . . 1091
  Parameter lists for conversion procedures . . . 1091
  Expected output for conversion procedures . . . 1092
Field procedures . . . 1093
  Field definition for field procedures . . . 1094
  Specifying the field procedure . . . 1094
  When field procedures are taken . . . 1095
  Control blocks for execution of field procedures . . . 1096
  Field-definition (function code 8) . . . 1099
  Field-encoding (function code 0) . . . 1102
  Field-decoding (function code 4) . . . 1103
Log capture routines . . . 1105
  Specifying the log capture routine . . . 1105
  When log capture routines are taken . . . 1106
  Parameter lists for log capture routines . . . 1106
Routines for dynamic plan selection in CICS . . . 1107
Routine for CICS transaction invocation stored procedure . . . 1108
General considerations for writing exit routines . . . 1108
  Coding rules for exit routines . . . 1108
  Modifying exit routines . . . 1109
  Execution environment for exit routines . . . 1109
  Registers at invocation for exit routines . . . 1109
  Parameter lists for exit routines . . . 1110
Row formats for edit and validation routines . . . 1110
  Column boundaries for edit and validation routines . . . 1111
  Null values for edit procedures, field procedures, and validation routines . . . 1111
  Fixed-length rows for edit and validation routines . . . 1111
  Varying-length rows for edit and validation routines . . . 1111
  Varying-length rows with nulls for edit and validation routines . . . 1112
  Dates, times, and timestamps for edit and validation routines
  Parameter list for row format descriptions
  DB2 codes for numeric data in edit and validation routines
RACF access control module
Syntax for READA requests through IFI . . . 1177
Usage notes for READA requests through IFI . . . 1177
Asynchronous data and READA requests through IFI . . . 1178
Example of READA requests through IFI . . . 1179
WRITE: Syntax and usage with IFI . . . 1179
  Authorization for WRITE requests through IFI . . . 1179
  Syntax for WRITE requests through IFI . . . 1179
  Usage notes for WRITE requests through IFI . . . 1180
Common communication areas for IFI calls . . . 1180
  Instrumentation facility communication area (IFCA) . . . 1180
  Return area . . . 1184
  IFCID area . . . 1185
  Output area . . . 1185
Using IFI in a data sharing group . . . 1185
Interpreting records returned by IFI . . . 1187
  Trace data record format . . . 1187
  Command record format . . . 1188
Data integrity and IFI . . . 1189
Auditing data and IFI . . . 1189
Locking considerations for IFI . . . 1189
Recovery considerations for IFI . . . 1190
Errors and IFI . . . 1190
Establishing base values for real-time statistics . . . 1237
Contents of the real-time statistics tables . . . 1237
Operating with real-time statistics . . . 1249
  When DB2 externalizes real-time statistics . . . 1250
  How DB2 utilities affect the real-time statistics . . . 1250
  How non-DB2 utilities affect real-time statistics . . . 1257
  Real-time statistics on objects in work file databases and the TEMP database . . . 1258
  Real-time statistics for DEFINE NO objects . . . 1258
  Real-time statistics on read-only or nonmodified objects . . . 1258
  How dropping objects affects real-time statistics . . . 1258
  How SQL operations affect real-time statistics counters . . . 1258
  Real-time statistics in data sharing . . . 1259
  Improving concurrency with real-time statistics . . . 1260
  Recovering the real-time statistics tables . . . 1260
  Statistics accuracy . . . 1260
  Examples of DSNAIMS2 invocation . . . 1305
  Connecting to multiple IMS subsystems with DSNAIMS2 . . . 1306
The DB2 EXPLAIN stored procedure . . . 1306
  Environment . . . 1306
  Authorization required . . . 1307
  DSNAEXP syntax diagram . . . 1307
  DSNAEXP option descriptions . . . 1307
  Example of DSNAEXP invocation . . . 1308
  DSNAEXP output . . . 1309
ADMIN_COMMAND_DB2 stored procedure . . . 1309
ADMIN_COMMAND_DSN stored procedure . . . 1321
ADMIN_COMMAND_UNIX stored procedure . . . 1324
ADMIN_DS_BROWSE stored procedure . . . 1327
ADMIN_DS_DELETE stored procedure . . . 1330
ADMIN_DS_LIST stored procedure . . . 1333
ADMIN_DS_RENAME stored procedure . . . 1339
ADMIN_DS_SEARCH stored procedure . . . 1342
ADMIN_DS_WRITE stored procedure . . . 1344
ADMIN_INFO_HOST stored procedure . . . 1348
ADMIN_INFO_SSID stored procedure . . . 1352
ADMIN_INFO_SYSPARM stored procedure . . . 1353
ADMIN_JOB_CANCEL stored procedure . . . 1357
ADMIN_JOB_FETCH stored procedure . . . 1360
ADMIN_JOB_QUERY stored procedure . . . 1363
ADMIN_JOB_SUBMIT stored procedure . . . 1366
ADMIN_UTL_SCHEDULE stored procedure . . . 1369
ADMIN_UTL_SORT stored procedure . . . 1379
Common SQL API stored procedures . . . 1386
  Versioning of XML documents . . . 1387
  XML input documents . . . 1388
  XML output documents . . . 1389
  XML message documents . . . 1390
  GET_CONFIG stored procedure . . . 1391
  GET_MESSAGE stored procedure . . . 1410
  GET_SYSTEM_INFO stored procedure . . . 1418
Important: In this version of DB2 UDB for z/OS, the DB2 Utilities Suite is available as an optional product. You must separately order and purchase a license to such utilities, and discussion of those utility functions in this publication is not intended to otherwise imply that you have a license to them. See Part 1 of DB2 Utility Guide and Reference for packaging details.

The DB2 Utilities Suite is designed to work with the DFSORT program, which you are licensed to use in support of the DB2 utilities even if you do not otherwise license DFSORT for general use. If your primary sort product is not DFSORT, consider the following informational APARs mandatory reading:
• II14047/II14213: USE OF DFSORT BY DB2 UTILITIES
• II13495: HOW DFSORT TAKES ADVANTAGE OF 64-BIT REAL ARCHITECTURE
These informational APARs are periodically updated.
Certain tasks require additional skills, such as knowledge of Transmission Control Protocol/Internet Protocol (TCP/IP) or Virtual Telecommunications Access Method (VTAM) to set up communication between DB2 subsystems, or knowledge of the IBM System Modification Program Extended (SMP/E) to install IBM licensed programs.
OMEGAMON
  Refers to any of the following products:
  • IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS
  • IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS
  • IBM DB2 Performance Expert for Multiplatforms and Workgroups
  • IBM DB2 Buffer Pool Analyzer for z/OS
C, C++, and C language
  Represent the C or C++ programming language.
CICS
  Represents CICS Transaction Server for z/OS or CICS Transaction Server for OS/390.
IMS
  Represents the IMS Database Manager or IMS Transaction Manager.
MVS
  Represents the MVS element of the z/OS operating system, which is equivalent to the Base Control Program (BCP) component of the z/OS operating system.
RACF
  Represents the functions that are provided by the RACF component of the z/OS Security Server.
If an optional item appears above the main path, that item has no effect on the execution of the statement and is used only for readability.
[Syntax diagram: optional_item shown above the main path, required_item on the main path]
• If you can choose from two or more items, they appear vertically, in a stack. If you must choose one of the items, one item of the stack appears on the main path.
[Syntax diagram: required_item followed by a required choice between required_choice1, on the main path, and required_choice2, stacked below it]
If choosing one of the items is optional, the entire stack appears below the main path.
[Syntax diagram: required_item with optional_choice1 and optional_choice2 stacked below the main path]
If one of the items is the default, it appears above the main path and the remaining choices are shown below.
[Syntax diagram: required_item with default_choice above the main path and the two optional_choice items stacked below it]
• An arrow returning to the left, above the main line, indicates an item that can be repeated.
[Syntax diagram: required_item followed by repeatable_item with a repeat arrow returning to the left]
If the repeat arrow contains a comma, you must separate repeated items with a comma.
[Syntax diagram: required_item followed by repeatable_item with a repeat arrow containing a comma]
A repeat arrow above a stack indicates that you can repeat the items in the stack.
• Keywords appear in uppercase (for example, FROM). They must be spelled exactly as shown. Variables appear in all lowercase letters (for example, column-name). They represent user-supplied names or values.
• If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, you must enter them as part of the syntax.
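As an illustration, the following sketch (a hypothetical fragment in the style of these diagrams, not copied from an actual statement description) shows a required FROM keyword followed by one or more user-supplied table-name values separated by commas:

           .-,----------.
           V            |
   >>-FROM---table-name-+--------------------------------------><

Reading left to right along the main path: FROM must be entered exactly as shown, table-name is a user-supplied value, and the repeat arrow containing a comma means that you can list several names, separated by commas.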
Accessibility
Accessibility features help a user who has a physical disability, such as restricted mobility or limited vision, to use software products. The major accessibility features in z/OS products, including DB2 UDB for z/OS, enable users to:
• Use assistive technologies such as screen reader and screen magnifier software
• Operate specific or equivalent features by using only a keyboard
• Customize display attributes such as color, contrast, and font size

Assistive technology products, such as screen readers, function with the DB2 UDB for z/OS user interfaces. Consult the documentation for the assistive technology products for specific information when you use assistive technology to access these interfaces.

Online documentation for Version 8 of DB2 UDB for z/OS is available in the Information management software for z/OS solutions information center, which is an accessible format when used with assistive technologies such as screen reader or screen magnifier software. The Information management software for z/OS solutions information center is available at the following Web site: http://publib.boulder.ibm.com/infocenter/dzichelp
• "Allocating buffer pool storage" on page 679 provides recommendations on allocating buffer pool storage.
• "Long-term page fix option for buffer pools" on page 681 describes how to use the PGFIX keyword with the ALTER BUFFERPOOL command to fix a buffer pool in real storage for an extended period of time.
• "Evaluating your indexes" on page 711 provides information about evaluating the effectiveness of your indexes, avoiding unnecessary indexes, and using nonpadded indexes.
• "Database access threads" on page 741 introduces new terminology for database access threads. Type 1 inactive threads become inactive database access threads, and type 2 inactive threads become inactive connections. Additionally, pooling behaviors are discussed.
• Chapter 30, "Tuning your queries," on page 751 includes discussion of the predicates IS DISTINCT FROM and IS NOT DISTINCT FROM.
• "Changing the access path at run time" on page 779 describes the bind options REOPT(ONCE) and REOPT(ALWAYS).
• "Favoring index access" on page 798 introduces the VOLATILE keyword and explains how to use it to minimize contention among certain applications.
• Chapter 32, "Using materialized query tables," on page 885 describes how to use materialized query tables to improve the performance of queries that require expensive join and aggregation operations, such as some queries that are used in data warehousing applications.
• "Dedicated virtual memory pool for star join operations" on page 971 explains the advantages of a dedicated virtual memory pool for star join operations and demonstrates how to determine the size of the virtual memory pool.

Appendix B, "Writing exit routines," has changed as follows:
• "Session variables in connection and sign-on routines" on page 1064 explains how session variables work in connection and sign-on routines.
• "Creating materialized query tables" on page 1070 discusses the authorization issues involved with creating a materialized query table.
• "RACF access control module" on page 1114 describes the RACF access control module.

Appendix D, "Interpreting DB2 trace output," has changed as follows:
• "Self-defining section" on page 1147 describes the self-defining section for variable-length data items.
Part 1. Introduction
Chapter 1. System planning concepts . . . 3
The structure of DB2 . . . 3
  Data structures . . . 3
    Databases . . . 5
    Storage groups . . . 5
    Table spaces . . . 5
    Tables . . . 6
    Indexes . . . 6
    Views . . . 7
  System structures . . . 7
    DB2 catalog . . . 7
    DB2 directory . . . 8
    Active and archive logs . . . 8
    Bootstrap data set (BSDS) . . . 9
    Buffer pools . . . 9
    Data definition control support database . . . 9
    Resource limit facility database . . . 10
    Work file database . . . 10
    TEMP database . . . 10
  More information about data structures . . . 10
Control and maintenance of DB2 . . . 11
  Commands . . . 11
  Utilities . . . 11
  High availability . . . 12
    Daily operations and tuning . . . 12
    Backup and recovery . . . 12
    Restart . . . 13
  More information about control and maintenance of DB2 . . . 13
The DB2 environment . . . 13
  Address spaces . . . 13
  DB2 lock manager . . . 14
    What IRLM does . . . 14
    Administering IRLM . . . 14
  DB2 attachment facilities . . . 14
    WebSphere . . . 15
    CICS . . . 15
    IMS . . . 16
    TSO . . . 17
    CAF . . . 18
    RRS . . . 18
  DB2 and distributed data . . . 18
  DB2 and z/OS . . . 19
  DB2 and the Parallel Sysplex . . . 19
  DB2 and the Security Server for z/OS . . . 19
  DB2 and DFSMS . . . 20
  More information about the z/OS environment . . . 20
Data structures
DB2 data structures described in this section include:
• "Databases" on page 5
• "Storage groups" on page 5
• "Table spaces" on page 5
• "Tables" on page 6
• "Indexes" on page 6
• "Views" on page 7

The brief descriptions here show how the structures fit into an overall view of DB2. Figure 1 on page 4 shows how some DB2 structures contain others. To some extent, the notion of containment provides a hierarchy of structures. This section introduces those structures from the most to the least inclusive.
The DB2 objects that Figure 1 introduces are:
Databases
  A set of DB2 structures that include a collection of tables, their associated indexes, and the table spaces in which they reside.
Storage groups
  A set of volumes on disks that hold the data sets in which tables and indexes are actually stored.
Table spaces
  A logical unit of storage in a database. A table space is divided into equal-sized pages and holds the data of one or more tables.
Tables
  All data in a DB2 database is presented in tables: collections of rows all having the same columns. A table that holds persistent user data is a base table. A table that stores data temporarily is a temporary table.
Indexes
  An index is an ordered set of pointers to the data in a DB2 table. The index is stored separately from the table.
Views
  A view is an alternate way of representing data that exists in one or more tables. A view can include all or some of the columns from one or more base tables.
Databases
A single database can contain all the data associated with one application or with a group of related applications. Collecting that data into one database allows you to start or stop access to all the data in one operation and grant authorization for access to all the data as a single unit. Assuming that you are authorized to do so, you can access data stored in different databases.

If you create a table space or a table and do not specify a database, the table or table space is created in the default database, DSNDB04. DSNDB04 is defined for you at installation time. All users have the authority to create table spaces or tables in database DSNDB04. The system administrator can revoke those privileges and grant them only to particular users as necessary.

When you migrate to Version 8, DB2 adopts the default database and default storage group you used in Version 7. You have the same authority for Version 8 as you did in Version 7.
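For illustration, here is a minimal sketch of these ideas. The database name MYAPPDB and the authorization ID ALICE are hypothetical; SYSDEFLT and BP0 are the default storage group and buffer pool described in this chapter.

   -- Create a database so that its objects can be started, stopped,
   -- and authorized as a unit, instead of defaulting to DSNDB04:
   CREATE DATABASE MYAPPDB
     STOGROUP SYSDEFLT
     BUFFERPOOL BP0;

   -- Give one user database-level administrative authority over it:
   GRANT DBADM ON DATABASE MYAPPDB TO ALICE;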
Storage groups
The description of a storage group names the group and identifies its volumes and the VSAM (virtual storage access method) catalog that records the data sets. The default storage group, SYSDEFLT, is created when you install DB2. All volumes of a given storage group must have the same device type. But, as Figure 1 on page 4 suggests, parts of a single database can be stored in different storage groups.
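A minimal sketch of defining a storage group follows; the group name, volume serials, and catalog name are hypothetical.

   -- Define a storage group over two disk volumes; DSNCAT names the
   -- VSAM catalog that records the data sets.
   CREATE STOGROUP MYSTOGRP
     VOLUMES (VOL001, VOL002)
     VCAT DSNCAT;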
Table spaces
A table space can consist of a number of VSAM data sets. Data sets are VSAM linear data sets (LDSs). Table spaces are divided into equal-sized units, called pages, which are written to or read from disk in one operation. You can specify page sizes for the data; the default page size is 4 KB.

When you create a table space, you can specify the database to which the table space belongs and the storage group it uses. If you do not specify the database and storage group, DB2 assigns the table space to the default database and the default storage group. You also determine what kind of table space is created:
Partitioned
  Divides the available space into separate units of storage called partitions. Each partition contains one data set of one table. You assign the number of partitions (from 1 to 4096), and you can assign partitions independently to different storage groups.
Segmented
  Divides the available space into groups of pages called segments. Each segment is the same size. A segment contains rows from only one table.
Large object (LOB)
  Holds large object data such as graphics, video, or very large text strings. A LOB table space is always associated with the table space that contains the logical LOB column values. The table space that contains the table with the LOB columns is called, in this context, the base table space.
Simple
  Can contain more than one table. The rows of different tables are not kept separate (unlike segmented table spaces).
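Continuing the sketch with the hypothetical database and storage group from the earlier examples, the following statement creates a segmented table space:

   -- Create a segmented table space (4-KB pages by default);
   -- SEGSIZE 4 groups its pages into 4-page segments.
   CREATE TABLESPACE MYTS
     IN MYAPPDB
     USING STOGROUP MYSTOGRP
     SEGSIZE 4
     BUFFERPOOL BP0;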
Tables
When you create a table in DB2, you define an ordered set of columns. Sample tables: The examples in this book are based on the set of tables described in Appendix A, DB2 sample tables, on page 1035. The sample tables are part of the DB2 licensed program and represent data related to the activities of an imaginary computer services company, the Spiffy Computer Services Company. Table 1 shows an example of a DB2 sample table.
Table 1. Example of a DB2 sample table (Department table)
DEPTNO   DEPTNAME                       MGRNO    ADMRDEPT
A00      SPIFFY COMPUTER SERVICE DIV.   000010   A00
B01      PLANNING                       000020   A00
C01      INFORMATION CENTER             000030   A00
D01      DEVELOPMENT CENTER             ------   A00
E01      SUPPORT SERVICES               000050   A00
D11      MANUFACTURING SYSTEMS          000060   D01
D21      ADMINISTRATION SYSTEMS         000070   D01
E11      OPERATIONS                     000090   E01
E21      SOFTWARE SUPPORT               000100   E01
The department table contains:
v Columns: The ordered set of columns is DEPTNO, DEPTNAME, MGRNO, and ADMRDEPT. All the data in a given column must be of the same data type.
v Row: Each row contains data for a single department.
v Value: At the intersection of a column and row is a value. For example, PLANNING is the value of the DEPTNAME column in the row for department B01.
v Referential constraints: You can assign a primary key and foreign keys to tables. DB2 can automatically enforce the integrity of references from a foreign key to a primary key by guarding against insertions, updates, or deletions that violate the integrity.
  Primary key: A column or set of columns whose values uniquely identify each row, for example, DEPTNO.
  Foreign key: Columns of other tables whose values must be equal to values of the primary key of the first table (in this case, the department table). In the sample employee table, the column that shows what department an employee works in is a foreign key; its values must be values of the department number column in the department table.
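In SQL terms, the relationship between the sample department and employee tables might be declared as follows. This is only an abbreviated sketch of the sample tables in Appendix A; with explicit primary keys, you must also create a unique index on each key before the table definition is complete:

   CREATE TABLE DSN8810.DEPT
     (DEPTNO   CHAR(3)      NOT NULL,
      DEPTNAME VARCHAR(36)  NOT NULL,
      MGRNO    CHAR(6),
      ADMRDEPT CHAR(3)      NOT NULL,
      PRIMARY KEY (DEPTNO));

   CREATE TABLE DSN8810.EMP
     (EMPNO    CHAR(6)      NOT NULL,
      WORKDEPT CHAR(3),
      PRIMARY KEY (EMPNO),
      FOREIGN KEY (WORKDEPT) REFERENCES DSN8810.DEPT (DEPTNO));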
Indexes
Each index is based on the values of data in one or more columns of a table. After you create an index, DB2 maintains the index, but you can perform necessary maintenance such as reorganizing it or recovering the index. Indexes take up physical storage in index spaces. Each index occupies its own index space. The main purposes of indexes are:
v To improve performance. Access to data is often faster with an index than without.
v To ensure that a row is unique. For example, a unique index on the employee table ensures that no two employees have the same employee number.
Except for changes in performance, users of the table are unaware that an index is in use. DB2 decides whether to use the index to access the table. You can influence how indexes affect performance when you calculate the storage size of an index and determine what type of index to use. An index can be partitioning, nonpartitioning, or clustering. For example, you can apportion data by last names, perhaps using one partition for each letter of the alphabet. Base your choice of a partitioning scheme on how an application accesses data, how much data you have, and how large you expect the total amount of data to grow.
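For example, a unique index on the employee number, like the one mentioned above, might be defined with a statement similar to this sketch:

   CREATE UNIQUE INDEX DSN8810.XEMP1
     ON DSN8810.EMP (EMPNO ASC);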
Views
Views allow you to shield some table data from end users. A view can be based on other views or on a combination of views and tables. When you define a view, DB2 stores the definition of the view in the DB2 catalog. However, DB2 does not store any data for the view itself, because the data already exists in the base table or tables.
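For example, a view that exposes only three columns of the sample department table might be defined with a sketch like the following:

   CREATE VIEW DSN8810.VDEPT AS
     SELECT DEPTNO, DEPTNAME, MGRNO
       FROM DSN8810.DEPT;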
System structures
DB2 system structures described in this section include:
v DB2 catalog
v DB2 directory on page 8
v Active and archive logs on page 8
v Bootstrap data set (BSDS) on page 9
v Buffer pools on page 9
v Data definition control support database on page 9
v Resource limit facility database on page 10
v Work file database on page 10
v TEMP database on page 10
In addition, Parallel Sysplex data sharing uses shared system structures.
DB2 catalog
The DB2 catalog consists of tables of data about everything defined to the DB2 system, including table spaces, indexes, tables, copies of table spaces and indexes, storage groups, and so forth. The system database DSNDB06 contains the DB2 catalog. When you create, alter, or drop any structure, DB2 inserts, updates, or deletes rows of the catalog that describe the structure and tell how the structure relates to other structures. For example, SYSIBM.SYSTABLES is one catalog table that records information when a table is created. DB2 inserts a row into SYSIBM.SYSTABLES that includes the table name, its owner, its creator, and the name of its table space and its database. Because the catalog consists of DB2 tables in a DB2 database, authorized users can use SQL statements to retrieve information from it.
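For example, a query such as the following (a simple illustration) lists the name, creator, and table space of each table in the default database:

   SELECT NAME, CREATOR, TSNAME
     FROM SYSIBM.SYSTABLES
     WHERE DBNAME = 'DSNDB04'
       AND TYPE = 'T';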
The communications database (CDB) is part of the DB2 catalog. The CDB consists of a set of tables that establish conversations with remote database management systems (DBMSs). The distributed data facility (DDF) uses the CDB to send and receive distributed data requests.
DB2 directory
The DB2 directory contains information that DB2 uses during normal operation. You cannot access the directory using SQL, although much of the same information is contained in the DB2 catalog, for which you can submit queries. The structures in the directory are not described in the DB2 catalog. The directory consists of a set of DB2 tables stored in five table spaces in system database DSNDB01. Each of the table spaces listed in Table 2 is contained in a VSAM linear data set.
Table 2. Directory table spaces
SCT02 (skeleton cursor, SKCT)
   Contains the internal form of SQL statements contained in an application. When you bind a plan, DB2 creates a skeleton cursor table in SCT02.
SPT01 (skeleton package)
   Similar to SCT02, except that the skeleton package table is created when you bind a package.
SYSLGRNX (log range)
   Tracks the opening and closing of table spaces, indexes, or partitions. By tracking this information and associating it with relative byte addresses (RBAs) as contained in the DB2 log, DB2 can reduce recovery time by reducing the amount of log that must be scanned for a particular table space, index, or partition.
SYSUTILX (system utilities)
   Contains a row for every utility job that is running. The row stays until the utility is finished. If the utility terminates without completing, DB2 uses the information in the row when you restart the utility.
DBD01 (database descriptors)
   Contains internal information, called database descriptors (DBDs), about the databases that exist within DB2. Each database has exactly one corresponding DBD that describes the database, table spaces, tables, table check constraints, indexes, and referential relationships. A DBD also contains other information about accessing tables in the database. DB2 creates and updates DBDs whenever their corresponding databases are created or updated.
v A single active log contains as many as 31 or 93 active log data sets, depending on the format of the BSDS. If the BSDS has been installed with support for the larger BSDS or converted to support it, the maximum number of active log data sets is 93. Otherwise, the maximum number is 31.
v With dual logging, the active log has twice the capacity for active log data sets, because two identical copies of the log records are kept.
Each active log data set is a single-volume, single-extent VSAM LDS.
Buffer pools
Buffer pools are areas of virtual storage in which DB2 temporarily stores pages of table spaces or indexes. When an application program accesses a row of a table, DB2 retrieves the page containing that row and places the page in a buffer. If the needed data is already in a buffer, the application program does not have to wait for it to be retrieved from disk, significantly reducing the cost of retrieving the page. Buffer pools require monitoring and tuning. The size of buffer pools is critical to the performance characteristics of an application or group of applications that access data in those buffer pools. When you use Parallel Sysplex data sharing, buffer pools map to structures called group buffer pools. These structures reside in a special PR/SM LPAR logical partition called a coupling facility, which enables several DB2s to share information and control the coherency of data. Buffer pools reside in DB2's DBM1 primary address space, which offers the best performance. In Version 8, the maximum size of a buffer pool increases to 1 TB.
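For example, you can change the size of a buffer pool dynamically with the ALTER BUFFERPOOL command. In this sketch the buffer pool name and size are hypothetical, and the command prefix is the -DSN1 prefix used elsewhere in this chapter:

   -DSN1 ALTER BUFFERPOOL(BP1) VPSIZE(40000)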
TEMP database
The TEMP database is for declared temporary tables only. DB2 stores all declared temporary tables in this database. You can create one TEMP database for each DB2 subsystem or data sharing member.
Table 3. More information about DB2 structures
For more information about...                      See...
Shared system structures                           DB2 Data Sharing: Planning and Administration
Catalog tables                                     Appendix F of DB2 SQL Reference
Catalog, data set naming conventions               DB2 Installation Guide
CDB                                                DB2 Installation Guide
Directory, data set naming conventions             DB2 Installation Guide
Logs                                               Chapter 18, Managing the log and the bootstrap data set, on page 427
BSDS usage, functions                              Managing the bootstrap data set (BSDS) on page 441
Buffer pools, tuning                               v Chapter 27, Tuning DB2 buffer, EDM, RID, and sort pools, on page 671
                                                   v DB2 Command Reference
Group buffer pools                                 DB2 Data Sharing: Planning and Administration
Data definition control support database           Chapter 10, Controlling access through a closed application, on page 215
RLST                                               Resource limit facility (governor) on page 722
Work file and TEMP database, defining              Volume 2 of DB2 SQL Reference
Commands
The commands are divided into the following categories:
v DSN command and subcommands
v DB2 commands
v IMS commands
v CICS attachment facility commands
v IRLM commands
v TSO CLIST commands
To enter a DB2 command from an authorized z/OS console, you use a subsystem command prefix (composed of 1 to 8 characters) at the beginning of the command. The default subsystem command prefix is -DSN1, which you can change when you install or migrate DB2. Example: The following command starts the DB2 subsystem that is associated with the command prefix -DSN1:
-DSN1 START DB2
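Other DB2 commands follow the same pattern. For example, this command (an illustration) displays the status of the default database:

   -DSN1 DISPLAY DATABASE(DSNDB04)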
Utilities
You use utilities to perform many of the tasks required to maintain DB2 data. Those tasks include loading a table, copying a table space, or recovering a database to a previous point in time.
The utilities run as batch jobs. DB2 interactive (DB2I) provides a simple way to prepare the job control language (JCL) for those jobs and to perform many other operations by entering values on panels. DB2I runs under TSO using ISPF services. A utility control statement tells a particular utility what task to perform.
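For example, a control statement for the COPY utility might look like the following sketch, which takes a full image copy of the sample employee table space (the ddname SYSCOPY is assumed to be defined in the job's JCL):

   COPY TABLESPACE DSN8D81A.DSN8S81E
        COPYDDN(SYSCOPY)
        FULL YES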
High availability
It is not necessary to start or stop DB2 often. DB2 continually adds function to improve availability, especially in the following areas:
v Daily operations and tuning
v Backup and recovery
v Restart on page 13
Restart
A key to the perception of high availability is getting the DB2 subsystem back up and running quickly after an unplanned outage.
v Some restart processing can occur concurrently with new work. Also, you can choose to postpone some processing.
v During a restart, DB2 applies data changes from the log. This technique ensures that data changes are not lost, even if some data was not written at the time of the failure. Some of the process of applying log changes can run in parallel.
v You can register DB2 with the Automatic Restart Manager of z/OS. This facility automatically restarts DB2 if it goes down as a result of a failure.
Address spaces
DB2 uses several different address spaces for the following purposes:
Database services
   ssnmDBM1 manipulates most of the structures in user-created databases. In Version 8, storage areas such as buffer pools reside above the 2-GB bar in the ssnmDBM1 address space. With 64-bit virtual addressing to access these storage areas, buffer pools can scale to extremely large sizes.
System services
   ssnmMSTR performs a variety of system-related functions.
Distributed data facility
   ssnmDIST provides support for remote requests.
IRLM (internal resource lock manager)
   IRLMPROC controls DB2 locking.
DB2-established
   ssnmSPAS, for stored procedures, provides an isolated execution environment for user-written SQL programs at a DB2 server.
WLM-established
   Zero to many address spaces for stored procedures and user-defined functions. WLM-established address spaces are handled in order of priority and are isolated from other stored procedures or user-defined functions that run in other address spaces.
User address spaces
   At least one, possibly several, of the following types of user address spaces:
   v TSO
   v Batch
   v CICS
   v IMS dependent region
   v IMS control region
Administering IRLM
IRLM requires some control and monitoring. The external interfaces to the IRLM include:
v Installation. Install IRLM when you install DB2. Consider that locks take up storage, and adequate storage for IRLM is crucial to the performance of your system. Another important performance item is to make the priority of the IRLM address space higher than that of all the DB2 address spaces.
v Commands. Some z/OS commands specifically for IRLM let you modify parameters, display information about the status of the IRLM and its storage use, and start and stop IRLM.
v Tracing. DB2's trace facility gives you the ability to trace lock interactions. You can use z/OS trace commands or IRLMPROC options to control diagnostic traces for IRLM. You normally use these traces under the direction of IBM Service.
An attachment facility provides the interface between DB2 and another environment. Figure 2 shows the z/OS attachment facilities with interfaces to DB2.
Figure 2. Attachment facilities with interfaces to DB2
The z/OS environments include:
v WebSphere
v CICS (Customer Information Control System)
v IMS (Information Management System)
v TSO (Time Sharing Option)
v Batch
The z/OS attachment facilities include:
v CICS
v IMS
v TSO
v CAF (call attachment facility)
v RRS (Resource Recovery Services)
In the TSO and batch environments, you can use the TSO, CAF, and RRS attachment facilities to access DB2.
WebSphere
WebSphere products that are integrated with DB2 include WebSphere Application Server, WebSphere Studio, and Transaction Servers & Tools. In the WebSphere environment, you can use the RRS attachment facility.
CICS
The Customer Information Control System (CICS) attachment facility provided with the CICS transaction server lets you access DB2 from CICS. After you start DB2, you can operate DB2 from a CICS terminal. You can start and stop CICS and DB2 independently, and you can establish or terminate the connection between them at any time. You also have the option of allowing CICS to connect to DB2 automatically. The CICS attachment facility also provides CICS applications with access to DB2 data while operating in the CICS environment. CICS applications, therefore, can access both DB2 data and CICS data. In case of system failure, CICS coordinates recovery of both DB2 and CICS data. CICS operations: The CICS attachment facility uses standard CICS command-level services where needed. Examples:
EXEC CICS WAIT
EXEC CICS ABEND
A portion of the CICS attachment facility executes under the control of the transaction issuing the SQL requests. Therefore, these calls for CICS services appear to be issued by the application transaction. With proper planning, you can include DB2 in a CICS XRF recovery scenario.
Application programming with CICS: Programmers writing CICS command-level programs can use the same data communication coding techniques to write the data communication portions of application programs that access DB2 data. Only the database portion of the programming changes. For the database portions, programmers use SQL statements to retrieve or modify data in DB2 tables. To a CICS terminal user, application programs that access both CICS and DB2 data appear identical to application programs that access only CICS data. DB2 supports this cross-product programming by coordinating recovery resources with those of CICS. CICS applications can therefore access CICS-controlled resources as well as DB2 databases. Function shipping of SQL requests is not supported. In a CICS multi-region operation (MRO) environment, each CICS address space can have its own attachment to the DB2 subsystem. A single CICS region can be connected to only one DB2 subsystem at a time.
System administration and operation with CICS: An authorized CICS terminal operator can issue DB2 commands to control and monitor both the attachment facility and DB2 itself. Authorized terminal operators can also start and stop DB2 databases. Even though you perform DB2 functions through CICS, you need the TSO attachment facility and ISPF to take advantage of the online functions supplied with DB2 to install and customize your system. You also need the TSO attachment to bind application plans and packages.
IMS
The Information Management System (IMS) attachment facility allows you to access DB2 from IMS. The IMS attachment facility receives and interprets requests for access to DB2 databases using exits provided by IMS subsystems. Usually, IMS connects to DB2 automatically with no operator intervention. In addition to Data Language I (DL/I) and Fast Path calls, IMS applications can make calls to DB2 using embedded SQL statements. In case of system failure, IMS coordinates recovery of both DB2 and IMS data. With proper planning, you can include DB2 in an IMS XRF recovery scenario.
Application programming with IMS: With the IMS attachment facility, DB2 provides database services for IMS dependent regions. DL/I batch support allows users to access both IMS data (DL/I) and DB2 data in the IMS batch environment, which includes:
v Access to DB2 and DL/I data from application programs.
v Coordinated recovery through a two-phase commit process.
v Use of the IMS extended restart (XRST) and symbolic checkpoint (CHKP) calls by application programs to coordinate recovery with IMS, DB2, and generalized sequential access method (GSAM) files.
IMS programmers writing the data communication portion of application programs do not need to alter their coding technique when accessing DB2; only the database portions of the application programs change. For the database portions, programmers code SQL statements to retrieve or modify data in DB2 tables. To an IMS terminal user, IMS application programs that access DB2 appear identical to IMS application programs that access only IMS data. DB2 supports this cross-product programming by coordinating database recovery services with those of IMS. IMS programs use the same synchronization and rollback calls in application programs that access DB2 data as they use in IMS DB/DC application programs that access DL/I data.
Another aid for cross-product programming is the DataPropagator NonRelational (DPropNR) licensed program. DPropNR allows automatic updates to DB2 tables when corresponding information in an IMS database is updated, and it allows automatic updates to an IMS database when a DB2 table is updated.
System administration and operation with IMS: An authorized IMS terminal operator can issue DB2 commands to control and monitor DB2. The terminal operator can also start and stop DB2 databases. Even though you perform DB2 functions through IMS, you need the TSO attachment facility and ISPF to take advantage of the online functions supplied with DB2 to install and customize your system. You also need the TSO attachment facility to bind application plans and packages.
TSO
The Time Sharing Option (TSO) attachment facility is required for binding application plans and packages and for executing several online functions that are provided with DB2. Using the TSO attachment facility, you can access DB2 by running in either foreground or batch. You gain foreground access through a TSO terminal; you gain batch access by invoking the TSO terminal monitor program (TMP) from a batch job. The following two command processors are available:
v DSN command processor. Runs as a TSO command processor and uses the TSO attachment facility.
v DB2 Interactive (DB2I). Consists of Interactive System Productivity Facility (ISPF) panels. ISPF has an interactive connection to DB2, which invokes the DSN command processor. Using DB2I panels, you can perform most DB2 tasks interactively, such as running SQL statements, commands, and utilities.
Whether you access DB2 in foreground or batch, attaching through the TSO attachment facility and the DSN command processor makes access easier. DB2 subcommands that execute under DSN are subject to the command size limitations as defined by TSO. TSO allows authorized DB2 users or jobs to create, modify, and maintain databases and application programs. You invoke the DSN processor from the foreground by issuing a command at a TSO terminal. From batch, first invoke TMP from within a batch job, and then pass commands to TMP in the SYSTSIN data set.
After DSN is running, you can issue DB2 commands or DSN subcommands. You cannot issue a -START DB2 command from within DSN. If DB2 is not running, DSN cannot establish a connection to it; a connection is required so that DSN can transfer commands to DB2 for processing.
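For example, a batch job might pass statements like the following (the subsystem name, program, and plan are hypothetical) to the TMP through the SYSTSIN data set:

   DSN SYSTEM(DSN1)
   RUN PROGRAM(MYPROG) PLAN(MYPLAN)
   END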
CAF
Most TSO applications must use the TSO attachment facility, which invokes the DSN command processor. Together, DSN and TSO provide services such as automatic connection to DB2, attention key support, and translation of return codes into error messages. However, when using DSN services, your application must run under the control of DSN. The call attachment facility (CAF) provides an alternative connection for TSO and batch applications needing tight control over the session environment. Applications using CAF can explicitly control the state of their connections to DB2 by using connection functions that CAF supplies.
RRS
z/OS Resource Recovery Services (RRS) is a newer implementation of CAF with additional capabilities. RRS is a feature of z/OS that coordinates commit processing of recoverable resources in a z/OS system. DB2 supports use of these services for DB2 applications that use the RRS attachment facility provided with DB2. Use the RRS attachment to access resources such as SQL tables, DL/I databases, MQSeries messages, and recoverable VSAM files within a single transaction scope. The RRS attachment is required for stored procedures that run in a WLM-established address space.
Use stored procedures to reduce processor and elapsed time costs of distributed access. A stored procedure is a user-written SQL program that a requester can invoke at the server. By encapsulating the SQL, many fewer messages flow across the wire. Local DB2 applications can also use stored procedures to take advantage of the ability to encapsulate SQL that is shared among different applications. The decision to access distributed data has implications for many DB2 activities: application programming, data recovery, authorization, and so on.
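As a sketch (the names are hypothetical, and preparing an SQL procedure involves additional program preparation steps), a stored procedure might be defined and invoked as follows:

   CREATE PROCEDURE DSN8810.UPDSAL
     (IN EMPNUM CHAR(6),
      IN RAISE  DECIMAL(9,2))
     LANGUAGE SQL
     UPDATE DSN8810.EMP
       SET SALARY = SALARY + RAISE
       WHERE EMPNO = EMPNUM;

   CALL DSN8810.UPDSAL('000010', 1000.00);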
Much authorization to DB2 objects can be controlled directly from the Security Server. An exit routine (a program that runs as an extension of DB2) that is shipped with the z/OS Security Server lets you centralize access control.
Table 5. More information about the z/OS environment (continued)
For more information about...          See...
CICS connections                       Chapter 17, Monitoring and controlling DB2 and its connections, on page 367
CICS administration                    DB2 Installation Guide
IMS XRF                                v Extended recovery facility (XRF) toleration on page 476
                                       v IMS Administration Guide: System
DL/I batch                             Volume 2 of DB2 Application Programming and SQL Guide
DataPropagator NonRelational           IMS DataPropagator: An Introduction
ISPF                                   Volume 2 of DB2 Application Programming and SQL Guide
Distributed data                       Volume 1 of DB2 Application Programming and SQL Guide
Parallel Sysplex data sharing          DB2 Data Sharing: Planning and Administration
Advanced topics include:
v Creating storage groups and managing DB2 data sets, which explores your options for allocating and managing data storage for table spaces and indexes
v Implementing your database design, which covers creating table spaces, tables, indexes, referential constraints, and views
v Loading data into DB2 tables, which provides an overview of the methods that you can use to load data into your DB2 tables
v Altering your database design, which addresses altering DB2 storage groups, databases, table spaces, tables, indexes, views, stored procedures, and user-defined functions
v Estimating disk storage for user data, which includes a discussion of the factors that affect storage and calculations for the space requirements of tables, dictionaries, and indexes
For more information about...                                      See...
Maintaining data integrity, including implications for the
following utilities: COPY, QUIESCE, RECOVER, and REPORT            Part 2 of DB2 Utility Guide and Reference
Compressing data in a table space or a partition                   Part 5 (Volume 2) of DB2 Administration Guide
Designing and using materialized query tables                      Part 5 (Volume 2) of DB2 Administration Guide
them with access method service commands that support VSAM control-interval (CI) processing (for example, IMPORT and EXPORT).
   Exception: You can defer the allocation of data sets for table spaces and index spaces by specifying the DEFINE NO clause on the associated statement (CREATE TABLESPACE and CREATE INDEX), which also must specify the USING STOGROUP clause. For more information about deferring data set allocation, see either Deferring allocation of DB2-managed data sets on page 30 or Chapter 5 of DB2 SQL Reference.
v When a table space is dropped, DB2 automatically deletes the associated data sets.
v When a data set in a segmented or simple table space reaches its maximum size of 2 GB, DB2 might automatically create a new data set. The primary data set allocation is obtained for each new data set. When needed, DB2 can extend individual data sets. For more information, see Extending DB2-managed data sets on page 31.
v When you create or reorganize a table space that has associated data sets, DB2 deletes and then redefines them, reclaiming fragmented space. However, when you run REORG with the REUSE option and SHRLEVEL NONE, REORG resets and reuses DB2-managed data sets without deleting and redefining them. If the size of your table space is not changing, using the REUSE parameter could be more efficient.
   Exception: When reorganizing a LOB table space, DB2 does not delete and redefine the first data set that was allocated for the table space. If the REORG results in empty data sets beyond the first data set, DB2 deletes those empty data sets.
v When you want to move data sets to a new volume, you can alter the volumes list in your storage group. DB2 automatically relocates your data sets during the utility operations that build or rebuild a data set (LOAD REPLACE, REORG, REBUILD, and RECOVER). Note that if you use the REUSE option, DB2 does not delete and redefine the data sets and therefore does not move them. For a LOB table space, you can alter the volumes list in your storage group, and DB2 automatically relocates your data sets during the utility operations that build or rebuild a data set (LOAD REPLACE and RECOVER). To move user-defined data sets, you must delete and redefine the data sets in another location. For information about defining your own data sets, see Managing your own data sets on page 37.
Control interval sizing: A control interval is a fixed-length area of disk in which VSAM stores records and creates distributed free space. It is the unit of information that VSAM transmits to or from disk. DB2 page sets are defined as VSAM linear data sets. Prior to Version 8, DB2 defined all data sets with VSAM control intervals that were 4 KB in size. Beginning in Version 8, DB2 can define data sets with variable VSAM control intervals. One of the biggest benefits of this change is an improvement in query processing performance. The VARY DS CONTROL INTERVAL parameter on installation panel DSNTIP7 allows you to control whether DB2-managed data sets have variable VSAM control intervals:
v A value of YES indicates that a DB2-managed data set is created with a VSAM control interval that corresponds to the size of the buffer pool that is used for the table space. This is the default value.
v A value of NO indicates that a DB2-managed data set is created with a fixed VSAM control interval of 4 KB, regardless of the size of the buffer pool that is used for the table space.
Table 7 shows the default and compatible control interval sizes for each table space page size. For example, a table space with pages 16 KB in size can have a VSAM control interval of 4 KB or 16 KB. Control interval sizing has no impact on indexes; index pages are always 4 KB in size.
Table 7. Default and compatible control interval sizes
Table space page size   Default control interval size   Compatible control interval sizes
4 KB                    4 KB                            4 KB
8 KB                    8 KB                            4 KB, 8 KB
16 KB                   16 KB                           4 KB, 16 KB
32 KB                   32 KB                           4 KB, 32 KB
Thereafter:
v On CREATE TABLESPACE and CREATE INDEX statements, do not specify a value for the PRIQTY option.
v On ALTER TABLESPACE and ALTER INDEX statements, specify a value of -1 for the PRIQTY option.
Primary space allocation quantities do not exceed DSSIZE or PIECESIZE clause values.
Exception: If the OPTIMIZE EXTENT SIZING parameter (MGEXTSZ) on installation panel DSNTIP7 is set to YES and the table space or index space has a SECQTY setting of greater than zero, the primary space allocation of each subsequent data set is the larger of the SECQTY setting and the value that is derived from a sliding scale algorithm. See Secondary space allocation for information about the sliding scale algorithm.
For those situations in which the default primary quantity value is not large enough, you can specify a larger value for the PRIQTY option when creating or altering table spaces and indexes. DB2 always uses a PRIQTY value if one is explicitly specified. If you want to prevent DB2 from using the default value for primary space allocation of table spaces and indexes, specify a non-zero value for the TABLE SPACE ALLOCATION and INDEX SPACE ALLOCATION parameters on installation panel DSNTIP7.
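For example, to let DB2 choose the primary quantity for an existing table space (the names here are hypothetical):

   ALTER TABLESPACE MYDB.MYTS PRIQTY -1;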
v It prevents very large allocations for the remaining extents, which would likely cause fragmentation.
v It does not require users to specify SECQTY values when creating and altering table spaces and index spaces.
v It is theoretically possible to reach maximum data set size without running out of secondary extents.
In the case of severe DASD fragmentation, it can take up to 5 extents to satisfy a logical extent request. In this situation, the data set does not reach the theoretical data set size.
If you installed DB2 on z/OS Version 1 Release 7 or later, you can modify the Extent Constraint Removal option. By setting the Extent Constraint Removal option to YES in the SMS data class, the maximum number of extents can be up to 7257. However, the limits of 123 extents per volume and a maximum volume count of 59 per data set remain valid. For more information, see Using VSAM extent constraint removal in the z/OS V1R7 guide DFSMS: Using the New Functions (order number SC26-7473-02).
Maximum allocation is shown in Table 9. This table assumes that the initial extent that is allocated is one cylinder in size.
Table 9. Maximum allocation of secondary extents
Maximum data set size, in GB   Maximum allocation, in cylinders   Extents required to reach full size
1                              127                                54
2                              127                                75
4                              127                                107
8                              127                                154
16                             127                                246
32                             559                                172
64                             559                                255
DB2 uses a sliding scale for secondary extent allocations of table spaces and indexes when:
v You do not specify a value for the SECQTY option of a CREATE TABLESPACE or CREATE INDEX statement
v You specify a value of -1 for the SECQTY option of an ALTER TABLESPACE or ALTER INDEX statement
Otherwise, DB2 always uses a SECQTY value for secondary extent allocations, if one is explicitly specified.
Exception: For those situations in which the calculated secondary quantity value is not large enough, you can specify a larger value for the SECQTY option when creating or altering table spaces and indexes. However, in the case where the OPTIMIZE EXTENT SIZING parameter is set to YES and you specify a value for the SECQTY option, DB2 uses the value of the SECQTY option to allocate a secondary extent only if the value of the option is larger than the value that is derived from the sliding scale algorithm. The calculation that DB2 uses to make this determination is:
Actual secondary extent size = max ( min ( ss_extent, MaxAlloc ), SECQTY )
In this calculation, ss_extent represents the value that is derived from the sliding scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the maximum potential data set size. This approach allows you to reach the maximum page set size faster. Otherwise, DB2 uses the value that is derived from the sliding scale algorithm.
If you do not provide a value for the secondary space allocation quantity, DB2 calculates a secondary space allocation value equal to 10% of the primary space allocation value, subject to the following conditions:
v The value cannot be less than 127 cylinders for data sets that range in initial size from less than 1 GB to 16 GB, and cannot be less than 559 cylinders for 32 GB and 64 GB data sets.
v The value cannot be more than the value that is derived from the sliding scale algorithm.
The calculation that DB2 uses for the secondary space allocation value is:
Actual secondary extent size = max ( 0.1 × PRIQTY, min ( ss_extent, MaxAlloc ) )
In this calculation, ss_extent represents the value that is derived from the sliding scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the maximum potential data set size. Secondary space allocation quantities do not exceed DSSIZE or PIECESIZE clause values.
If you do not want DB2 to extend a data set, you can specify a value of 0 for the SECQTY option. Specifying 0 is a useful way to prevent DSNDB07 work files from growing out of proportion. If you want to prevent DB2 from using the sliding scale for secondary extent allocations of table spaces and indexes, specify a value of NO for the OPTIMIZE EXTENT SIZING parameter on installation panel DSNTIP7.
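For example, the following statements (with hypothetical names) put secondary allocation for a table space under the sliding scale algorithm, or prevent extension entirely:

   ALTER TABLESPACE MYDB.MYTS SECQTY -1;
   ALTER TABLESPACE MYDB.MYTS SECQTY 0;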
Migrating to DFSMShsm
If you decide to use DFSMShsm for your DB2 data sets, you should develop a migration plan with your system administrator. With user-managed data sets, you can specify DFSMShsm classes on the Access Method Services DEFINE command. With DB2 storage groups, you need to develop automatic class selection routines.
General-use Programming Interface
To allow DFSMShsm to manage your DB2 storage groups, you can use one or more asterisks as volume IDs in your CREATE STOGROUP or ALTER STOGROUP statement, as shown here:
CREATE STOGROUP G202 VOLUMES ('*') VCAT DB2SMST;
End of General-use Programming Interface
This example causes all database data set allocations and definitions to use nonspecific selection through DFSMShsm filtering services. When you use DFSMShsm and DB2 storage groups, you can use the system parameters SMSDCFL and SMSDCIX to assign table spaces and indexes to different DFSMShsm data classes.
v SMSDCFL specifies a DFSMShsm data class for table spaces. If you assign a value to SMSDCFL, DB2 specifies that value when it uses Access Method Services to define a data set for a table space.
v SMSDCIX specifies a DFSMShsm data class for indexes. If you assign a value to SMSDCIX, DB2 specifies that value when it uses Access Method Services to define a data set for an index.
Before you set the data class system parameters, you need to do two things:
v Define the data classes for your table space data sets and index data sets.
v Code the SMS automatic class selection (ACS) routines to assign indexes to one SMS storage class and to assign table spaces to a different SMS storage class.
For more information about creating data classes, see z/OS DFSMS: Implementing System-Managed Storage.
For detailed instructions on how to create storage groups, see the z/OS DFSMSdss Storage Administration Reference. The DB2 BACKUP SYSTEM and RESTORE SYSTEM utilities invoke DFSMShsm to back up and restore the copy pools. DFSMShsm interacts with DFSMSsms to determine the volumes that belong to a given copy pool so that the volume-level backup and restore functions can be invoked. For information about the BACKUP SYSTEM and RESTORE SYSTEM utilities, see the DB2 Utility Guide and Reference. For information about recovery procedures that use these utilities, see System-level point-in-time recovery on page 515.
v You have a large linear table space on several data sets. If you manage your own data sets, you can better control the placement of individual data sets on the volumes (although you can keep a similar type of control by using single-volume DB2 storage groups).
v You want to prevent deleting a data set within a specified time period, by using the TO and FOR options of the Access Method Services DEFINE and ALTER commands. You can create and manage the data set yourself, or you can create the data set with DB2 and use the ALTER command of Access Method Services to change the TO and FOR options.
v You are concerned about recovering dropped table spaces. Your own data set is not automatically deleted when a table space is dropped, making it easier to reclaim the data.
To define the required data sets, use DEFINE CLUSTER; to add secondary volumes to expanding data sets, use ALTER ADDVOLUMES; and to delete data sets, use DELETE CLUSTER. You must define a data set for each of these items:
v A simple or segmented table space
v A partition of a partitioned table space
v A partition of a partitioned index
Furthermore, as table spaces and index spaces expand, you might need to provide additional data sets. To take advantage of parallel I/O streams when doing certain read-only queries, consider spreading large table spaces over different disk volumes that are attached on separate channel paths.
The data set names take the form catname.DSNDBx.dbname.psname.y0001.znnn, where:
catname
   Integrated catalog name or alias (up to eight characters). Use the same name or alias here as in the USING VCAT clause of the CREATE TABLESPACE and CREATE INDEX statements.
x
   C (for VSAM clusters) or D (for VSAM data components).
dbname
   DB2 database name. If the data set is for a table space, dbname must be the name given in the CREATE TABLESPACE statement. If the data set is for an index, dbname must be the name of the database containing the base table. If you are using the default database, dbname must be DSNDB04.
psname
   Table space name or index name. This name must be unique within the database. You use this name on the CREATE TABLESPACE or CREATE INDEX statement. (You can use a name longer than eight characters on the CREATE INDEX statement, but the first eight characters of that name must be the same as in the data set's psname.)
y0001
   Instance qualifier for the data set. Define one data set for the table space or index with a value of I for y if one of the following conditions is true:
   v You plan to run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE without the FASTSWITCH YES option.
   v You do not plan to run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE.
   Define two data sets if you plan to run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE, using the FASTSWITCH YES option: one data set with a value of I for y, and one with a value of J for y. For more information about defining data sets for REORG, see Part 2 of DB2 Utility Guide and Reference.
znnn
   Data set number. The first digit z of the data set number is represented by the letter A, B, C, D, or E, which corresponds to the value 0, 1, 2, 3, or 4 as the first digit of the partition number. For partitioned table spaces, if the partition number is less than 1000, the data set number is Annn in the data set name (for example, A999 represents partition 999). For partitions 1000 to 1999, the data set number is Bnnn (for example, B000 represents partition 1000). For partitions 2000 to 2999, the data set number is Cnnn. For partitions 3000 to 3999, the data set number is Dnnn. For partitions 4000 up to a maximum of 4096, the data set number is Ennn. The naming convention for data sets that you define for a partitioned index is the same as the naming convention for other partitioned objects.
   For simple or segmented table spaces, the number is 001 (preceded by A) for the first data set. When little space is available, DB2 issues a warning message. If the size of the data set for a simple or a segmented table space approaches the maximum limit, define another data set with the same name as the first data set and the number 002. The next data set will be 003, and so on. You can reach the VSAM extent limit for a data set before you reach the size limit for a partitioned or a nonpartitioned table space. If this happens, DB2 does not extend the data set.
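Putting the parts together: for the first partition of a hypothetical partitioned table space MYTS in database MYDB, cataloged under the alias DSNCAT, the cluster name would be:

   DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001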
3. Use the DEFINE CLUSTER command to define the size of the primary and secondary extents of the VSAM cluster. If you specify zero for the secondary extent size, data set extension does not occur.
4. Define the data sets as LINEAR. Do not use RECORDSIZE or CONTROLINTERVALSIZE; these attributes are invalid.
5. Use the REUSE option. You must define the data set as REUSE before running the DSN1COPY utility.
6. Use SHAREOPTIONS(3,3).
The DEFINE CLUSTER command has many optional parameters that do not apply when DB2 uses the data set. If you use the parameters SPANNED, EXCEPTIONEXIT, SPEED, BUFFERSPACE, or WRITECHECK, VSAM applies them to your data set, but DB2 ignores them when it accesses the data set. The value of the OWNER parameter for clusters that are defined for storage groups is the first SYSADM authorization ID specified at installation.
When you drop indexes or table spaces for which you defined the data sets, you must delete the data sets unless you want to reuse them. To reuse a data set, first commit, and then create a new table space or index with the same name. When DB2 uses the new object, it overwrites the old information with new information, which destroys the old data. Likewise, if you delete data sets, you must drop the corresponding table spaces and indexes; DB2 does not drop these objects automatically.
DEFINE CLUSTER -
  (NAME(DSNCAT.DSNDBC.DSNDB06.SYSUSER.I0001.A001) -
  LINEAR -
  REUSE -
  VOLUMES(DSNV01) -
  RECORDS(100 100) -
  SHAREOPTIONS(3 3)) -
  DATA -
  (NAME(DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A001)) -
  CATALOG(DSNCAT)
Figure 3. Defining a VSAM data set for the SYSUSER table space
For user-managed data sets, you must pre-allocate shadow data sets prior to running REORG with SHRLEVEL CHANGE, REORG with SHRLEVEL REFERENCE, or CHECK INDEX with SHRLEVEL CHANGE against the table space. You can specify the MODEL option for the DEFINE CLUSTER command so that the shadow is created like the original data set, as shown in Figure 4 on page 41.
In Figure 4, the instance qualifiers x and y are distinct and are equal to either I or J. You must determine the correct instance qualifier to use for a shadow data set by querying the DB2 catalog for the database and table space. For more information about defining data sets for REORG, see Chapter 2 of DB2 Utility Guide and Reference. For more information about defining and managing VSAM data sets, see DFSMS/MVS: Access Method Services for the Integrated Catalog.
For more information about...                                      See...
Basic concepts in implementing a database design for DB2
Universal Database for z/OS (choosing names for DB2 objects;
implementing databases, table spaces, tables, indexes,
referential constraints, and views; and reorganizing data)         The Official Introduction to DB2 UDB for z/OS
Details about SQL statements used to implement a database
design (CREATE and DECLARE, for example)                           DB2 SQL Reference
The following topics provide additional information:
v Implementing databases
v Implementing table spaces on page 44
v Implementing tables on page 48
v Implementing indexes on page 54
v Using schemas on page 58
Implementing databases
In DB2 UDB for z/OS, a database is a logical collection of table spaces and index spaces. Consider the following factors when deciding whether to define a new database for a new set of objects: v You can start and stop an entire database as a unit; you can display the statuses of all its objects by using a single command that names only the database. Therefore, place a set of tables that are used together into the same database. (The same database holds all indexes on those tables.) v Some operations lock an entire database. For example, some phases of the LOAD utility prevent some SQL statements (CREATE, ALTER, and DROP) from using the same database concurrently. Therefore, placing many unrelated tables in a single database is often inconvenient.
When one user is executing a CREATE, ALTER, or DROP statement for a table, no other user can access the database that contains that table. QMF users, especially, might do a great deal of data definition; the QMF operations SAVE DATA and ERASE data-object are accomplished by creating and dropping DB2 tables. For maximum concurrency, create a separate database for each QMF user.
v The internal database descriptors (DBDs) might become inconveniently large; Part 2 of DB2 Installation Guide contains some calculations showing how the size depends on the number of columns in a table. DBDs grow as new objects are defined, but they do not immediately shrink when objects are dropped; the DBD space for a dropped object is not reclaimed until the MODIFY RECOVERY utility is used to delete records of obsolete copies from SYSIBM.SYSCOPY. DBDs occupy storage and are the objects of occasional input and output operations. Therefore, limiting the size of DBDs is another reason to define new databases. The MODIFY utility is described in Part 2 of DB2 Utility Guide and Reference.
If you use declared temporary tables, you must define a database that is defined AS TEMP (the TEMP database). DB2 stores all declared temporary tables in the TEMP database. The majority of the factors described in this section do not apply to the TEMP database. For details about declared temporary tables, see Distinctions between DB2 base tables and temporary tables on page 48.
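As a sketch (the database, table space, and storage group names are hypothetical), a TEMP database and a segmented table space for it might be created as follows:

   CREATE DATABASE TEMPDB AS TEMP;

   CREATE TABLESPACE TEMPTS IN TEMPDB
     USING STOGROUP SG1
     SEGSIZE 32;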
You can create simple, segmented, partitioned, and LOB table spaces. For detailed information about CREATE TABLESPACE, see Chapter 5 of DB2 SQL Reference.
remember that some SQL operations, such as joins, can create a result row that does not fit in a 4-KB page. Therefore, having at least one work file that has 32-KB pages is recommended. (Work files cannot use 8-KB or 16-KB pages.)
v When you can achieve higher density on disk by choosing a larger page size. For example, only one 2100-byte record can be stored in a 4-KB page, which wastes almost half of the space. However, storing the record in a 32-KB page can significantly reduce this waste. The downside with this approach is the potential of incurring higher buffer pool storage costs or higher I/O costs; if you only touch a small number of rows, you are bringing a bigger chunk of data from disk into the buffer pool. Using 8-KB or 16-KB page sizes can let you store more data on your disk with less impact on I/O and buffer pool storage costs. If you use a larger page size and access is random, you might need to go back and increase the size of the buffer pool to achieve the same read-hit ratio you do with the smaller page size.
v When a larger page size can reduce data sharing overhead. One way to reduce the cost of data sharing is to reduce the number of times the coupling facility must be accessed. Particularly for sequential processing, larger page sizes can reduce this number. More data can be returned on each access of the coupling facility, and fewer locks must be taken on the larger page size, further reducing coupling facility interactions. If data is returned from the coupling facility, each access that returns more data is more costly than those that return smaller amounts of data, but, because the total number of accesses is reduced, coupling facility overhead is reduced. For random processing, using an 8-KB or 16-KB page size instead of a 32-KB page size might improve the read-hit ratio to the buffer pool and reduce I/O resource consumption.
The maximum number of partitions for a table space depends on the page size and on the DSSIZE. The size of the table space depends on how many partitions are in the table space and on the DSSIZE. For specific information about the maximum number of partitions and the total size of the table space, given the page size and the DSSIZE, see the CREATE TABLESPACE statement in Chapter 5 of DB2 SQL Reference.
Table 13. Relationship between LOB size and data pages based on page size
LOB size        Page size   LOB data pages   % Non-LOB data or unused space
262 144 bytes   4 KB        64               1.6
                8 KB        32               3.0
                16 KB       16               5.6
                32 KB       8                11.1
4 MB            4 KB        1029             0.78
                8 KB        513              0.39
                16 KB       256              0.39
                32 KB       128              0.78
33 MB           4 KB        8234             0.76
                8 KB        4106             0.39
                16 KB       2050             0.19
                32 KB       1024             0.10
Choosing a page size based on average LOB size: If not all of your LOBs are the same size, you can still make an estimate of what page size to choose. To estimate the average size of a LOB, you need to add a percentage to account for unused space and control information. To estimate the average size of a LOB value, use the following formula:
LOB size = (average LOB length) × 1.05
Table 14 contains some suggested page sizes for LOBs with the intent to reduce the amount of I/O (getpages).
Table 14. Suggested page sizes based on average LOB length
Average LOB size (n)     Suggested page size
n ≤ 4 KB                 4 KB
4 KB < n ≤ 8 KB          8 KB
8 KB < n ≤ 16 KB         16 KB
16 KB < n                32 KB
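For example, if the average LOB length is 10 KB, the estimated LOB size is 10 KB × 1.05 = 10.5 KB, and Table 14 suggests a 16-KB page size.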
The estimates in Table 14 mean that, for example, a 17-KB LOB value stored on a 32-KB page can leave 15 KB of unused space. Again, you must analyze your data to determine what is best. General guidelines for LOBs of the same size: If your LOBs are all the same size, you can fairly easily choose a page size that uses space efficiently without sacrificing performance. For LOBs that are all the same size, consider the alternatives in Table 15 to maximize your space savings.
Table 15. Suggested page sizes when LOBs are the same size
LOB size (y)             Suggested page size
y ≤ 4 KB                 4 KB
4 KB < y ≤ 8 KB          8 KB
8 KB < y ≤ 12 KB         4 KB
12 KB < y ≤ 16 KB        16 KB
Table 15. Suggested page sizes when LOBs are the same size (continued)
LOB size (y)             Suggested page size
16 KB < y ≤ 24 KB        8 KB
24 KB < y ≤ 32 KB        32 KB
32 KB < y ≤ 48 KB        16 KB
48 KB < y                32 KB
Implementing tables
This section discusses the following topics:
v Distinctions between DB2 base tables and temporary tables
v Implementing table-controlled partitioning on page 51
Table 16. Important distinctions between DB2 base tables and DB2 temporary tables

Creation, persistence, and ability to share table descriptions:
v Base tables: The CREATE TABLE statement puts a description of the table in catalog table SYSTABLES. The table description is persistent and is shareable across application processes. The name of the table in the CREATE statement can be a two-part or three-part name. If the table name is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements.
v Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement puts a description of the table in catalog table SYSTABLES. The table description is persistent and is shareable across application processes. The name of the table in the CREATE statement can be a two-part or three-part name. If the table name is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements. The table space used by created temporary tables is reset by the following commands: START DB2, START DATABASE, and START DATABASE(dbname) SPACENAM(tsname), where dbname is the name of the database and tsname is the name of the table space.
v Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE statement does not put a description of the table in catalog table SYSTABLES. The table description is not persistent beyond the life of the application process that issued the DECLARE statement, and the description is known only to that application process. Thus, each application process could have its own possibly unique description of the same table. The name of the table in the DECLARE statement can be a two-part or three-part name. If the table name is qualified, SESSION must be used as the qualifier for the owner (the second part in a three-part name). If the table name is not qualified, DB2 implicitly uses SESSION as the qualifier. The table space used by declared temporary tables is reset by the following commands: START DB2, START DATABASE, and START DATABASE(dbname) SPACENAM(tsname), where dbname is the name of the database and tsname is the name of the table space.

Table instantiation and ability to share data:
v Base tables: The CREATE TABLE statement creates one empty instance of the table, and all application processes use that one instance of the table. The table and data are persistent.
v Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement does not create an instance of the table. The first implicit or explicit reference to the table in an OPEN, SELECT, INSERT, or DELETE operation executed by any program in the application process creates an empty instance of the given table. Each application process has its own unique instance of the table, and the instance is not persistent beyond the life of the application process.
v Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE statement creates an empty instance of the table for the application process. Each application process has its own unique instance of the table, and the instance is not persistent beyond the life of the application process.
Table 16. Important distinctions between DB2 base tables and DB2 temporary tables (continued)

References to the table in application processes:
v Base tables: References to the table name in multiple application processes refer to the same single persistent table description and same instance at the current server. If the table name being referenced is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements. The name can be a two-part or three-part name.
v Created temporary tables: References to the table name in multiple application processes refer to the same single persistent table description but to a distinct instance of the table for each application process at the current server. If the table name being referenced is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements. The name can be a two-part or three-part name.
v Declared temporary tables: References to that table name in multiple application processes refer to a distinct description and instance of the table for each application process at the current server. References to the table name in an SQL statement (other than the DECLARE GLOBAL TEMPORARY TABLE statement) must include SESSION as the qualifier (the first part in a two-part table name or the second part in a three-part name). If the table name is not qualified with SESSION, DB2 assumes the reference is to a base table.

Table privileges and authorization:
v Base tables: The owner implicitly has all table privileges on the table and the authority to drop the table. The owner's table privileges can be granted and revoked, either individually or with the ALL clause. Another authorization ID can access the table only if it has been granted appropriate privileges for the table.
v Created temporary tables: The owner implicitly has all table privileges on the table and the authority to drop the table. The owner's table privileges can be granted and revoked, but only with the ALL clause; individual table privileges cannot be granted or revoked. Another authorization ID can access the table only if it has been granted ALL privileges for the table.
v Declared temporary tables: PUBLIC implicitly has all table privileges on the table without GRANT authority and has the authority to drop the table. These table privileges cannot be granted or revoked. Any authorization ID can access the table without a grant of any privileges for the table.

Indexes, data modification, locking, logging, and recovery:
v Base tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE, and so on) are supported. Locking, logging, and recovery do apply.
v Created temporary tables: Indexes, UPDATE (searched or positioned), and DELETE (positioned only) are not supported. Locking, logging, and recovery do not apply. Work files are used as the space for the table.
v Declared temporary tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE, and so on) are supported. Some locking, logging, and limited recovery do apply. No row or table locks are acquired. Share-level locks on the table space and DBD are acquired. A segmented table lock is acquired when all the rows are deleted from the table or the table is dropped. Undo recovery (rolling back changes to a savepoint or the most recent commit point) is supported, but redo recovery (forward log recovery) is not supported.
Table 16. Important distinctions between DB2 base tables and DB2 temporary tables (continued)

Table space and database operations:
v Base tables: Table space and database operations do apply. The table can be stored in simple table spaces in default database DSNDB04 or in user-defined table spaces (simple, segmented, or partitioned) in user-defined databases.
v Created temporary tables: Table space and database operations do not apply. The table is stored in table spaces in the work file database.
v Declared temporary tables: Table space and database operations do apply. The table is stored in segmented table spaces in the TEMP database (a database that is defined AS TEMP).

Table space requirements and table size limitations:
v Base tables: The table cannot span table spaces. Therefore, the size of the table is limited by the table space size (as determined by the primary and secondary space allocation values specified for the table space's data sets) and by the shared usage of the table space among multiple users. When the table space is full, an error occurs for the SQL operation.
v Created temporary tables: The table can span work file table spaces. Therefore, the size of the table is limited by the number of available work file table spaces, the size of each table space, and the number of data set extents that are allowed for the table spaces. Unlike the other types of tables, created temporary tables do not reach size limitations as easily.
v Declared temporary tables: The table cannot span table spaces. Therefore, the size of the table is limited by the table space size (as determined by the primary and secondary space allocation values specified for the table space's data sets) and by the shared usage of the table space among multiple users. When the table space is full, an error occurs for the SQL operation.
Restriction: If you use table-controlled partitioning, you cannot specify the partitioning key and the limit key values by using the PART VALUES clause of the CREATE INDEX statement. (Note that in Version 8, the preferred syntax changed from PART VALUES to PARTITION ENDING AT.)

Recommendation: Use table-controlled partitioning instead of index-controlled partitioning. Table-controlled partitioning is a replacement for index-controlled partitioning. Table 17 lists the differences between the two partitioning methods.
Table 17. Differences between table-controlled and index-controlled partitioning

  Table-controlled partitioning             Index-controlled partitioning
  A partitioning index is not required;     A partitioning index is required;
  a clustering index is not required.       a clustering index is required.
  Multiple partitioned indexes can be       Only one partitioned index can be
  created in a table space.                 created in a table space.
  A table space partition is identified     A table space partition is identified
  by both a physical partition number       by a physical partition number.
  and a logical partition number.
  The high-limit key is always enforced.    The high-limit key is not enforced
                                            if the table space is non-large.
See Moving from index-controlled to table-controlled partitioning on page 97 for detailed information about converting to table-controlled partitioning.
Automatic conversion
DB2 automatically converts an index-controlled partitioned table space to a table-controlled partitioned table space if you perform any of the following operations:
v Use CREATE INDEX with the PARTITIONED clause to create a partitioned index on an index-controlled partitioned table space.
v Use CREATE INDEX with a PART VALUES clause and without a CLUSTER clause to create a partitioning index. DB2 stores the specified high limit key value instead of the default high limit key value.
v Use ALTER INDEX with the NOT CLUSTER clause on a partitioning index that is on an index-controlled partitioned table space.
v Use DROP INDEX to drop a partitioning index on an index-controlled partitioned table space.
v Use ALTER TABLE to add a new partition, change a partition boundary, or rotate a partition from first to last on an index-controlled partitioned table space.
In these cases, DB2 automatically converts to table-controlled partitioning but does not automatically drop any indexes. DB2 assumes that any existing indexes are useful. After the conversion to table-controlled partitioning, DB2 changes the existing high-limit key value for non-large table spaces to the highest value for the key. Beginning in Version 8, DB2 enforces the high-limit key value. By default, DB2 does not put the last partition of the table space into a REORG-pending (REORP) state. Exceptions to this rule are:
v When adding a new partition, DB2 stores the original high-limit key value instead of the default high-limit key value. If this value was not previously enforced, DB2 puts the last partition into a REORP state.
v When rotating a new partition, DB2 stores the original high-limit key value instead of the default high-limit key value. DB2 puts the last partition into a REORP state.
After the conversion to table-controlled partitioning, the SQL statements that you used to create the tables and indexes are no longer valid. For example, after dropping a partitioning index on an index-controlled partitioned table space, an attempt to re-create the index by issuing the same CREATE INDEX statement that you originally used would fail because the boundary partitions are now under the control of the table.
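As an illustration, issuing either of the following statements against an index-controlled partitioned table space triggers the automatic conversion. PARTIX is a hypothetical partitioning index name; the statements are sketches, not taken from this book's examples:

  ALTER INDEX PARTIX NOT CLUSTER;

  DROP INDEX PARTIX;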
Because the CREATE TABLE statement does not specify the order in which to put entries, DB2 puts them in ascending order by default. DB2 subsequently prevents any INSERT into the TB table of a row with a null value for partitioning column C01. If the CREATE TABLE statement had specified the key as descending, DB2 would subsequently have allowed an INSERT into the TB table of a row with a null value for partitioning column C01; DB2 would have inserted the row into partition 1. With index-controlled partitioning, DB2 does not restrict the insertion of null values into a table with nullable partitioning columns.

Example: Assume that a partitioned table space is created with the following SQL statements:
CREATE TABLESPACE TS IN DB
  USING STOGROUP SG
  NUMPARTS 4
  BUFFERPOOL BP0;
CREATE TABLE TB
  (C01 CHAR(5),
   C02 CHAR(5) NOT NULL,
   C03 CHAR(5) NOT NULL)
  IN DB.TS;

CREATE INDEX PI ON TB(C01) CLUSTER
  (PARTITION 1 ENDING AT ('10000'),
   PARTITION 2 ENDING AT ('20000'),
   PARTITION 3 ENDING AT ('30000'),
   PARTITION 4 ENDING AT ('40000'));
Regardless of the entry order, DB2 allows an INSERT into the TB table of a row with a null value for partitioning column C01. If the entry order is ascending, DB2 inserts the row into partition 4; if the entry order is descending, DB2 inserts the row into partition 1. Only if the table space is created with the LARGE keyword does DB2 prevent the insertion of a null value into the C01 column.
Implementing indexes
DB2 uses indexes not only to enforce uniqueness on column values, as for parent keys, but also to cluster data, to partition tables, to provide access paths to data for queries, and to order retrieved data without a sort. This section discusses the following topics:
v Types of indexes
v Using the NOT PADDED clause for indexes with varying-length columns on page 57
v Using indexes to avoid sorts on page 58
Types of indexes
This section describes the various types of indexes and summarizes what you should consider when creating a particular type:
v Unique indexes
v Clustering indexes on page 55
v Partitioning indexes on page 55
v Secondary indexes on page 56
This section uses a transaction table named TRANS to illustrate the various types of indexes. Assume that the table has many columns, but you are interested in only the following columns:
v ACCTID, which is the customer account ID
v STATE, which is the state of the customer location
v POSTED, which holds the date of the transaction
Unique indexes
When you define a unique index on a DB2 table, you ensure that no duplicate values of the index key exist in the table. For example, creating a unique index on the ACCTID column ensures that no duplicate values of the customer account ID are in the TRANS table:
CREATE UNIQUE INDEX IX1 ON TRANS (ACCTID);
If an index key allows nulls for some of its column values, you can use the WHERE NOT NULL clause to ensure that the nonnull values of the index key are unique.
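For example, the following sketch enforces uniqueness only among nonnull values. GUARANTOR is a hypothetical nullable column, not one of the TRANS columns listed earlier:

  CREATE UNIQUE WHERE NOT NULL INDEX IX1B
    ON TRANS (GUARANTOR);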
Unique indexes are an important part of implementing referential constraints among the tables in your DB2 database. You cannot define a foreign key unless the corresponding primary key already exists and has a unique index defined on it. For more information, see Altering a table for referential integrity on page 77 and The Official Introduction to DB2 UDB for z/OS.
Clustering indexes
When you define a clustering index on a DB2 table, you direct DB2 to insert rows into the table in the order of the clustering key values. The first index that you define on the table serves implicitly as the clustering index unless you explicitly specify CLUSTER when you create or alter another index. For example, if you first define a unique index on the ACCTID column of the TRANS table, DB2 inserts rows into the TRANS table in the order of the customer account number unless you explicitly define another index to be the clustering index. You can specify CLUSTER for any index, whether or not it is a partitioning index. For example, suppose that you want the rows of the TRANS table to be ordered by the POSTED column. Issue the statement:
CREATE INDEX IX2 ON TRANS (POSTED) CLUSTER;
For more information, see Altering the clustering index on page 93 and The Official Introduction to DB2 UDB for z/OS.
Partitioning indexes
Before DB2 Version 8, when you defined a partitioning index on a table in a partitioned table space, you specified the partitioning key and the limit key values in the PART VALUES clause of the CREATE INDEX statement. This type of partitioning is referred to as index-controlled partitioning.

Beginning with DB2 Version 8, you can define table-controlled partitioning with the CREATE TABLE statement (which is described in Implementing table-controlled partitioning on page 51). A partitioning index is then defined to be an index whose leftmost columns are the partitioning columns of the table; the index can, but need not, be partitioned. If it is partitioned, a partitioning index is partitioned in the same way as the underlying data in the table.

For example, assume that the partitioning scheme for the TRANS table is defined by the CREATE TABLE statement in Implementing table-controlled partitioning on page 51. The rows in the table are partitioned by the transaction date in the POSTED column. The following statement defines an index where the column of the index is the same as the partitioning column of the table:
CREATE INDEX IX3 ON TRANS (POSTED);
When you create a partitioning index, DB2 puts the last partition of the table space into a REORG-pending (REORP) state. A partitioning index is optional; table-controlled partitioning is a replacement for index-controlled partitioning. For more information, see Moving from index-controlled to table-controlled partitioning on page 97.
Secondary indexes
A secondary index is any index that is not a partitioning index. You can create an index on a table to enforce a uniqueness constraint, to cluster data, or most typically to provide access paths to data for queries. The usefulness of an index depends on the columns in its key and on the cardinality of the key. Columns that you use frequently in performing selection, join, grouping, and ordering operations are good candidates for keys. In addition, the number of distinct values in an index key for a large table must be sufficient for DB2 to use the index for data retrieval; otherwise, DB2 could choose to perform a table space scan. A secondary index can be partitioned or not. This section discusses the two types of secondary indexes:
v Nonpartitioned secondary index (NPSI)
v Data-partitioned secondary index (DPSI)

Nonpartitioned secondary index (NPSI): A nonpartitioned secondary index is any index that is not defined as a partitioning index or a partitioned index. You can create a nonpartitioned secondary index on a table that resides in a partitioned table space or a nonpartitioned table space. For example, assume that the transaction date in the POSTED column is the partitioning key of the TRANS table and that the rows are ordered by the transaction date. To create an index on the STATE column, issue the following statement:
CREATE INDEX IX4 ON TRANS(STATE);
DB2 can use this index to access data with a particular value for STATE. However, if the query includes a predicate that references only a single partition of the table, the keys for that partition are scattered throughout the index. A better solution for accessing single partitions is data-partitioned secondary indexes (DPSIs).

Data-partitioned secondary index (DPSI): A data-partitioned secondary index is any index that is not defined as a partitioning index but is defined as a partitioned index. You can create a partitioned secondary index only on a table that resides in a partitioned table space. The partitioning scheme is the same as that of the data in the underlying table. That is, the index entries that reference data in physical partition 1 of a table reside in physical partition 1 of the index, and so on. DB2 puts a data-partitioned secondary index into a REBUILD-pending (RBDP) state if you create the index after performing any of the following actions:
v Create a partitioned table space
v Create a partitioning index
v Insert a row into a table
In this situation, the last partition of the table space is set to REORG-pending (REORP) restrictive status.

Advantages and disadvantages of DPSIs: The use of data-partitioned secondary indexes promotes partition independence and therefore provides the following performance advantages, among others:
v Eliminates contention between parallel LOAD PART jobs that target different partitions of a table space
v Facilitates partition-level operations such as adding a new partition or rotating the first partition to be the last partition
v Improves the recovery time of secondary indexes on partitioned table spaces
However, the use of data-partitioned secondary indexes does not always improve the performance of queries. For example, for queries with predicates that reference only the columns in the key of the DPSI, DB2 must probe each partition of the index for values that satisfy the predicate.

Example: Assume that the transaction date in the POSTED column is the partitioning key of the table and that the rows are ordered by the transaction date. You want an index on the STATE column that is partitioned the same as the data in the table. Issue the following statement:
CREATE INDEX IX5 ON TRANS(STATE) PARTITIONED;
DB2 can use this index to access data with a particular value for STATE within partitions that are specified in a predicate of a query.

Example: Assume that the transaction date in the POSTED column is the partitioning key of the table. You want a clustering index on the ACCTID column that is partitioned the same as the data in the table. Issue the following statement:
CREATE INDEX IX6 ON TRANS(ACCTID) PARTITIONED CLUSTER;
DB2 orders the rows of the table by the values of the columns in the clustering key and partitions the rows by the values of the limit key that is defined for the underlying table. The data rows are clustered within each partition by the key of the clustering index instead of by the partitioning key.
Using the NOT PADDED clause for indexes with varying-length columns
If you specify the NOT PADDED clause on a CREATE INDEX statement, any varying-length columns in the index key are not padded to their maximum length. If an existing index key includes varying-length columns, you can consider altering the index to use the NOT PADDED clause (see Altering how varying-length index columns are stored on page 93). Using the NOT PADDED clause has several advantages:
v DB2 can use index-only access for the varying-length columns within the index key, which enhances performance.
v DB2 stores only actual data, which reduces the storage requirements for the index key.
However, using the NOT PADDED clause might also have several disadvantages:
v Index key comparisons are slower because DB2 must compare each pair of corresponding varying-length columns individually instead of comparing the entire key when the columns are padded to their maximum length.
v DB2 stores an additional 2-byte length field for each varying-length column. Therefore, if the length of the padding (to the maximum length) is less than or equal to 2 bytes, the storage requirements could actually be greater for varying-length columns that are not padded.
Recommendation: Use the NOT PADDED clause to implement index-only access if your application typically accesses varying-length columns.
Tip: Use the PAD INDEXES BY DEFAULT option on installation panel DSNTIPE to control whether varying-length columns are padded by default.
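A minimal sketch of the clause follows. CUSTNAME is a hypothetical VARCHAR column, because none of the TRANS columns listed earlier are varying-length:

  CREATE INDEX IX7
    ON TRANS (CUSTNAME)
    NOT PADDED;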
DB2 can use any of the following index keys to satisfy the ordering:
v CODE, DATE DESC, TIME ASC
v CODE, DATE ASC, TIME DESC
v DATE DESC, TIME ASC
v DATE ASC, TIME DESC
DB2 can ignore the CODE column in the ORDER BY clause and the index because the value of the CODE column in the result table of the query has no effect on the order of the data. If the CODE column is included, it can be in any position in the ORDER BY clause and in the index.
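The query that this discussion assumes is not reproduced here; a hedged reconstruction is a query in which CODE is bound to a single value and the result is ordered by DATE descending and TIME ascending (the table name T and the literal are placeholders):

  SELECT CODE, DATE, TIME
    FROM T
    WHERE CODE = 'A'
    ORDER BY CODE, DATE DESC, TIME;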
Using schemas
A schema is a collection of named objects. The objects that a schema can contain include tables, indexes, table spaces, distinct types, functions, stored procedures, and triggers. An object is assigned to a schema when it is created. When a table, index, table space, distinct type, function, stored procedure, or trigger is created, it is given a qualified two-part name. The first part is the schema name (or the qualifier), which is either implicitly or explicitly specified. The default schema is the authorization ID of the owner of the plan or package. The second part is the name of the object.
Creating a schema
You can create a schema with the schema processor by using the CREATE SCHEMA statement. CREATE SCHEMA cannot be embedded in a host program or executed interactively. To process the CREATE SCHEMA statement, you must use the schema processor, as described in Processing schema definitions on page 59. The ability to process schema definitions is provided for conformance to ISO/ANSI standards. The result of processing a schema definition is identical to the result of executing the SQL statements without a schema definition.
Outside of the schema processor, the order of statements is important. They must be arranged so that all referenced objects have been previously created. This restriction is relaxed when the statements are processed by the schema processor if the object table is created within the same CREATE SCHEMA. The requirement that all referenced objects have been previously created is not checked until all of the statements have been processed. For example, within the context of the schema processor, you can define a constraint that references a table that does not exist yet or GRANT an authorization on a table that does not exist yet. Figure 5 is an example of schema processor input that includes the definition of a schema.
CREATE SCHEMA AUTHORIZATION SMITH

  CREATE TABLE TESTSTUFF
    (TESTNO   CHAR(4),
     RESULT   CHAR(4),
     TESTTYPE CHAR(3))

  CREATE TABLE STAFF
    (EMPNUM  CHAR(3) NOT NULL,
     EMPNAME CHAR(20),
     GRADE   DECIMAL(4),
     CITY    CHAR(15))

  CREATE VIEW STAFFV1
    AS SELECT * FROM STAFF
       WHERE GRADE >= 12

  GRANT INSERT ON TESTSTUFF TO PUBLIC

  GRANT ALL PRIVILEGES ON STAFF TO PUBLIC

Figure 5. Example of schema processor input
Using delimited input and output files: The LOAD and UNLOAD utilities can accept or produce a delimited file, which is a sequential BSAM file with row delimiters and column delimiters. You can unload data from other systems into one or more files that use a delimited file format and then use these delimited files as input for the LOAD utility. You can also unload DB2 data into delimited files by using the UNLOAD utility and then use these files as input into another DB2 database.

Using the INCURSOR option: The INCURSOR option of the LOAD utility specifies a cursor for the input data set. Use the EXEC SQL utility control statement to declare the cursor before running the LOAD utility. You define the cursor so that it selects data from another DB2 table. The column names in the SELECT statement must be identical to the column names of the table that is being loaded. The INCURSOR option uses the DB2 UDB family cross-loader function. (A sketch of this technique appears at the end of these notes.)

Using the CCSID option: You can load input data into ASCII, EBCDIC, or Unicode tables. The ASCII, EBCDIC, and UNICODE options on the LOAD utility statement let you specify whether the format of the data in the input file is ASCII, EBCDIC, or Unicode. The CCSID option of the LOAD utility statement lets you specify the CCSIDs of the data in the input file. If the CCSID of the input data does not match the CCSID of the table space, the input fields are converted to the CCSID of the table space before they are loaded.

Availability during load: For nonpartitioned table spaces, data in the table space that is being loaded is unavailable to other application programs during the load operation, with the exception of LOAD SHRLEVEL CHANGE. In addition, some SQL statements, such as CREATE, DROP, and ALTER, might experience contention when they run against another table space in the same DB2 database while the table is being loaded.

Default values for columns: When you load a table and do not supply a value for one or more of the columns, the action DB2 takes depends on the circumstances.
v If the column is not a ROWID or identity column, DB2 loads the default value of the column, which is specified by the DEFAULT clause of the CREATE or ALTER TABLE statement.
v If the column is a ROWID column that uses the GENERATED BY DEFAULT option, DB2 generates a unique value.
v If the column is an identity column that uses the GENERATED BY DEFAULT option, DB2 provides a specified value.
For ROWID or identity columns that use the GENERATED ALWAYS option, you cannot supply a value because this option means that DB2 always provides a value.

LOB columns: The LOAD utility treats LOB columns as varying-length data. The length value for a LOB column must be 4 bytes. The LOAD utility can be used to load LOB data if the length of the row, including the length of the LOB data, does not exceed 32 KB. The auxiliary tables are loaded when the base table is loaded. You cannot specify the name of the auxiliary table to load.

Replacing or adding data: You can use LOAD REPLACE to replace data in a single-table table space or in a multiple-table table space. You can replace all the data in a table space (using the REPLACE option), or you can load new records into a table space without destroying the rows that are already there (using the RESUME option).
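Returning to the INCURSOR option: the following is a minimal cross-loader sketch, assuming a source table OLDTRANS whose columns match the TRANS table being loaded (both names are illustrative):

  EXEC SQL
    DECLARE C1 CURSOR FOR
      SELECT ACCTID, STATE, POSTED
        FROM OLDTRANS
  ENDEXEC

  LOAD DATA INCURSOR(C1) REPLACE
    INTO TABLE TRANS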
COPY-pending status: LOAD places a table space in the COPY-pending state if you load with LOG NO, which you might do to save space in the log. Immediately after that operation, DB2 cannot recover the table space. However, you can recover the table space by loading it again. Prepare for recovery, and remove the restriction, by making a full image copy using SHRLEVEL REFERENCE. (If you end the COPY job before it is finished, the table space is still in COPY-pending status.) When you use REORG or LOAD REPLACE with the COPYDDN keyword, a full image copy data set (SHRLEVEL REF) is created during the execution of the REORG or LOAD utility. This full image copy is known as an inline copy. The table space is not left in COPY-pending state, regardless of which LOG option is specified for the utility. The inline copy is valid only if you replace the entire table space or partition. If you request an inline copy by specifying COPYDDN in a LOAD utility statement and you specify LOAD RESUME YES or LOAD RESUME NO without REPLACE, an error message is issued and the LOAD terminates.

REBUILD-pending status: LOAD places all the index spaces for a table space in the REBUILD-pending status if you end the job (using -TERM UTILITY) before it completes the INDEXVAL phase. It places the table space itself in RECOVER-pending status if you end the job before it completes the RELOAD phase.

CHECK-pending status: LOAD places a table space in the CHECK-pending status if its referential or check integrity is in doubt. Because of this restriction, use of the CHECK DATA utility is recommended. That utility locates and, optionally, removes invalid data. If the CHECK DATA utility removes invalid data, the remaining data satisfies all referential and table check constraints, and the CHECK-pending restriction is lifted. LOAD does not set the CHECK-pending status for informational referential constraints.
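For example, a full image copy that removes COPY-pending status might look like the following sketch; the database and table space names are DB2 sample names, standing in for your own:

  COPY TABLESPACE DSN8D81A.DSN8S81E
    SHRLEVEL REFERENCE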
If you write an application program to load data into tables, you use that form of INSERT, probably with host variables instead of the actual values shown in this example.
The statement loads TEMPDEPT with data from the department table about all departments that report to department D01.

If you embed the INSERT statement in an application program, you can use a form of the statement that inserts multiple rows into a table from the values that are provided in host variable arrays. In this form, you specify the table name, the columns into which the data is to be inserted, and the arrays that contain the data. Each array corresponds to a column. For example, you can load TEMPDEPT with the number of rows in the host variable num-rows by using the following embedded INSERT statement:
EXEC SQL
  INSERT INTO SMITH.TEMPDEPT
    FOR :num-rows ROWS
    VALUES (:hva1, :hva2, :hva3, :hva4);
Assume that the host variable arrays hva1, hva2, hva3, and hva4 are populated with the values that are to be inserted. The number of rows to insert must be less than or equal to the dimension of each host variable array.
v If you are inserting a large number of rows, you can use the LOAD utility. Alternatively, use multiple INSERT statements with predicates that isolate the data that is to be loaded, and then commit after each insert operation.
v When a table, whose indexes are already defined, is populated by using the INSERT statement, both the FREEPAGE and the PCTFREE parameters are ignored. FREEPAGE and PCTFREE are in effect only during a LOAD or REORG operation.
v You can load a value for a ROWID column with an INSERT and fullselect only if the ROWID column is defined as GENERATED BY DEFAULT. If you have a table with a column that is defined as ROWID GENERATED ALWAYS, you can propagate non-ROWID columns from a table with the same definition.
v You cannot use an INSERT statement on system-maintained materialized query tables. For information about materialized query tables, see Registering or changing materialized query tables on page 85.
v REBUILD-pending (RBDP) status is set on a data-partitioned secondary index if you create the index after you insert a row into a table. In addition, the last partition of the table space is set to REORG-pending (REORP) restrictive status.
v When you insert a row into a table that resides in a partitioned table space and the value of the first column of the limit key is null, the result of the INSERT depends on whether DB2 enforces the limit key of the last partition:
  - When DB2 enforces the limit key of the last partition, the INSERT fails if the first column is ascending and MAXVALUE was not specified as the highest value of the key in the last partition. If the first column is ascending and MAXVALUE was specified as the highest value of the key in the last partition, DB2 places NULLs into the last partition.
  - When DB2 enforces the limit key of the last partition, the rows are inserted into the first partition (if the first column is descending).
  - When DB2 does not enforce the limit key of the last partition, the rows are inserted into the last partition (if the first column is ascending) or the first partition (if the first column is descending).
  DB2 enforces the limit key of the last partition for the following table spaces:
  - Table spaces using table-controlled or index-controlled partitioning that are large (DSSIZE greater than, or equal to, 4 GB)
  - Table spaces using table-controlled partitioning that are large or non-large (any DSSIZE)
For the complete syntax of the INSERT statement, see Chapter 5 of DB2 SQL Reference.
SMS manages every new data set that is created after the ALTER STOGROUP statement is executed; SMS does not manage data sets that are created before the execution of the statement. See Migrating to DFSMShsm on page 35 for more considerations for using SMS to manage data sets.
extend a data set, the volumes must have the same device type as the volumes that were used when the data set was defined. The changes you make to the volume list by ALTER STOGROUP have no effect on existing storage. Changes take effect when new objects are defined or when the REORG, RECOVER, or LOAD REPLACE utilities are used on those objects. For example, if you use ALTER STOGROUP to remove volume 22222 from storage group DSN8G810, the DB2 data on that volume remains intact. However, when a new table space is defined using DSN8G810, volume 22222 is not available for space allocation. To force a volume off and add a new volume, follow these steps:
1. Use the SYSIBM.SYSTABLEPART catalog table to determine which table spaces are associated with the storage group. For example, the following query indicates which table spaces use storage group DSN8G810:
SELECT TSNAME, DBNAME
  FROM SYSIBM.SYSTABLEPART
  WHERE STORNAME = 'DSN8G810' AND STORTYPE = 'I';
2. Make an image copy of each table space; for example, COPY TABLESPACE dbname.tsname DEVT SYSDA.
3. Ensure that the table space is not being updated in such a way that the data set might need to be extended. For example, you can stop the table space with the DB2 command STOP DATABASE (dbname) SPACENAM (tsname).
4. Use the ALTER STOGROUP statement to remove the volume that is associated with the old storage group and to add the new volume:
ALTER STOGROUP DSN8G810
  REMOVE VOLUMES (VOL1)
  ADD VOLUMES (VOL2);
Important: When a new volume is added, or when a storage group is used to extend a data set, the volumes must have the same device type as the volumes that were used when the data set was defined.
5. Start the table space with utility-only processing by using the DB2 command START DATABASE (dbname) SPACENAM (tsname) ACCESS(UT).
6. Use the RECOVER or REORG utility to move the data in each table space; for example, RECOVER dbname.tsname. (You can use only the RECOVER utility to move a LOB table space.)
7. Start the table space with START DATABASE (dbname) SPACENAM (tsname).
INDEXBP
  Lets you change the name of the default buffer pool for the indexes within the database. The new default buffer pool is used only for new indexes; existing definitions do not change.
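A sketch of the clause in context follows. DSN8D81A is the sample database name, and BP2 is an arbitrary buffer pool chosen for illustration:

  ALTER DATABASE DSN8D81A
    INDEXBP BP2;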
Another way of unloading data from your old tables and loading into the new tables is by using the INCURSOR option of the LOAD utility. This option uses the DB2 UDB family cross-loader function.
4. Alternatively, instead of unloading the data, you can insert the data from your old tables into the new tables by executing an INSERT statement for each table. For example:
INSERT INTO TB1 SELECT * FROM TA1;
If a table contains a ROWID column or an identity column and you want to keep the existing column values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity column is defined with GENERATED ALWAYS, and you want DB2 to generate new values for that column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect.
5. Drop the table space by executing the statement:
DROP TABLESPACE TS1;
The compression dictionary for the table space is dropped, if one exists. All tables in TS1 are dropped automatically.
6. Commit the DROP statement.
7. Create the new table space, TS1, and grant the appropriate user privileges. You can also create a partitioned table space. You could use the following statements:
CREATE TABLESPACE TS1 IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 4000
    SECQTY 130
  ERASE NO
  NUMPARTS 95
    (PARTITION 45 USING STOGROUP DSN8G810
       PRIQTY 4000
       SECQTY 130
       COMPRESS YES,
     PARTITION 62 USING STOGROUP DSN8G810
       PRIQTY 4000
       SECQTY 130
       COMPRESS NO)
  LOCKSIZE PAGE
  BUFFERPOOL BP1
  CLOSE NO;
8. Create new tables TA1, TA2, TA3, ....
9. Re-create indexes on the tables, and re-grant user privileges on those tables. See Implications of dropping a table on page 89 for more information.
10. Execute an INSERT statement for each table. For example:
INSERT INTO TA1 SELECT * FROM TB1;
If a table contains a ROWID column or an identity column and you want to keep the existing column values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity column is defined with GENERATED ALWAYS, and you want DB2 to generate new values for that column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect (a sketch appears after these steps).
11. Drop table space TS2.
If a table in the table space has been created with RESTRICT ON DROP, you must alter that table to remove the restriction before you can drop the table space.
12. Notify users to re-create any synonyms they had on TA1, TA2, TA3, ....
13. REBIND plans and packages that were invalidated as a result of dropping the table space.
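Steps 4 and 10 mention the OVERRIDING USER VALUE form. A sketch, assuming the target table contains a ROWID or identity column that is defined with GENERATED ALWAYS:

  INSERT INTO TA1
    OVERRIDING USER VALUE
    SELECT * FROM TB1;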
Altering tables
When you alter a table, you do not change the data in the table; you merely change the specifications that you used in creating the table. With the ALTER TABLE statement, you can:
v Add a new column to a table; see Adding a new column to a table.
v Change the data type of a column, with certain restrictions; see Altering the data type of a column on page 73.
v Add or drop a parent key or a foreign key; see Altering a table for referential integrity on page 77.
v Add or drop a table check constraint; see Adding or dropping table check constraints on page 80.
v Add a new partition to a table space; see Adding a partition on page 80.
v Change the boundary between partitions, extend the boundary of the last partition, or rotate partitions; see Altering partitions on page 82.
v Register an existing table as a materialized query table, change the attributes of a materialized query table, or change a materialized query table to a base table; see Registering or changing materialized query tables on page 85.
v Change the VALIDPROC clause; see Altering the assignment of a validation routine on page 86.
v Change the DATA CAPTURE clause; see Altering a table for capture of changed data on page 87.
v Change the AUDIT clause, using the options ALL, CHANGES, or NONE. For the effects of the AUDIT value, see Chapter 12, Protecting data sets through RACF, on page 281.
v Add or drop the restriction on dropping the table and the database and table space that contain the table; see DB2 SQL Reference.
v Alter the length of a VARCHAR column using the SET DATA TYPE VARCHAR clause; see DB2 SQL Reference.
In addition, this section includes techniques for making the following changes:
v Changing an edit procedure or a field procedure on page 87
v Altering the subtype of a string column on page 88
v Altering the attributes of an identity column on page 88
v Changing data types by dropping and re-creating the table on page 88
v Moving a table to a table space of a different page size on page 91
TIME, or TIMESTAMP, you also specify the DEFAULT keyword, and you do not specify a constant (that is, you use the system default value). However, to use the new column in a program, you need to modify and recompile the program, and bind the plan or package again. You also might need to modify any program containing a static SQL statement SELECT *, which returns the new column after the plan or package is rebound. You also must modify any INSERT statement that does not contain a column list.

Access time to the table is not affected immediately, unless the record was previously fixed length. If the record was fixed length, the addition of a new column causes DB2 to treat the record as variable length, and then access time is affected immediately. To change the records to fixed length, follow these steps:
1. Run REORG with COPY on the table space, using the inline copy.
2. Run the MODIFY utility with the DELETE option to delete records of all image copies that were made before the REORG you ran in step 1 (a sketch of steps 1 and 2 appears at the end of this discussion).
3. Create a unique index if you add a column that specifies PRIMARY KEY.

Inserting values in the new column might also degrade performance by forcing rows onto another physical page. You can avoid this situation by creating the table space with enough free space to accommodate normal expansion. If you already have this problem, run REORG on the table space to fix it.

You can define the new column as NOT NULL by using the DEFAULT clause unless the column has a ROWID data type or is an identity column. If the column has a ROWID data type or is an identity column, you must specify NOT NULL without the DEFAULT clause. You can let DB2 choose the default value, or you can specify a constant or the value of the CURRENT SQLID or USER special register as the value to be used as the default. When you retrieve an existing row from the table, a default value is provided for the new column. Except in the following cases, the value for retrieval is the same as the value for insert:
v For columns of data type DATE, TIME, and TIMESTAMP, the retrieval defaults are:
    Data type    Default for retrieval
    DATE         0001-01-01
    TIME         00.00.00
    TIMESTAMP    0001-01-01-00.00.00.000000
v For DEFAULT USER and DEFAULT CURRENT SQLID, the retrieved value for rows that existed before the column was added is the value of the special register when the column was added.

If the new column is a ROWID column, DB2 returns the same, unique row ID value for a row each time you access that row. Reorganizing a table space does not affect the values of a ROWID column. You cannot use the DEFAULT clause for ROWID columns.

If the new column is an identity column (a column that is defined with the AS IDENTITY clause), DB2 places the table space in REORG-pending (REORP) status, and access to the table space is restricted until the table space is reorganized. When the REORG utility is run, DB2:
v Generates a unique value for the identity column of each existing row
v Physically stores these values in the database
v Removes the REORP status
You cannot use the DEFAULT clause for identity columns. For more information about identity columns, see DB2 SQL Reference.
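A hedged sketch of steps 1 and 2 as utility control statements follows. The sample names DSN8D81A.DSN8S81E stand in for your table space, SYSCOPY is the conventional inline-copy DD name, and 2009152 is a placeholder Julian date representing the day of the REORG (MODIFY RECOVERY DELETE DATE removes recovery records written before that date):

  REORG TABLESPACE DSN8D81A.DSN8S81E
    LOG NO COPYDDN(SYSCOPY)

  MODIFY RECOVERY TABLESPACE DSN8D81A.DSN8S81E
    DELETE DATE(2009152)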
If the new column is a short string column, you can specify a field procedure for it; see Field procedures on page 1093. If you do specify a field procedure, you cannot also specify NOT NULL. The following example adds a new column to the table DSN8810.DEPT, which contains a location code for the department. The column name is LOCATION_CODE, and its data type is CHAR (4).
ALTER TABLE DSN8810.DEPT
  ADD LOCATION_CODE CHAR(4);
The columns, as currently defined, have the following problems:
v The ACCTID column allows for only 9999 customers.
v The NAME and ADDRESS columns were defined as fixed-length columns, which means that some of the longer values are truncated and some of the shorter values are padded with blanks.
v The BALANCE column allows for amounts up to 99 999 999.99, but inflation rates demand that this column hold larger numbers.
By altering the column data types in the following ways, you can make the columns more appropriate for the data that they contain. The INSERT statement that follows shows the kinds of values that you can now store in the ACCOUNTS table.
ALTER TABLE ACCOUNTS ALTER COLUMN NAME    SET DATA TYPE VARCHAR(40);
ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID  SET DATA TYPE INTEGER;
COMMIT;
INSERT INTO ACCOUNTS (ACCTID, NAME, ADDRESS, BALANCE)
  VALUES (123456, 'LAGOMARSINO, MAGDALENA',
          '1275 WINTERGREEN ST, SAN FRANCISCO, CA, 95060', 0);
COMMIT;
The NAME and ADDRESS columns can now handle longer values without truncation, and the shorter values are no longer padded. The BALANCE column is extended to allow for larger dollar amounts. DB2 saves these new formats in the catalog and stores the inserted row in the new formats.

Recommendation: If you change both the length and the type of a column from fixed-length to varying-length by using one or more ALTER statements, issue the ALTER statements within the same unit of work. Reorganize immediately so that the format is consistent for all of the data rows in the table.
When the data type of the ACCTID column is altered from DECIMAL(4,0) to INTEGER, the IX1 index is placed in a REBUILD-pending (RBDP) state. Similarly, when the data type of the NAME column is altered from CHAR(20) to VARCHAR(40), the IX2 index is placed in an RBDP state. These indexes cannot be accessed until they are rebuilt from the data.

Index inaccessibility and data availability: Whenever possible, DB2 tries to avoid using inaccessible indexes in an effort to increase data availability. Beginning in Version 8, DB2 allows you to insert into, and delete from, tables that have non-unique indexes that are currently in an RBDP state. DB2 also allows you to delete from tables that have unique indexes that are currently in an RBDP state. In certain situations, when an index is inaccessible, DB2 can bypass the index to allow applications access to the underlying data. In these situations, DB2 offers accessibility at the expense of possibly degraded performance. In making its determination of the best access path, DB2 can bypass an index under the following circumstances:
v Dynamic PREPAREs
DB2 avoids choosing an index that is in an RBDP state. Bypassing the index typically degrades performance, but provides availability that would not be possible otherwise.
v Cached PREPAREs
DB2 avoids choosing an index that is both in an RBDP state and within a cached PREPARE statement, because the dynamic cache for the table is invalidated whenever an index is put into, or taken out of, an RBDP state. Avoiding indexes that are within cached PREPARE statements ensures that after an index is rebuilt, DB2 uses the index for all future queries.
v PADDED and NOT PADDED indexes
DB2 avoids choosing a PADDED index or a NOT PADDED index that is currently in an RBDP state for any static or dynamic SQL statements.
In the case of static BINDs, DB2 might choose an index that is currently in an RBDP state as the best access path. It does so by making the optimistic assumption that the index will be available by the time it is actually used. (If the index is not available at that time, an application can receive a resource unavailable message.)

Padding: Whether an index is padded or not padded depends on when the index was created and whether the index contains any varying-length columns. In pre-Version 8 releases, an index is padded by default. In new-function mode for Version 8 or later, an index is not padded by default for new installations of DB2, but is padded for systems that migrated forward from at least Version 7. In Version 8 or later, this default behavior is specified by the value of the PAD INDEX BY DEFAULT parameter on the DSNTIPE installation panel, which can be set to YES or NO.

When an index is not padded, the value of the PADDED column of the SYSINDEXES table is set to N. An index is considered not padded only when it is created with at least one varying-length column and either:
v The NOT PADDED keyword is specified, or
v The default padding value is NO.
When an index is padded, the value of the PADDED column of the SYSINDEXES table is set to Y. An index is padded if it is created with at least one varying-length column and either:
v The PADDED keyword is specified, or
v The default padding is YES.

In the example of the ACCOUNTS table, the IX2 index retains its padding attribute. The padding attribute of an index is altered only if the value is inconsistent with the current state of the index. The value can be inconsistent, for example, if you change the value of the PADDED column in the SYSINDEXES table after creating the index.

Consider the following information when you migrate indexes from one version of DB2 to the next version, or when you install a new DB2 subsystem and create indexes:
v If the index was migrated from a pre-Version 8 release, the index is padded by default. In this case, the value of the PADDED column of the SYSINDEXES table is blank (PADDED = ' '). The PADDED column is also blank when there are no varying-length columns.
v If a subsystem has been migrated from Version 7 to Version 8 compatibility mode or new-function mode, the default is to pad all indexes that have a key with at least one varying-length column. In this case, the value of the PADDED column of the SYSINDEXES table is YES (PADDED = Y).
v If an installed subsystem is new to Version 8 or later, and the installation is done directly into new-function mode for Version 8 or later, indexes created with at least one varying-length column are not padded by default. In this case, the PADDED column of the SYSINDEXES table is set to NO (PADDED = N).
Reorganizing table spaces: After you commit a schema change, DB2 puts the affected table space into an advisory REORG-pending (AREO*) state. The table space stays in this state until you run the REORG TABLESPACE utility, which reorganizes the table space and applies the schema changes.

DB2 uses table space versions to maximize data availability. Table space versions enable DB2 to keep track of schema changes and, simultaneously, provide users
with access to data in altered table spaces. When users retrieve rows from an altered table, the data is displayed in the format that is described by the most recent schema definition, even though the data is not currently stored in this format. The most recent schema definition is associated with the current table space version.

Although data availability is maximized by the use of table space versions, performance might suffer because DB2 does not automatically reformat the data in the table space to conform to the most recent schema definition. DB2 defers any reformatting of existing data until you reorganize the table space with the REORG TABLESPACE utility. The more ALTER statements you commit between reorganizations, the more table space versions DB2 must track, and the more performance can suffer.

Recommendation: Run the REORG TABLESPACE utility as soon as possible after a schema change to correct any performance degradation that might occur and to keep performance at its highest level.

Recycling table space version numbers: DB2 can store up to 256 table space versions, numbered sequentially from 0 to 255. (The next consecutive version number after 255 is 1. Version number 0 is never reused; it is reserved for the original version of the table space.) The versions that are associated with schema changes that have not been applied yet are considered to be in use, and the range of used versions is stored in the catalog. In-use versions can be recovered from image copies of the table space, if necessary.

To determine the range of version numbers currently in use for a table space, query the OLDEST_VERSION and CURRENT_VERSION columns of the SYSIBM.SYSTABLESPACE catalog table (a sketch follows at the end of this discussion).

To prevent DB2 from running out of table space version numbers (and to prevent subsequent ALTER statements from failing), you must recycle unused table space version numbers regularly by running the MODIFY RECOVERY utility. Version numbers are considered to be unused if the schema changes that are associated with them have been applied and there are no image copies that contain data at those versions. If all reusable version numbers (1 to 255) are currently in use, you must reorganize the table space by running REORG TABLESPACE before you can recycle the version numbers. For more information about managing table space version numbers, see DB2 Utility Guide and Reference.
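A sketch of that catalog query follows; the database and table space names are placeholders for your own:

  SELECT OLDEST_VERSION, CURRENT_VERSION
    FROM SYSIBM.SYSTABLESPACE
    WHERE DBNAME = 'DSN8D81A'
      AND NAME = 'DSN8S81E';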
Suppose that you want to define relationships among the sample tables by adding primary and foreign keys with the ALTER TABLE statement. The rules for these relationships are as follows:
v An existing table must have a unique index on its primary key columns before you can add the primary key. The index becomes the primary index.
v The parent key of the parent table must be added before the corresponding foreign key of the dependent table.
You can build the same referential structure in several different ways. The following example sequence does not have the fewest number of possible operations, but it is perhaps the simplest to understand.
1. Create a unique index on the primary key columns for any table that does not already have one.
2. For each table, issue an ALTER TABLE statement to add its primary key. In the next steps, you issue an ALTER TABLE statement to add foreign keys for each table except the activity table. This leaves the table space in CHECK-pending status, which you reset by running the CHECK DATA utility with the DELETE(YES) option. CHECK DATA deletes are not bound by delete rules; they cascade to all descendents of a deleted row, which can be disastrous. For example, if you delete the row for department A00 from the department table, the delete might propagate through most of the referential structure. The remaining steps prevent deletion from more than one table at a time.
3. Add the foreign keys for the department table and run CHECK DATA DELETE(YES) on its table space (a sketch of a CHECK DATA invocation appears after these steps). Correct any rows in the exception table, and use INSERT to replace them in the department table. This table is now consistent with existing data.
4. Drop the foreign key on MGRNO in the department table. This drops the association of the department table with the employee table, without changing the data of either table.
5. Add the foreign key to the employee table, run CHECK DATA again, and correct any errors. If errors are reported, be particularly careful not to make any row inconsistent with the department table when you make corrections.
6. Again, add the foreign key on MGRNO to the department table, which again leaves the table space in CHECK-pending status. Run CHECK DATA. If you have not changed the data since the previous check, you can use DELETE(YES) with no fear of cascading deletions.
7. For each of the following tables, in the order shown, add its foreign keys, run CHECK DATA DELETE(YES), and correct any rows that are in error:
   a. Project table
   b. Project activity table
   c. Employee to project activity table
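A hedged sketch of the CHECK DATA invocation that these steps describe follows. The table space and exception-table names are illustrative; an exception table must exist and be mapped with FOR EXCEPTION when DELETE YES is specified:

  CHECK DATA TABLESPACE DSN8D81A.DSN8S81D
    FOR EXCEPTION IN DSN8810.DEPT USE DSN8810.EDEPT
    DELETE YES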
Adding a unique key: To add a unique key to an existing table, use the UNIQUE clause of the ALTER TABLE statement. For example, if the department table has a unique index defined on column DEPTNAME, you can add a unique key constraint, KEY_DEPTNAME, consisting of column DEPTNAME by issuing the following statement:
ALTER TABLE DSN8810.DEPT ADD CONSTRAINT KEY_DEPTNAME UNIQUE (DEPTNAME);
Adding a foreign key: To add a foreign key to an existing table, use the FOREIGN KEY clause of the ALTER TABLE statement. The parent key must exist in the parent table before you add the foreign key. For example, if the department table has a primary key defined on the DEPTNO column, you can add a referential constraint, REFKEY_DEPTNO, on the DEPTNO column of the project table by issuing the following statement:
ALTER TABLE DSN8810.PROJ ADD CONSTRAINT REFKEY_DEPTNO FOREIGN KEY (DEPTNO) REFERENCES DSN8810.DEPT ON DELETE RESTRICT;
Considerations: Adding a parent key or a foreign key to an existing table has the following restrictions and implications:
v If you add a primary key, the table must already have a unique index on the key columns. If multiple unique indexes include the primary key columns, the index that was most recently created on the key columns becomes the primary index. Because of the unique index, no duplicate values of the key exist in the table; therefore you do not need to check the validity of the data.
v If you add a unique key, the table must already have a unique index with a key that is identical to the unique key. If multiple unique indexes include the primary key columns, DB2 arbitrarily chooses a unique index on the key columns to enforce the unique key. Because of the unique index, no duplicate values of the key exist in the table; therefore you do not need to check the validity of the data.
v You can use only one FOREIGN KEY clause in each ALTER TABLE statement; if you want to add two foreign keys to a table, you must execute two ALTER TABLE statements.
v If you add a foreign key, the parent key and unique index of the parent table must already exist. Adding the foreign key requires the ALTER privilege on the dependent table and either the ALTER or REFERENCES privilege on the parent table.
v Adding a foreign key establishes a referential constraint relationship, with the many implications that are described in Part 2 of DB2 Application Programming and SQL Guide. DB2 does not validate the data when you add the foreign key. Instead, if the table is populated (or, in the case of a nonsegmented table space, if the table space has ever been populated), the table space that contains the table is placed in CHECK-pending status, just as if it had been loaded with ENFORCE NO. In this case, you need to execute the CHECK DATA utility to clear the CHECK-pending status.
v You can add a foreign key with the NOT ENFORCED option to create an informational referential constraint. This action does not leave the table space in CHECK-pending status, and you do not need to execute CHECK DATA.
Application programs often depend on that identifier. The foreign key defines a referential relationship and a delete rule. Without the key, your application programs must enforce the constraints.
Dropping a foreign key: When you drop a foreign key using the DROP FOREIGN KEY clause of the ALTER TABLE statement, DB2 drops the corresponding referential relationships. (You must have the ALTER privilege on the dependent table and either the ALTER or REFERENCES privilege on the parent table.) If the referential constraint references a unique key that was created implicitly, and no other relationships are dependent on that unique key, the implicit unique key is also dropped.
Dropping a unique key: When you drop a unique key using the DROP UNIQUE clause of the ALTER TABLE statement, DB2 drops all the referential relationships in which the unique key is a parent key. The dependent tables no longer have foreign keys. (You must have the ALTER privilege on any dependent tables.) The table's unique index that enforced the unique key no longer indicates that it enforces a unique key, although it is still a unique index.
Dropping a primary key: When you drop a primary key using the DROP PRIMARY KEY clause of the ALTER TABLE statement, DB2 drops all the referential relationships in which the primary key is a parent key. The dependent tables no longer have foreign keys. (You must have the ALTER privilege on any dependent tables.) The table's primary index is no longer primary, although it is still a unique index.
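For example, the following statements drop the keys that were added in the earlier examples (a sketch; the constraint names match those examples):

ALTER TABLE DSN8810.PROJ
  DROP FOREIGN KEY REFKEY_DEPTNO;
ALTER TABLE DSN8810.DEPT
  DROP UNIQUE KEY_DEPTNAME;
ALTER TABLE DSN8810.DEPT
  DROP PRIMARY KEY;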
Adding a partition
You can use the ALTER TABLE statement to add a partition to an existing partitioned table space and to each partitioned index in the table space.
Restriction: You cannot add a new partition to an existing partitioned table space if the table has LOB columns. Additionally, you cannot add or alter a partition for a materialized query table.
When you add a partition, DB2 uses the next physical partition that is not already in use until you reach the maximum number of partitions for the table space. When DB2 manages your data sets, the next available data set is allocated for the table space and for each partitioned index. When you manage your own data sets, you must first define the data sets for the table space and the partitioned indexes before issuing the ALTER TABLE ADD PARTITION statement.
Example: Assume that a table space that contains a transaction table named TRANS is divided into 10 partitions, and each partition contains one year of data. Partitioning is defined on the transaction date, and the limit key value is the end of the year. Table 18 shows a representation of the table space.
Table 18. Initial table space with 10 partitions
Partition   Limit value   Data set name that backs the partition
P001        12/31/1994    catname.DSNDBx.dbname.psname.I0001.A001
P002        12/31/1995    catname.DSNDBx.dbname.psname.I0001.A002
P003        12/31/1996    catname.DSNDBx.dbname.psname.I0001.A003
P004        12/31/1997    catname.DSNDBx.dbname.psname.I0001.A004
P005        12/31/1998    catname.DSNDBx.dbname.psname.I0001.A005
P006        12/31/1999    catname.DSNDBx.dbname.psname.I0001.A006
P007        12/31/2000    catname.DSNDBx.dbname.psname.I0001.A007
P008        12/31/2001    catname.DSNDBx.dbname.psname.I0001.A008
P009        12/31/2002    catname.DSNDBx.dbname.psname.I0001.A009
P010        12/31/2003    catname.DSNDBx.dbname.psname.I0001.A010
Assume that you want to add a new partition to handle the transactions for the next year. To add a partition, issue the following statement:
ALTER TABLE TRANS ADD PARTITION ENDING AT ('12/31/2004');
What happens:
v DB2 adds a new partition to the table space and to each partitioned index on the TRANS table. For the table space, DB2 uses the existing table space PRIQTY and SECQTY attributes of the previous partition for the space attributes of the new partition. For each partitioned index, DB2 uses the existing PRIQTY and SECQTY attributes of the previous index partition.
v When the ALTER completes, you can use the new partition immediately if the table space is a large table space. In this case, the partition is not placed in a REORG-pending (REORP) state because it extends the high-range values that were not previously used. For non-large table spaces, the partition is placed in a REORG-pending (REORP) state because the last partition boundary was not previously enforced.
Table 19 on page 82 shows a representative excerpt of the table space after the partition for the year 2004 was added.
Table 19. An excerpt of the table space after adding a new partition (P011)
Partition   Limit value   Data set name that backs the partition
P008        12/31/2001    catname.DSNDBx.dbname.psname.I0001.A008
P009        12/31/2002    catname.DSNDBx.dbname.psname.I0001.A009
P010        12/31/2003    catname.DSNDBx.dbname.psname.I0001.A010
P011        12/31/2004    catname.DSNDBx.dbname.psname.I0001.A011
Specifying space attributes: If you want to specify the space attributes for a new partition, use the ALTER TABLESPACE and ALTER INDEX statements. For example, suppose the new partition is PARTITION 11 for the table space and the index. Issue the following statements to specify quantities for the PRIQTY, SECQTY, FREEPAGE, and PCTFREE attributes:
ALTER TABLESPACE tsname ALTER PARTITION 11
  USING STOGROUP stogroup-name
  PRIQTY 200 SECQTY 200
  FREEPAGE 20 PCTFREE 10;

ALTER INDEX index-name ALTER PARTITION 11
  USING STOGROUP stogroup-name
  PRIQTY 100 SECQTY 100
  FREEPAGE 25 PCTFREE 5;
Recommendation: When you create your partitioned table space, you do not need to allocate extra partitions for expected growth. Instead, use ALTER TABLE ADD PARTITION to add partitions as needed.
Altering partitions
You can use the ALTER TABLE statement to change the boundary between partitions, to rotate the first partition to be the last partition, or to extend the boundary of the last partition. Example: Assume that a table space that contains a transaction table named TRANS is divided into 10 partitions, and each partition contains one year of data. Partitioning is defined on the transaction date, and the limit key value is the end of the year. Table 18 on page 81 shows a representation of the table space.
Now the data in the first quarter of the year 2003 is part of partition 9. The partitions on either side of the new boundary (partitions 9 and 10) are placed in REORG-pending (REORP) status and are not available until the partitions are reorganized. Alternatively, you can rebalance the data in partitions 9 and 10 by using the REBALANCE option of the REORG utility:
REORG TABLESPACE dbname.tsname PART(9:10) REBALANCE
This method avoids leaving the partitions in a REORP state. When you use the REBALANCE option on partitions, DB2 automatically changes the limit key values.
Rotating partitions
Assume that the partition structure of the table space is sufficient through the year 2004. Table 19 on page 82 shows a representation of the table space through the year 2004. When another partition is needed for the year 2005, you determine that the data for 1994 is no longer needed. You want to recycle the partition for the year 1994 to hold the transactions for the year 2005. To rotate the first partition in the table TRANS to be the last partition, issue the following statement:
ALTER TABLE TRANS ROTATE PARTITION FIRST TO LAST ENDING AT ('12/31/2005') RESET;
For a table with limit values in ascending order, the data in the ENDING AT clause must be higher than the limit value for previous partitions. DB2 chooses the first partition to be the partition with the lowest limit value. For a table with limit values in descending order, the data must be lower than the limit value for previous partitions. DB2 chooses the first partition to be the partition with the highest limit value. The RESET keyword specifies that the existing data in the oldest partition is deleted, and no delete triggers are activated.
What happens:
v Because the oldest (or first) partition is P001, DB2 assigns the new limit value to P001. This partition holds all rows in the range between the new limit value of 12/31/2005 and the previous limit value of 12/31/2004.
v The RESET operation deletes all existing data. You can use the partition immediately after the ALTER completes. The partition is not placed in REORG-pending (REORP) status because it extends the high-range values that were not previously used.
Table 20 shows a representation of the table space after the first partition is rotated to become the last partition.
Table 20. Rotating the low partition to the end
Partition   Limit value   Data set name that backs the partition
P002        12/31/1995    catname.DSNDBx.dbname.psname.I0001.A002
P003        12/31/1996    catname.DSNDBx.dbname.psname.I0001.A003
P004        12/31/1997    catname.DSNDBx.dbname.psname.I0001.A004
P005        12/31/1998    catname.DSNDBx.dbname.psname.I0001.A005
P006        12/31/1999    catname.DSNDBx.dbname.psname.I0001.A006
P007        12/31/2000    catname.DSNDBx.dbname.psname.I0001.A007
P008        12/31/2001    catname.DSNDBx.dbname.psname.I0001.A008
P009        12/31/2002    catname.DSNDBx.dbname.psname.I0001.A009
P010        12/31/2003    catname.DSNDBx.dbname.psname.I0001.A010
P011        12/31/2004    catname.DSNDBx.dbname.psname.I0001.A011
P001        12/31/2005    catname.DSNDBx.dbname.psname.I0001.A001
Recommendation: When you create a partitioned table space, you do not need to allocate extra partitions for expected growth. Instead, use either ALTER TABLE ADD PARTITION to add partitions as needed or, if rotating partitions is appropriate for your application, use ALTER TABLE ROTATE PARTITION to avoid adding another partition. Execute the RUNSTATS utility after rotating the partition.
Nullable partitioning columns: DB2 lets you use nullable columns as partitioning columns. But with table-controlled partitioning, DB2 can restrict the insertion of null values into a table with nullable partitioning columns, depending on the order of the partitioning key. After a rotate operation:
v If the partitioning key is ascending, DB2 prevents an INSERT of a row with a null value for the key column.
v If the partitioning key is descending, DB2 allows an INSERT of a row with a null value for the key column. The row is inserted into the first partition.
You can use the partition immediately after the ALTER completes. The partition is not placed in REORG-pending (REORP) status because it extends the high-range values that were not previously used.
Changing back to the previous boundary: Table 20 on page 83 shows a representation of the table space through the year 2005, where each year of data is saved into separate partitions. Assume that you changed the limit key for P001 to be 12/31/2006 so that the data for the year 2005 and the data for the year 2006 is saved into one partition. To change the limit key back to 12/31/2005, issue the following statement:
ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT ('12/31/2005');
This partition is placed in REORG-pending (REORP) status because some of the data could fall outside of the boundary that is defined by the limit key value of 12/31/2005. You can take either of the following corrective actions:
v Run REORG with the DISCARD option to clear the REORG-pending status, set the new partition boundary, and discard the data rows that fall outside of the new boundary (see the sample control statement after this discussion).
v Add a new partition for the data rows that fall outside of the current partition boundaries.
Adding a partition when the last partition is in REORP: Assume that you extended the boundary of the last partition and then changed back to the previous boundary for that partition. Table 20 on page 83 shows a representation of the table space through the year 2005. The last partition is in REORP. You want to add a new partition with a limit key value of 12/31/2006. You can use ALTER TABLE ADD PARTITION because this limit key value is higher than the previous limit key value of 12/31/2005.
The new partition is placed in REORG-pending (REORP) status because it inherits the REORP state from the previous partition. You can now reorganize the table space or only the last two partitions without discarding any of the data rows.
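For the first corrective action described above (clearing the REORG-pending status by discarding out-of-range rows), the utility control statement might take the following form (a sketch; dbname.tsname and the partition number follow the earlier examples):

REORG TABLESPACE dbname.tsname PART 1 DISCARD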
You want to take advantage of automatic query rewrite for TRANSCOUNT by registering it as a materialized query table. You can do this by issuing the following ALTER TABLE statement:
ALTER TABLE TRANSCOUNT ADD MATERIALIZED QUERY AS
  (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
     FROM TRANS
     GROUP BY ACCTID, LOCID, YEAR)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY USER;
This statement registers TRANSCOUNT with its associated subselect as a materialized query table, and DB2 can now use it in automatic query rewrite. The data in TRANSCOUNT remains the same, as specified by the DATA INITIALLY DEFERRED option. You can still maintain the data, as specified by the MAINTAINED BY USER option, which means that you can continue to load, insert, update, or delete data. You can also use the REFRESH TABLE statement to populate the table.
REFRESH DEFERRED indicates that the data in the table is the result of your most recent update or, if more recent, the result of a REFRESH TABLE statement. The REFRESH TABLE statement deletes all the rows in a materialized query table, executes the fullselect in its definition, inserts the result into the table, and updates the catalog with the refresh timestamp and cardinality of the table. For more information about the REFRESH TABLE statement, see Chapter 5 of DB2 SQL Reference.
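You can also change a materialized query table back to a base table by dropping its materialized query definition. A statement of the following form does this (a sketch that continues the TRANSCOUNT example):

ALTER TABLE TRANSCOUNT DROP MATERIALIZED QUERY;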
After you issue this statement, DB2 can no longer use the table for query optimization, and you cannot populate the table by using the REFRESH TABLE statement.
v You can disable automatic query rewrite with the DISABLE QUERY OPTIMIZATION option. To change back to automatic query rewrite, use the ENABLE QUERY OPTIMIZATION option.
v You can switch between system-maintained and user-maintained. By default, a materialized query table is system-maintained; the only way you can change the data is by using the REFRESH TABLE statement. To change to a user-maintained materialized query table, issue the following statement:
ALTER TABLE TRANSCOUNT SET MAINTAINED BY USER;
To change back to a system-maintained materialized query table, use the MAINTAINED BY SYSTEM option.
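For example, the following statement (mirroring the preceding one) makes TRANSCOUNT system-maintained again:

ALTER TABLE TRANSCOUNT SET MAINTAINED BY SYSTEM;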
v Assign a new validation routine to the table using the VALIDPROC clause. (Only one validation routine can be connected to a table at a time; so if a validation routine already exists, DB2 disconnects the old one and connects the new routine.) Rows that existed before the connection of a new validation routine are not validated. In this example, the previous validation routine is disconnected and a new routine is connected with the program name EMPLNEWE:
ALTER TABLE DSN8810.EMP VALIDPROC EMPLNEWE;
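To disconnect the current validation routine from a table without connecting a new one, specify VALIDPROC NULL, as in this brief sketch that uses the same sample table:

ALTER TABLE DSN8810.EMP VALIDPROC NULL;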
To ensure that the rows of a table conform to a new validation routine, you must run the validation routine against the old rows. One way to accomplish this is to use the REORG and LOAD utilities as shown in the following steps: 1. Use REORG to reorganize the table space that contains the table with the new validation routine. Specify UNLOAD ONLY, as in this example:
REORG TABLESPACE DSN8D81A.DSN8S81E UNLOAD ONLY
This step creates a data set that is used as input to the LOAD utility. 2. Run LOAD with the REPLACE option, and specify a discard data set to hold any invalid records. For example:
LOAD INTO TABLE DSN8810.EMP
  REPLACE FORMAT UNLOAD
  DISCARDDN SYSDISC
The EMPLNEWE validation routine validates all rows after the LOAD step has completed. DB2 copies any invalid rows into the SYSDISC data set.
2. Modify the code of the edit procedure or the field procedure.
3. After the unload operation is completed, stop DB2.
4. Link-edit the modified procedure, using its original name.
5. Start DB2.
6. Use the LOAD utility to reload the data. LOAD then uses the modified procedure or field procedure to encode the data.
To change an edit procedure or a field procedure for a table space in which the maximum record length is greater than 32 KB, use the DSNTIAUL sample program to unload the data.
Be very careful about dropping a table; in most cases, recovering a dropped table is nearly impossible. If you decide to drop a table, remember that such changes might invalidate a plan or a package. You must alter tables that have been created with RESTRICT ON DROP to remove the restriction before you can drop them.
3. Commit the changes.
4. Re-create the table. If the table has an identity column:
   v Choose carefully the new value for the START WITH attribute of the identity column in the CREATE TABLE statement if you want the first generated value for the identity column of the new table to resume the sequence after the last generated value for the table that was saved by the unload in step 1.
   v Define the identity column as GENERATED BY DEFAULT so that the previously generated identity values can be reloaded into the new table (see the sketch after this list).
5. Reload the table.
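For example, a re-created table with an identity column might be defined as follows (a minimal sketch; the table name, columns, and START WITH value are hypothetical):

CREATE TABLE DSN8810.ORDERS
  (ORDERNO INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 100001),
   CUSTNO CHAR(6) NOT NULL);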
The statement deletes the row in the SYSIBM.SYSTABLES catalog table that contains information about DSN8810.PROJ. It also drops any other objects that depend on the project table. As a result:
v The column names of the table are dropped from SYSIBM.SYSCOLUMNS.
v If the dropped table has an identity column, the sequence attributes of the identity column are removed from SYSIBM.SYSSEQUENCES.
v If triggers are defined on the table, they are dropped, and the corresponding rows are removed from SYSIBM.SYSTRIGGERS and SYSIBM.SYSPACKAGES.
v Any views based on the table are dropped.
v Application plans or packages that involve the use of the table are invalidated.
v Cached dynamic statements that involve the use of the table are removed from the cache.
v Synonyms for the table are dropped from SYSIBM.SYSSYNONYMS.
v Indexes created on any columns of the table are dropped.
v Referential constraints that involve the table are dropped. In this case, the project table is no longer a dependent of the department and employee tables, nor is it a parent of the project activity table.
v Authorization information that is kept in the DB2 catalog authorization tables is updated to reflect the dropping of the table. Users who were previously authorized to use the table, or views on it, no longer have those privileges, because catalog rows are deleted.
v Access path statistics and space statistics for the table are deleted from the catalog.
v The storage space of the dropped table might be reclaimed. If the table space containing the table is:
  - Implicitly created (using CREATE TABLE without the TABLESPACE clause), the table space is also dropped. If the data sets are in a storage group, dropping the table space reclaims the space. For user-managed data sets, you must reclaim the space yourself.
  - Partitioned, or contains only the one table, you can drop the table space.
  - Segmented, DB2 reclaims the space.
  - Simple, and contains other tables, you must run the REORG utility to reclaim the space.
v If the table contains a LOB column, the auxiliary table and the index on the auxiliary table are dropped. The LOB table space is dropped if it was created with SQLRULES(STD). See DB2 SQL Reference for details.
If a table has a partitioning index, you must drop the table space or use LOAD REPLACE when loading the redefined table. If the CREATE TABLE that is used to redefine the table creates a table space implicitly, commit the DROP statement before re-creating a table by the same name. You must also commit the DROP statement before you create any new indexes with the same name as the original indexes.
Finding dependent packages: The next example lists the packages, identified by the package name, collection ID, and consistency token (in hexadecimal representation), that are affected if you drop the project table:
SELECT DNAME, DCOLLID, HEX(DCONTOKEN)
  FROM SYSIBM.SYSPACKDEP
  WHERE BNAME = 'PROJ'
    AND BQUALIFIER = 'DSN8810'
    AND BTYPE = 'T';
Finding dependent plans: The next example lists the plans, identified by plan name, that are affected if you drop the project table:
SELECT DNAME
  FROM SYSIBM.SYSPLANDEP
  WHERE BNAME = 'PROJ'
    AND BCREATOR = 'DSN8810'
    AND BTYPE = 'T';
Finding other dependencies: In addition, the SYSIBM.SYSINDEXES table tells you what indexes currently exist on a table. From the SYSIBM.SYSTABAUTH table, you can determine which users are authorized to use the table. For more information about what you can retrieve from the DB2 catalog tables, see Appendix G of DB2 SQL Reference.
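For example, queries of the following form list the indexes on the project table and the authorization IDs with privileges on it (a sketch; only a few of the available columns are selected):

SELECT NAME, CREATOR
  FROM SYSIBM.SYSINDEXES
  WHERE TBNAME = 'PROJ'
    AND TBCREATOR = 'DSN8810';

SELECT GRANTEE
  FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'PROJ'
    AND TCREATOR = 'DSN8810';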
Re-creating a table
To re-create a DB2 table to decrease the length attribute of a string column or the precision of a numeric column, follow these steps:
1. If you do not have the original CREATE TABLE statement and all authorization statements for the table (call it T1), query the catalog to determine its description, the description of all indexes and views on it, and all users with privileges on it.
2. Create a new table (call it T2) with the desired attributes.
3. Copy the data from the old table T1 into the new table T2 by using one of the following methods:
   v Execute the following INSERT statement:
INSERT INTO T2 SELECT * FROM T1;
   v Load data from your old table into the new table by using the INCURSOR option of the LOAD utility. This option uses the DB2 UDB family cross-loader function.
4. Execute the statement DROP TABLE T1. If T1 is the only table in an explicitly created table space, and you do not mind losing the compression dictionary, if one exists, drop the table space instead, so that the space is reclaimed.
5. Commit the DROP statement.
6. Use the statement RENAME TABLE to rename table T2 to T1.
7. Run the REORG utility on the table space that contains table T1.
8. Notify users to re-create any synonyms, indexes, views, and authorizations they had on T1.
If you want to change a data type from string to numeric or from numeric to string (for example, INTEGER to CHAR or CHAR to INTEGER), use the CHAR and DECIMAL scalar functions in the SELECT statement to do the conversion. Another alternative is to use the following method:
1. Use UNLOAD or REORG UNLOAD EXTERNAL (if the data to unload is less than 32 KB) to save the data in a sequential file, and then
2. Use the LOAD utility to repopulate the table after re-creating it. When you reload the table, make sure you edit the LOAD statement to match the new column definition.
This method is particularly appealing when you are trying to re-create a large table.
8. Reload the table using data from the SYSRECnn data set and the control statements from the SYSPUNCH data set, which was created when the table was unloaded.
Altering indexes
You can add a new column to an index or change most of the index clauses for an existing index with the ALTER INDEX statement, including BUFFERPOOL, CLOSE, COPY, PIECESIZE, PADDED or NOT PADDED, CLUSTER or NOT CLUSTER, and those clauses that are associated with storage space, free space, and group buffer pool caching.
With the ALTER INDEX statement, you can:
v Add a new column to an index; see Adding a new column to an index.
v Alter the PADDED or NOT PADDED attribute to change how varying-length columns are stored in the index; see Altering how varying-length index columns are stored on page 93.
v Alter the CLUSTER or NOT CLUSTER attribute to change how data is stored; see Altering the clustering index on page 93.
v Change the limit key for index-controlled partitioning to rebalance data among the partitions in a partitioned table space; see Rebalancing data in partitioned table spaces on page 93.
For other changes, you must drop and re-create the index as described in Dropping and redefining an index on page 94.
When you add a new column to an index, change how varying-length columns are stored in the index, or change the data type of a column in the index, DB2 creates a new version of the index, as described in Index versions on page 94.
The ALTER INDEX statement can be embedded in an application program or issued interactively. For details on the ALTER INDEX statement, see Chapter 5 of DB2 SQL Reference.
To add a ZIPCODE column to the table and the index, issue the following statements:
ALTER TABLE TRANS
  ADD COLUMN ZIPCODE CHAR(5);
ALTER INDEX STATE_IX
  ADD COLUMN (ZIPCODE);
COMMIT;
Because the ALTER TABLE and ALTER INDEX statements are executed within the same unit of work, DB2 can use the new index with the key STATE, ZIPCODE immediately for data access. Restriction: You cannot add a column to an index that enforces a primary key, unique key, or referential constraint.
partitions. The result is that data is balanced according to your specifications. You can rebalance data by changing the limit key values of all or most of the partitions. The limit key is the highest value of the index key for a partition. You roll the changes through the partitions one or more at a time, making relatively small parts of the data unavailable at any given time. For more information about rebalancing data for index-controlled partitioned table spaces by using the ALTER INDEX statement, see The Official Introduction to DB2 UDB for z/OS.
In addition, for index-controlled and table-controlled partitioned table spaces, you can use the REBALANCE option of the REORG TABLESPACE utility to shift data among the partitions. When you use REORG TABLESPACE with REBALANCE, DB2 automatically changes the limit key values for the partitioned table space.
Index versions
DB2 creates at least one index version each time you commit one of the following schema changes:
v Change the data type of a non-numeric column that is contained in one or more indexes by using the ALTER TABLE statement. DB2 creates a new index version for each index that is affected by this operation.
v Change the length of a VARCHAR column that is contained in one or more PADDED indexes by using the ALTER TABLE statement. DB2 creates a new index version for each index that is affected by this operation.
v Add a column to an index by using the ALTER INDEX statement. DB2 creates one new index version, because only one index is affected by this operation.
Exceptions: DB2 does not create an index version under the following circumstances:
v When the index was created with DEFINE NO
v When you extend the length of a varying character (varchar data type) or varying graphic (vargraphic data type) column that is contained in one or more NOT PADDED indexes
v When you specify the same data type and length that a column (which is contained in one or more indexes) currently has, such that its definition does not actually change
DB2 creates only one index version if, in the same unit of work, you make multiple schema changes to columns contained in the same index. If you make these same schema changes in separate units of work, each change results in a new index version.
Reorganizing indexes: DB2 uses index versions to maximize data availability. Index versions enable DB2 to keep track of schema changes and, simultaneously, provide users with access to data in altered columns that are contained in one or more indexes. When users retrieve rows from a table with an altered column, the data is displayed in the format that is described by the most recent schema definition, even though the data is not currently stored in this format. The most recent schema definition is associated with the current index version.
Although data availability is maximized by the use of index versions, performance might suffer because DB2 does not automatically reformat the data in the index to conform to the most recent schema definition. DB2 defers any reformatting of existing data until you reorganize the index and apply the schema changes with the REORG INDEX utility. The more ALTER statements (which affect indexes) that you commit between reorganizations, the more index versions DB2 must track, and the more performance can suffer.
Recommendation: Run the REORG INDEX utility as soon as possible after a schema change that affects an index to correct any performance degradation that might occur and to keep performance at its highest level.
Recycling index version numbers: DB2 can store up to 16 index versions, numbered sequentially from 0 to 15. (The next consecutive version number after 15 is 1. Version number 0 is never reused; it is reserved for the original version of the index.) The versions that are associated with schema changes that have not been applied yet are considered to be in use, and the range of used versions is stored in the catalog. In-use versions can be recovered from image copies of the table space, if necessary.
To determine the range of version numbers currently in use for an index, query the OLDEST_VERSION and CURRENT_VERSION columns of the SYSIBM.SYSINDEXES catalog table.
To prevent DB2 from running out of index version numbers (and to prevent subsequent ALTER statements from failing), you must recycle unused index version numbers regularly:
v For indexes defined as COPY YES, run the MODIFY RECOVERY utility. If all reusable version numbers (1 to 15) are currently in use, you must reorganize the index by running REORG INDEX before you can recycle the version numbers.
v For indexes defined as COPY NO, run the REORG TABLESPACE, REORG INDEX, LOAD REPLACE, or REBUILD INDEX utility. These utilities recycle the version numbers in the process of performing their primary functions.
Version numbers are considered to be unused if the schema changes that are associated with them have been applied and there are no image copies that contain data at those versions. For more information about managing index version numbers, see DB2 Utility Guide and Reference.
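For example, a query of the following form reports the in-use version range for the STATE_IX index from the earlier example (a sketch; substitute your own index name):

SELECT NAME, CREATOR, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE NAME = 'STATE_IX';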
Altering views
In many cases, changing user requirements can be satisfied by modifying an existing view. But no ALTER VIEW statement exists; the only way to change a view is by dropping the view, committing the drop, and re-creating the view. When you drop a view, DB2 also drops the dependent views. When you drop a view, DB2 invalidates application plans and packages that are dependent on the view and revokes the privileges of users who are authorized to use it. DB2 attempts to rebind the package or plan the next time it is executed, and you receive an error if you do not re-create the view. To tell how much rebinding and reauthorizing is needed if you drop a view, check the catalog tables in Table 21.
Table 21. Catalog tables to check after dropping a view
Catalog table        What to check
SYSIBM.SYSPLANDEP    Application plans dependent on the view
SYSIBM.SYSPACKDEP    Packages dependent on the view
SYSIBM.SYSVIEWDEP    Views dependent on the view
SYSIBM.SYSTABAUTH    Users authorized to use the view
For more information about defining and dropping views, see Chapter 5 of DB2 SQL Reference.
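For example, changing a view might look like the following statements (a sketch; the view name and definition are hypothetical):

DROP VIEW DSN8810.MYVIEW;
COMMIT;
CREATE VIEW DSN8810.MYVIEW AS
  SELECT PROJNO, PROJNAME, DEPTNO
    FROM DSN8810.PROJ;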
If SYSPROC.MYPROC is defined with SECURITY DEFINER, the external security environment for the stored procedure uses the authorization ID of the owner of the stored procedure. This example changes the procedure to use the authorization ID of the person running it:
ALTER PROCEDURE SYSPROC.MYPROC SECURITY USER;
This example changes the second function so that it returns null when any of its arguments are null:
ALTER FUNCTION SMITH.CENTER (CHAR(25), DEC(5,2), INTEGER)
  RETURNS NULL ON NULL CALL;
The partitioning index is the clustering index, and the data rows in the table are in order by the transaction date. The partitioning index controls the partitioning of the data in the table space.
v A nonpartitioning index is defined on the customer account ID:
CREATE INDEX IX2 ON TRANS(ACCTID);
DB2 usually accesses the transaction table through the customer account ID by using the nonpartitioning index IX2. The partitioning index IX1 is not used for data access and is wasting space. In addition, you have a critical requirement for availability on the table, and you want to be able to run an online REORG job at the partition level with minimal disruption to data availability. To save space and to facilitate reorganization of the table space, you can drop the partitioning index IX1, and you can replace the access index IX2 with a partitioned clustering index that matches the 13 data partitions in the table. Issue the following statements:
DROP INDEX IX1;
CREATE INDEX IX3 ON TRANS(ACCTID)
  PARTITIONED CLUSTER;
What happens:
v When you drop the partitioning index IX1, DB2 converts the table space from index-controlled partitioning to table-controlled partitioning. DB2 changes the high limit key value that was originally specified to the highest value for the key column.
v When you create the index IX3, DB2 creates a partitioned index with 13 partitions that match the 13 data partitions in the table. Each index partition contains the account numbers for the transactions during that month, and those account numbers are ordered within each partition. For example, partition 11 of the index matches the table partition that contains the transactions for November, 2002, and it contains the ordered account numbers of those transactions.
v You drop the nonpartitioning index IX2 because it has been replaced by IX3.
You can now run an online REORG at the partition level with minimal impact on availability. For example:
REORG TABLESPACE dbname.tsname PART 11 SHRLEVEL CHANGE
Running this utility reorganizes the data for partition 11 of dbname.tsname. The data rows are ordered within each partition to match the ordering of the clustering index.
Recommendation:
v Drop a partitioning index if it is used only to define partitions. When you drop a partitioning index, DB2 automatically converts the associated index-controlled partitioned table space to a table-controlled partitioned table space.
v You can create a data-partitioned secondary index (DPSI) as the clustering index so that the data rows are ordered within each partition of the table space to match the ordering of the keys of the DPSI.
v Create any new tables in a partitioned table space by using the PARTITION BY clause and the PARTITION ENDING AT clause in the CREATE TABLE statement to specify the partitioning key and the limit key values.
v Changing the qualifier for system data sets, which includes the DB2 catalog, directory, active and archive logs, and the BSDS
v Changing qualifiers for other databases and user data sets on page 102, which includes the work file database (DSNDB07), the default database (DSNDB04), and other DB2 databases and user databases
To concentrate on DB2-related issues, this procedure assumes that the catalog alias resides in the same user catalog as the one that is currently used. If the new catalog alias resides in a different user catalog, see DFSMS/MVS: Access Method Services for the Integrated Catalog for information about planning such a move.
If the data sets are managed by the Storage Management Subsystem (SMS), make sure that automatic class selection routines are in place for the new data set name.
See DFSMS/MVS: Access Method Services for the Integrated Catalog for more information.
Field name                         Comments
OUTPUT MEMBER NAME
CATALOG ALIAS
COPY 1 NAME and COPY 2 NAME        These are the bootstrap data set names.
COPY 1 PREFIX and COPY 2 PREFIX    These fields appear for both active and archive log prefixes.
99
Table 22. CLIST panels and fields to change to reflect new qualifier (continued)
Panel name   Field name         Comments
DSNTIPT      SAMPLE LIBRARY     This field allows you to specify a data set name for edited output of the installation CLIST. Avoid overlaying existing data sets by changing the middle node, NEW, to something else. The only members you use in this procedure are xxxxMSTR and DSNTIJUZ in the sample library.
DSNTIPO      PARAMETER MODULE   Change this value only if you want to preserve the existing member through the CLIST.
The output from the CLIST is a new set of tailored JCL with new cataloged procedures and a DSNTIJUZ job, which produces a new member.
3. Run the first two job steps of DSNTIJUZ to update the subsystem parameter load module. Unless you have specified a new name for the load module, make sure the output load module does not go to the SDSNEXIT or SDSNLOAD library used by the active DB2 subsystem.
If you are changing the subsystem ID in addition to the system data set name qualifier, you should also run job steps DSNTIZP and DSNTIZQ to update the DSNHDECP module (zparm parameter SSID). Make sure that the updated DSNHDECP module does not go to the SDSNEXIT or SDSNLOAD library used by the active DB2 subsystem. Use caution when changing the subsystem ID. For more information, see the discussion of panel DSNTIPM under the heading "MVS PARMLIB updates panel: DSNTIPM" for the PARMLIB members in which the subsystem ID has to be changed. Do not run the remaining steps of DSNTIJUZ.
This command allows DB2 to complete processing currently executing programs.
2. Enter the following command:
-START DB2 ACCESS(MAINT)
3. Use the following commands to make sure the subsystem is in a consistent state. See Part 4, Operation and recovery, on page 311 and Chapter 2 of DB2 Command Reference for more information about these commands.
-DISPLAY THREAD(*) TYPE(*)
-DISPLAY UTILITY (*)
-TERM UTILITY(*)
-DISPLAY DATABASE(*) RESTRICT
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT
-RECOVER INDOUBT
Correct any problems before continuing.
4. Stop DB2, using the following command:
-STOP DB2 MODE(QUIESCE)
5. Run the print log map utility (DSNJU004) to identify the current active log data set and the last checkpoint RBA. For information about the print log map utility, see Part 3 of DB2 Utility Guide and Reference. 6. Run DSN1LOGP with the SUMMARY (YES) option, using the last checkpoint RBA from the output of the print log map utility you ran in the previous step. For information about DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference. The report headed DSN1157I RESTART SUMMARY identifies active units of recovery or pending writes. If either situation exists, do not attempt to continue. Start DB2 with ACCESS(MAINT), use the necessary commands to correct the problem, and repeat steps 4 through 6 until all activity is complete.
2. Using IDCAMS, change the active log names. Active log data sets are named oldcat.LOGCOPY1.COPY01 for the cluster component and oldcat.LOGCOPY1.COPY01.DATA for the data component.
ALTER oldcat.LOGCOPY1.* NEWNAME (newcat.LOGCOPY1.*)
ALTER oldcat.LOGCOPY1.*.DATA NEWNAME (newcat.LOGCOPY1.*.DATA)
ALTER oldcat.LOGCOPY2.* NEWNAME (newcat.LOGCOPY2.*)
ALTER oldcat.LOGCOPY2.*.DATA NEWNAME (newcat.LOGCOPY2.*.DATA)
During startup, DB2 compares the newcat value with the value in the system parameter load module, and they must be the same. 2. Using the IDCAMS REPRO command, replace the contents of BSDS2 with the contents of BSDS01. 3. Run the print log map utility (DSNJU004) to verify your changes to the BSDS. 4. At a convenient time, change the DD statements for the BSDS in any of your offline utilities to use the new qualifier.
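An IDCAMS job step for step 2 might look like the following (a sketch; the actual BSDS data set names depend on your naming convention):

REPRO INDATASET (newcat.BSDS01) OUTDATASET (newcat.BSDS02)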
Step 6: Start DB2 with the new xxxxMSTR and load module
Use the START DB2 command with the new load module name as shown here:
-START DB2 PARM(new_name)
If you stopped DSNDB01 or DSNDB06 in Step 2: Stop DB2 with no outstanding activity on page 100, you must explicitly start them in this step.
v DSNDB04 (default database)
v DSNDDF (communications database)
v DSNRLST (resource limit facility database)
v DSNRGFDB (the database for data definition control)
v Any other application databases that use the old high-level qualifier
At this point, the DB2 catalog tables SYSSTOGROUP, SYSTABLEPART, and SYSINDEXPART contain information about the old integrated user catalog alias. To update those tables with the new alias, you must use the following procedures. Until you do so, the underlying resources are not available.
The following procedures are described separately:
v Changing your work database to use the new high-level qualifier
v Changing user-managed objects to use the new qualifier on page 104
v Changing DB2-managed objects to use the new qualifier on page 104
Table spaces and indexes that span more than one data set require special procedures. Partitioned table spaces can have different partitions allocated to different DB2 storage groups. Nonpartitioned table spaces or indexes only have the additional data sets to rename (those with the lowest level name of A002, A003, and so on).
4. Define the clusters, using the following access method services commands. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001 NEWNAME newcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001
ALTER oldcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001 NEWNAME newcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001
Repeat the preceding statements (with the appropriate table space name) for as many table spaces as you use. 5. Create the table spaces in DSNDB07.
CREATE TABLESPACE DSN4K01
  IN DSNDB07
  BUFFERPOOL BP0
  CLOSE NO
  USING VCAT DSNC810;

CREATE TABLESPACE DSN32K01
  IN DSNDB07
  BUFFERPOOL BP32K
  CLOSE NO
  USING VCAT DSNC810;
2. Use the following SQL ALTER TABLESPACE and ALTER INDEX statements with the USING clause to specify the new qualifier:
ALTER TABLESPACE dbname.tsname USING VCAT newcat;
ALTER INDEX creator.index-name USING VCAT newcat;
Repeat this step for all the objects in the database. 3. Using IDCAMS, rename the data sets to the new qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 NEWNAME newcat.DSNDBC.dbname.*.y0001.A001
ALTER oldcat.DSNDBD.dbname.*.y0001.A001 NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
4. Start the table spaces and index spaces, using the following command:
-START DATABASE(dbname) SPACENAM(*)
6. Using SQL, verify that you can access the data.
Renaming the data sets can be done while DB2 is down. The renaming steps are included in this procedure because the names must be generated for each database, table space, and index space that is to change.
Note: Some databases (such as DSNRTSDB) will have to be explicitly stopped to allow any alterations. For these, use the following command:
-STOP DATABASE(dbname)
b. Convert to user-managed data sets with the USING VCAT clause of the SQL ALTER TABLESPACE and ALTER INDEX statements, as shown in the following statements. Use the new catalog name for VCAT.
ALTER TABLESPACE dbname.tsname USING VCAT newcat;
ALTER INDEX creator.index-name USING VCAT newcat;
The DROP succeeds only if all the objects that referenced this STOGROUP are dropped or converted to user-managed (USING VCAT clause). 3. Re-create the storage group using the correct volumes and the new alias, using the following statement:
CREATE STOGROUP stogroup-name VOLUMES (VOL1,VOL2) VCAT newcat;
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the new high-level qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 NEWNAME newcat.DSNDBC.dbname.*.y0001.A001
ALTER oldcat.DSNDBD.dbname.*.y0001.A001 NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
If your table space or index space spans more than one data set, be sure to rename those data sets also. 5. Convert the data sets back to DB2-managed data sets by using the new DB2 storage group. Use the following SQL ALTER TABLESPACE and ALTER INDEX statements:
ALTER TABLESPACE dbname.tsname
  USING STOGROUP stogroup-name
  PRIQTY priqty SECQTY secqty;
ALTER INDEX creator.index-name
  USING STOGROUP stogroup-name
  PRIQTY priqty SECQTY secqty;
If you specify USING STOGROUP without specifying the PRIQTY and SECQTY clauses, DB2 uses the default values. For more information about USING STOGROUP, see DB2 SQL Reference. 6. Start each database, using the following command:
-START DATABASE(dbname) SPACENAM(*)
v Copying a relational database on page 109 describes copying a user-managed relational database, with its object definitions and its data, from one DB2 subsystem to another, on the same or different z/OS system.
v Copying an entire DB2 subsystem on page 109 describes copying a DB2 subsystem from one z/OS system to another. Copying a subsystem includes these items:
  - All the user data and object definitions
  - The DB2 system data sets:
    - The log
    - The bootstrap data set
    - Image copy data sets
    - The DB2 catalog
    - The integrated catalog that records all the DB2 data sets
the version information in the catalog and directory for the target table space or index. For instructions, see Part 3 of DB2 Utility Guide and Reference.
You might also want to use the following tools to move DB2 data:
v The DB2 DataPropagator is a licensed program that can extract data from DB2 tables, DL/I databases, VSAM files, and sequential files. For instructions, see Loading data from DL/I on page 65.
v DFSMS, which contains the following functional components:
  - Data Set Services (DFSMSdss): Use DFSMSdss to copy data between disk devices. For instructions, see Data Facility Data Set Services: User's Guide and Reference. You can use online panels to control this, through the Interactive Storage Management Facility (ISMF) that is available with DFSMS; for instructions, refer to z/OS DFSMSdfp Storage Administration Reference.
  - Data Facility Product (DFSMSdfp): This is a prerequisite for DB2. You can use access method services EXPORT and IMPORT commands with DB2 data sets when control interval processing (CIMODE) is used. For instructions on EXPORT and IMPORT, see DFSMS/MVS: Access Method Services for the Integrated Catalog.
  - Hierarchical Storage Manager (DFSMShsm): With the MIGRATE, HMIGRATE, or HRECALL commands, which can specify specific data set names, you can move data sets from one disk device type to another within the same DB2 subsystem. Do not migrate the DB2 directory, DB2 catalog, and the work file database (DSNDB07). Do not migrate any data sets that are in use frequently, such as the bootstrap data set and the active log. With the MIGRATE VOLUME command, you can move an entire disk volume from one device type to another. The program can be controlled using online panels, through the Interactive Storage Management Facility (ISMF). For instructions, see z/OS DFSMShsm Managing Your Own Data.
Table 23 shows which tools are applicable to which operations.
Table 23. Tools applicable to data-moving operations
Tool                   Moving a data set
REORG and LOAD         Yes
COPY and RECOVER       Yes
DSNTIAUL               Yes
DSN1COPY               Yes
DataRefresher or DXT   Yes
DFSMSdss               Yes
DFSMSdfp               Yes
DFSMShsm               Yes
Some of the listed tools rebuild the table space and index space data sets, and they therefore generally require longer to execute than the tools that merely copy them. The tools that rebuild are REORG and LOAD, RECOVER and REBUILD, DSNTIAUL, and DataRefresher. The tools that merely copy data sets are DSN1COPY, DFSMSdss, DFSMSdfp EXPORT and IMPORT, and DFSMShsm.
DSN1COPY is fairly efficient in use, but somewhat complex to set up. It requires a separate job step to allocate the target data sets, one job step for each data set to copy the data, and a step to delete or rename the source data sets. DFSMSdss, DFSMSdfp, and DFSMShsm all simplify the job setup significantly. Although less efficient in execution, RECOVER is easy to set up if image copies and recover jobs already exist. You might only need to redefine the data sets involved and recover the objects as usual.
2. Prevent access to the data sets you are going to move, by entering the following command:
-STOP DATABASE(dbname) SPACENAM(*)
3. Enter the ALTER TABLESPACE and ALTER INDEX SQL statements to use the new storage group name, as shown in the following statements:
ALTER TABLESPACE dbname.tsname USING STOGROUP stogroup-name; ALTER INDEX creator.index-name USING STOGROUP stogroup-name;
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the new high-level qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J. If you have run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE on any table spaces or index spaces, the fifth-level qualifier might be J0001.
5. Start the database for utility processing only, using the following command:
-START DATABASE(dbname) SPACENAM(*) ACCESS(UT)
6. Use the REORG or the RECOVER utility on the table space or index space, or use the REBUILD utility on the index space. 7. Start the database, using the following command:
-START DATABASE(dbname) SPACENAM(*)
Data set excess: Allows for unused space within allocated data sets, occurring as unused tracks or part of a track at the end of any data set. The amount of unused space depends upon the volatility of the data, the amount of space management done, and the size of the data set. Generally, large data sets can be managed more closely, and those that do not change in size are easier to manage. The factor can range without limit above 1.02. A typical value is 1.10.
Indexes: Allows for storage for indexes to data. For data with no indexes, the factor is 1.0. For a single index on a short column, the factor is 1.01. If every column is indexed, the factor can be greater than 2.0. A typical value is 1.20. For further discussion of the factor, see Calculating the space required for an index on page 117.
Table 25 shows calculations of the multiplier M for three different database designs:
v The tight design is carefully chosen to save space and allows only one index on a single, short field.
v The loose design allows a large value for every factor, but still well short of the maximum. Free space adds 30% to the estimate, and indexes add 40%.
v The medium design has values between the other two. You might want to use these values in an early stage of database design.
In each design, the device type is assumed to be a 3390. Therefore, the unusable-space factor is 1.15. M is always the product of the five factors.
Table 25. Calculations for different database designs
Factor            Tight design   Medium design   Loose design
Record overhead   1.02           1.10            1.30
Free space        1.00           1.05            1.30
Unusable space    1.15           1.15            1.15
Data set excess   1.02           1.10            1.30
Indexes           1.02           1.20            1.40
= Multiplier M    1.22           1.75            3.54
In addition to the space for your data, external storage devices are required for:
v Image copies of data sets, which can be on tape
v System libraries, system databases, and the system log
v Temporary work files for utility and sort jobs
A rough estimate of the additional external storage needed is three times the amount calculated for disk storage.
Table 26. Maximum record size (in bytes)
EDITPROC   4-KB page   8-KB page   16-KB page   32-KB page
NO         4056        8138        16330        32714
YES        4046        8128        16320        32704
Creating a table using CREATE TABLE LIKE in a table space of a larger page size changes the specification of LONG VARCHAR to VARCHAR and LONG VARGRAPHIC to VARGRAPHIC. You can also use CREATE TABLE LIKE to create a table with a smaller page size in a table space if the maximum record size is within the allowable record size of the new table space.
v Let CEILING be the operation of rounding a real number up to the next highest integer.
v Let number of records be the total number of records to be loaded.
v Let average record size be the sum of the lengths of the fields in each record, using an average value for varying-length fields, and including the following amounts for overhead:
  - 8 bytes for the total record
  - 1 byte for each field that allows nulls
  - 2 bytes for each varying-length field
  See the CREATE TABLE statement in Chapter 5 of DB2 SQL Reference for information about how many bytes are required for different column types.
v Let percsave be the percentage of kilobytes saved by compression (as reported by the DSN1COMP utility in message DSN1940I).
v Let compression ratio be percsave/100.
Then calculate as follows:
1. Usable page size is the page size minus a number of bytes of overhead (that is, 4 KB - 40 for 4-KB pages, 8 KB - 54 for 8-KB pages, 16 KB - 54 for 16-KB pages, or 32 KB - 54 for 32-KB pages) multiplied by (100 - p) / 100, where p is the value of PCTFREE. If your average record size is less than 16, then usable page size is 255 (maximum records per page) multiplied by average record size multiplied by (100 - p) / 100.
2. Records per page is MIN(MAXROWS, FLOOR(usable page size / average record size)), but cannot exceed 255 and cannot exceed the value you specify for MAXROWS.
3. Pages used is 2 + CEILING(number of records / records per page).
4. Total pages is FLOOR(pages used × (1 + fp) / fp), where fp is the (nonzero) value of FREEPAGE. If FREEPAGE is 0, then total pages is equal to pages used. (See Free space on page 111 for more information about FREEPAGE.) If you are using data compression, you need additional pages to store the dictionary. See Calculating the space required for a dictionary on page 116 to figure how many pages the dictionary requires.
5. Estimated number of kilobytes required for a table:
   v If you do not use data compression, the estimated number of kilobytes is total pages × page size (4 KB, 8 KB, 16 KB, or 32 KB).
   v If you use data compression, the estimated number of kilobytes is total pages × page size (4 KB, 8 KB, 16 KB, or 32 KB) × (1 - compression ratio).
For example, consider a table space containing a single table with the following characteristics:
v Number of records = 100000
v Average record size = 80 bytes
v Page size = 4 KB
v PCTFREE = 5 (5% of space is left free on each page)
v FREEPAGE = 20 (one page is left free for each 20 pages used)
v MAXROWS = 255
If the data is not compressed, you get the following results:
v Usable page size = 4056 × 0.95 = 3853 bytes
v Records per page = MIN(MAXROWS, FLOOR(3853 / 80)) = 48
v Pages used = 2 + CEILING(100000 / 48) = 2086
v Total pages = FLOOR(2086 × 21 / 20) = 2190
v Estimated number of kilobytes = 2190 × 4 = 8760
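As a cross-check, you can reproduce the arithmetic of the uncompressed example with a single SQL statement (a sketch; the constants are the example values, and SYSIBM.SYSDUMMY1 merely supplies one row):

-- Computes usable page size, records per page, pages used, and total
-- pages in one expression; returns 8760, matching the worked example.
SELECT FLOOR((2 + CEILING(100000 / FLOOR((4056 * (100 - 5)) / 100.0 / 80)))
             * (20 + 1) / 20.0) * 4 AS EST_KBYTES
  FROM SYSIBM.SYSDUMMY1;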
If the data is compressed, multiply the estimated number of kilobytes for an uncompressed table by (1 - compression ratio) for the estimated number of kilobytes required for the compressed table.
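For example, if DSN1COMP reports percsave = 60 (a hypothetical value), the compression ratio is 60/100 = 0.6, and the estimate for the table in the previous example becomes 8756 × (1 - 0.6) = 3502.4, or about 3503 KB, plus the pages needed for the compression dictionary.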
Disk requirements
This section helps you calculate the disk requirements for a dictionary associated with a compressed nonsegmented table space and for a dictionary associated with a compressed segmented table space.

Nonsegmented table space: The dictionary contains 4096 entries in most cases. This means you need to allocate an additional sixteen 4-KB pages, eight 8-KB pages, four 16-KB pages, or two 32-KB pages. Although it is possible that your dictionary can contain fewer entries, allocate enough space to accommodate a dictionary with 4096 entries. For 32-KB pages, one segment (minimum of four pages) is sufficient to contain the dictionary. Use Table 27 to determine how many 4-KB pages, 8-KB pages, 16-KB pages, or 32-KB pages to allocate for the dictionary of a compressed nonsegmented table space.
Table 27. Pages required for the dictionary of a compressed nonsegmented table space
Table space        Dictionary size (entries)
page size (KB)     512    1024   2048   4096   8192
4                  2      4      8      16     32
8                  1      2      4      8      16
16                 1      1      2      4      8
32                 1      1      1      2      4
Segmented table space: The size of the dictionary depends on the size of your segments. It is recommended that you assume a dictionary of 4096 entries. Use Table 28 to determine how many 4-KB pages to allocate for the dictionary of a compressed segmented table space.
Table 28. Pages required for the dictionary of a compressed segmented table space
Segment size       Dictionary size (entries)
(4-KB pages)       512            1024           2048           4096           8192
4                  4              4              8              16             32
8                  8              8              8              16             32
12                 12             12             12             24             36
16                 Segment size   Segment size   Segment size   Segment size   Segment size
[Figure: Sample index structure. A root page at level 2 records the highest key of nonleaf pages A and B. Each nonleaf page at level 1 records the highest key of the leaf pages it points to (for example, page 1 through page X and page Z). Each leaf page contains keys paired with record IDs that point to rows in the table.]
If you insert data with a constantly increasing key, DB2 adds the new highest key to the top of a new page. Be aware, however, that DB2 treats nulls as the highest value. When the existing high key contains a null value in the first column that differentiates it from the new key that is inserted, the inserted nonnull index entries cannot take advantage of the highest-value split.
The length of a varying-length column is the maximum length if the index is padded. Otherwise, if an index is not padded, estimate the length of a varying-length column to be the average length of the column data, and add a two-byte length field to the estimate. You can retrieve the value of the AVGKEYLEN column in the SYSIBM.SYSINDEXES catalog table to determine the average length of keys within an index.

The following index calculations are intended only to help you estimate the storage required for an index. Because there is no way to predict the exact number of duplicate keys that can occur in an index, the results of these calculations are not absolute. It is possible, for example, that for a nonunique index, more index entries than the calculations indicate might fit on an index page.

The calculations are divided into two cases: using a unique index and using a nonunique index. In the following calculations, let:
v k = the length of the index key.
v n = the average number of data records per distinct key value of a nonunique index. For example:
  a = number of data records per index
  b = number of distinct key values per index
  n = a / b
v f = the value of PCTFREE.
v p = the value of FREEPAGE.
v r = record identifier (RID) length. Let r = 4 for indexes on nonlarge table spaces and r = 5 for indexes on large table spaces (defined with DSSIZE greater than or equal to 4 GB) and on auxiliary tables.
v FLOOR = the operation of discarding the decimal portion of a real number.
v CEILING = the operation of rounding a real number up to the next highest integer.
v MAX = the operation of selecting the highest integer value.

Calculate pages for a unique index: Use the following calculations to estimate the number of leaf and nonleaf pages in a unique index.

Calculate the total leaf pages:
1. Space per key = k + r + 3
2. Usable space per page = FLOOR((100 - f) × 4038 / 100)
3. Entries per page = FLOOR(usable space per page / space per key)
4. Total leaf pages = CEILING(number of table rows / entries per page)

Calculate the total nonleaf pages:
1. Space per key = k + 7
2. Usable space per page = FLOOR(MAX(90, (100 - f)) × (4046 / 100))
3. Entries per page = FLOOR(usable space per page / space per key)
4. Minimum child pages = MAX(2, (entries per page + 1))
5. Level 2 pages = CEILING(total leaf pages / minimum child pages)
6. Level 3 pages = CEILING(level 2 pages / minimum child pages)
7. Level x pages = CEILING(previous level pages / minimum child pages)
8. Total nonleaf pages = (level 2 pages + level 3 pages + ... + level x pages, until the number of level x pages = 1)

Calculate pages for a nonunique index: Use the following calculations to estimate the number of leaf and nonleaf pages for a nonunique index.

Calculate the total leaf pages:
1. Space per key = 4 + k + (n × (r + 1))
2. Usable space per page = FLOOR((100 - f) × 4038 / 100)
3. Key entries per page = n × (usable space per page / space per key)
4. Remaining space per page = usable space per page - (key entries per page / n) × space per key
5. Data records per partial entry = FLOOR((remaining space per page - (k + 4)) / 5)
6. Partial entries per page = (n / CEILING(n / data records per partial entry)) if data records per partial entry >= 1, or 0 if data records per partial entry < 1
7. Entries per page = MAX(1, (key entries per page + partial entries per page))
8. Total leaf pages = CEILING(number of table rows / entries per page)

Calculate the total nonleaf pages:
1. Space per key = k + r + 7
2. Usable space per page = FLOOR(MAX(90, (100 - f)) × (4046 / 100))
3. Entries per page = FLOOR(usable space per page / space per key)
4. Minimum child pages = MAX(2, (entries per page + 1))
5. Level 2 pages = CEILING(total leaf pages / minimum child pages)
6. Level 3 pages = CEILING(level 2 pages / minimum child pages)
7. Level x pages = CEILING(previous level pages / minimum child pages)
8. Total nonleaf pages = (level 2 pages + level 3 pages + ... + level x pages, until the number of level x pages = 1)

Calculate the total space requirement: Finally, calculate the number of kilobytes required for an index built by LOAD.
1. Free pages = FLOOR(total leaf pages / p), or 0 if p = 0
2. Tree pages = MAX(2, (total leaf pages + total nonleaf pages))
3. Space map pages = CEILING((tree pages + free pages) / 8131)
4. Total index pages = MAX(4, (1 + tree pages + free pages + space map pages))
5. Total space requirement = 4 × (total index pages + 2)

In the following example of the entire calculation, assume that an index is defined with these characteristics:
v It is unique.
v The table it indexes has 100000 rows.
v The key is a single column defined as CHAR(10) NOT NULL.
v The value of PCTFREE is 5.
v The value of FREEPAGE is 4.
The calculations are shown in Table 29.
Table 29. The total space requirement for an index
Quantity                           Calculation                                                Result
Length of key                      k                                                          10
Average number of duplicate keys   n                                                          1
PCTFREE                            f                                                          5
FREEPAGE                           p                                                          4

Calculate total leaf pages
Space per key                      k + r + 3                                                  17
Usable space per page              FLOOR((100 - f) × 4038 / 100)                              3836
Entries per page                   FLOOR(usable space per page / space per key)               225
Total leaf pages                   CEILING(number of table rows / entries per page)           445

Calculate total nonleaf pages
Space per key                      k + 7                                                      17
Usable space per page              FLOOR(MAX(90, (100 - f)) × (4046 / 100))                   3843
Entries per page                   FLOOR(usable space per page / space per key)               226
Minimum child pages                MAX(2, (entries per page + 1))                             227
Level 2 pages                      CEILING(total leaf pages / minimum child pages)            2
Level 3 pages                      CEILING(level 2 pages / minimum child pages)               1
Total nonleaf pages                (level 2 pages + level 3 pages + ... + level x pages
                                   until the number of level x pages = 1)                     3

Calculate total space required
Free pages                         FLOOR(total leaf pages / p), or 0 if p = 0                 111
Tree pages                         MAX(2, (total leaf pages + total nonleaf pages))           448
Space map pages                    CEILING((tree pages + free pages) / 8131)                  1
Total index pages                  MAX(4, (1 + tree pages + free pages + space map pages))    561
TOTAL SPACE REQUIRED, in KB        4 × (total index pages + 2)                                2252
6. Reread the sections that describe the functions that you expect to use. Ensure that you can achieve the objectives that you have set, or adjust your plan accordingly.
[Figure: A process is represented by a primary ID, by secondary IDs 1 through n, and by an SQL ID, which together control its access to DB2 data.]
One of the ways that DB2 controls access to data is through the use of identifiers (IDs). Four types of IDs exist:

Primary authorization ID
  Generally, the primary authorization ID identifies a process. For example, statistics and performance trace records use a primary authorization ID to identify a process.

Secondary authorization ID
  A secondary authorization ID, which is optional, can hold additional privileges that are available to the process. For example, a secondary authorization ID can be a Resource Access Control Facility (RACF) group ID.

SQL ID
  An SQL ID holds the privileges that are exercised when certain dynamic SQL statements are issued. The SQL ID can be set equal to the primary ID or any of the secondary IDs. If an authorization ID of a process has SYSADM authority, the process can set its SQL ID to any authorization ID.

RACF ID
  The RACF ID is generally the source of the primary and secondary authorization IDs (RACF groups). When you use the RACF access control module or multilevel security, the RACF ID is used directly.
Example: You might know that someone else is using your ID. However, DB2 recognizes only the ID; DB2 does not recognize that someone else is using your ID. Therefore, this book uses phrases like "an ID owns an object" instead of "a person owns an object" to discuss access control within DB2.
Multilevel security prevents unauthorized users from accessing information at a higher classification than their authorization, and prevents users from declassifying information. Using multilevel security with row-level granularity, you can define security for DB2 objects and perform security checks, including row-level security checks. Row-level security checks allow you to control which users have authorization to view, modify, or perform other actions on specific rows of data. For more information about multilevel security, see Multilevel security on page 192.
DB2 controls access to its objects by a set of privileges. Each privilege allows a specific action to be taken on some object. Figure 8 shows the four primary ways within DB2 to give an ID access to data.1
[Figure 8: The primary ways within DB2 to give an ID access to data]
As a security planner, you must be aware of every way to allow access to data. Before you write a security plan, see the following sections:
v Explicit privileges and authorities
v Implicit privileges of ownership on page 145
v Privileges exercised through a plan or a package on page 148
v Access control for user-defined functions and stored procedures on page 154
v Multilevel security on page 192
v Data encryption through built-in functions on page 207

DB2 has primary authorization IDs, secondary authorization IDs, and SQL IDs. Some privileges can be exercised only by one type of ID; other privileges can be exercised by more than one. To decide which IDs should hold specific privileges, see Which IDs can exercise which privileges on page 161.

After you decide which IDs should hold specific privileges, you can implement a security plan. Before you begin your plan, you can see what others have done in Matching job titles with privileges on page 171 and Examples of granting and revoking privileges on page 173.

The DB2 catalog records the privileges that IDs are granted and the objects that IDs own. To check the implementation of your security plan, see Finding catalog information about privileges on page 187.
1. Certain authorities are assigned when DB2 is installed, and can be reassigned by changing the subsystem parameter (DSNZPARM). You can consider changing the DSNZPARM value to be a fifth way of granting data access in DB2.
Privilege
  A privilege allows the capability to perform a specific operation, sometimes on a specific object.

Explicit privilege
  An explicit privilege is a specific type of privilege. Each explicit privilege has a name and is the result of a GRANT statement or a REVOKE statement; the SELECT privilege is an example.

Administrative authority
  An administrative authority is a set of privileges, often covering a related set of objects. Authorities often include privileges that are not explicit, have no name, and cannot be specifically granted. For example, when an ID is granted the SYSOPR administrative authority, the ID is implicitly granted the ability to terminate any utility job.

Privileges and authorities are held by authorization IDs.
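For example, the following statements illustrate the difference in practice; the authorization IDs are hypothetical:

GRANT SELECT ON DSN8810.EMP TO PGMR1;   -- an explicit privilege; PGMR1 is a hypothetical ID
GRANT SYSOPR TO OPER1;                  -- an administrative authority; OPER1 is a hypothetical ID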
Authorization IDs
Every process that connects to or signs on to DB2 is represented by a set of one or more DB2 short identifiers that are called authorization IDs. Authorization IDs can be assigned to a process by default procedures or by user-written exit routines. Methods of assigning those IDs are described in detail in Chapter 11, Controlling access to a DB2 subsystem, on page 231; see especially Table 70 on page 233 and Table 71 on page 234.

When authorization IDs are assigned, every process receives exactly one ID that is called the primary authorization ID. All other IDs are secondary authorization IDs. Furthermore, one ID (either primary or secondary) is designated as the current SQL ID. You can change the value of the SQL ID during your session.

Example of changing the SQL ID: Suppose that ALPHA is your primary authorization ID or one of your secondary authorization IDs. You can make it your current SQL ID by issuing the following SQL statement:
SET CURRENT SQLID = 'ALPHA';
If you issue the statement through the distributed data facility (DDF), ALPHA must be one of the IDs that are associated with your process at the location where the statement runs. Your primary ID can be translated before it is sent to a remote location. Secondary IDs are associated with your process at the remote location. The current SQL ID, however, is not translated. For more information about authorization IDs and remote locations, see Controlling requests from remote applications on page 238.

An ID with SYSADM authority can set the current SQL ID to any string whose length is less than or equal to 8 bytes.

Authorization IDs that are sent to a connecting server must conform to the security management product guidelines of the server. See the documentation of the security product for the connecting server.
This section describes the explicit privileges that you can grant on objects. The descriptions of the explicit privileges are grouped by object and usage:
v Collections in Table 30 on page 135
v Databases in Table 31 on page 135
v Packages in Table 32 on page 136
v Plans in Table 33 on page 136
v Routines in Table 34 on page 136
v Schemas in Table 35 on page 136
v Systems in Table 36 on page 136
v Tables and views in Table 37 on page 137
v Usage in Table 38 on page 138
v Use in Table 39 on page 138
Table 30 shows the collection privileges that DB2 allows.
Table 30. Explicit collection privileges
Collection privilege   Operations allowed for a named package collection
CREATE IN              The BIND PACKAGE subcommand, to name the collection
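For example, the following statement grants the CREATE IN privilege on a collection. The collection name and authorization ID are hypothetical:

GRANT CREATE IN COLLECTION TESTCOLL TO PGMR1;   -- TESTCOLL and PGMR1 are hypothetical names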
Table 31 lists the explicit database privileges, which include LOAD, RECOVERDB, REORG, REPAIR, STARTDB, STATS, and STOPDB, and the operations that each allows on a named database.
The explicit schema privileges (Table 35) include DROPIN, which allows dropping of objects in the designated schemas.
Table 36. Explicit system privileges (continued)
System privilege   Operations allowed on the system
BINDADD            The BIND subcommand with the ADD option, to create new plans and packages
BINDAGENT          The BIND, REBIND, and FREE subcommands, and the DROP PACKAGE statement, to bind, rebind, or free a plan or package, or copy a package, on behalf of the grantor. The BINDAGENT privilege is intended for separation of function, not for added security. A bind agent with the EXECUTE privilege might be able to gain all the authority of the grantor of BINDAGENT.
BSDS               The RECOVER BSDS command, to recover the bootstrap data set
CREATEALIAS        The CREATE ALIAS statement, to create an alias for a table or view name
CREATEDBA          The CREATE DATABASE statement, to create a database and have DBADM authority over it
CREATEDBC          The CREATE DATABASE statement, to create a database and have DBCTRL authority over it
CREATESG           The CREATE STOGROUP statement, to create a storage group
CREATETMTAB        The CREATE GLOBAL TEMPORARY TABLE statement, to define a created temporary table
DISPLAY            The DISPLAY ARCHIVE, DISPLAY BUFFERPOOL, DISPLAY DATABASE, DISPLAY LOCATION, DISPLAY LOG, DISPLAY THREAD, and DISPLAY TRACE commands, to display system information
MONITOR1           Receive trace data that is not potentially sensitive
MONITOR2           Receive all trace data
RECOVER            The RECOVER INDOUBT command, to recover threads
STOPALL            The STOP DB2 command, to stop DB2
STOSPACE           The STOSPACE utility, to obtain data about space usage
TRACE              The START TRACE, STOP TRACE, and MODIFY TRACE commands, to control tracing
Table 37 shows the table and view privileges that DB2 allows.
Table 37. Explicit table and view privileges
Table or view privilege   SQL statements allowed for a named table or view
ALTER                     ALTER TABLE, to change the table definition
DELETE                    DELETE, to delete rows
INDEX                     CREATE INDEX, to create an index on the table
INSERT                    INSERT, to insert rows
REFERENCES                ALTER or CREATE TABLE, to add or remove a referential constraint referring to the named table or to a list of columns in the table
SELECT                    SELECT, to retrieve data from the table
TRIGGER                   CREATE TRIGGER, to define a trigger on a table
Table 37. Explicit table and view privileges (continued)
Table or view privilege   SQL statements allowed for a named table or view
UPDATE                    UPDATE, to update all columns or a specific list of columns
GRANT ALL                 SQL statements of all table privileges
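For example, the following statement grants the SELECT privilege and a column-specific UPDATE privilege on the sample employee table. The grantee ID is hypothetical:

GRANT SELECT, UPDATE(SALARY, BONUS)
  ON TABLE DSN8810.EMP
  TO PAYOPR1;   -- PAYOPR1 is a hypothetical ID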
Privileges needed for statements, commands, and utility jobs: For lists of all privileges and authorities that let you perform the following actions, consult the appropriate resource:
v To execute a particular SQL statement, see the description of the statement in Chapter 5 of DB2 SQL Reference.
v To issue a particular DB2 command, see the description of the command in Chapter 2 of DB2 Command Reference.
v To run a particular type of utility job, see the description of the utility in DB2 Utility Guide and Reference.
Administrative authorities
Figure 9 on page 139 shows how privileges are grouped into authorities and how the authorities form a branched hierarchy. Table 41 on page 140 supplements Figure 9 on page 139 and includes the capabilities of each authority.
Figure 9. Individual privileges of administrative authorities. Each authority includes the privileges in its box and all of the privileges of each authority in the boxes that are beneath it. Installation SYSOPR authority is an exception; it can do some things that SYSADM and SYSCTRL cannot do.
Table 41 on page 140 shows DB2 authorities and the actions that they are allowed to perform.
Table 41. DB2 authorities
Authority: SYSOPR
System operator:
v Can issue most DB2 commands
v Cannot issue ARCHIVE LOG, START DATABASE, STOP DATABASE, and RECOVER BSDS
v Can terminate any utility job
v Can run the DSN1SDMP utility
If held with the GRANT option, SYSOPR can grant these privileges to others.

Authority: Installation SYSOPR
One or two IDs are assigned this authority when DB2 is installed. They have all the privileges of SYSOPR, plus:
v The authority is not recorded in the DB2 catalog. Therefore, the catalog does not need to be available to check the Installation SYSOPR authority.
v No ID can revoke the authority; it can be removed only by changing the module that contains the subsystem initialization parameters (typically DSNZPARM).
IDs with Installation SYSOPR authority can also:
v Access DB2 when the subsystem is started with ACCESS(MAINT).
v Run all allowable utilities on the directory and catalog databases (DSNDB01 and DSNDB06).
v Run the REPAIR utility with the DBD statement.
v Start and stop the database that contains the application registration table (ART) and the object registration table (ORT). For more information on these tables, see Chapter 10, Controlling access through a closed application, on page 215.
v Issue dynamic SQL statements that are not controlled by the DB2 governor.
v Issue a START DATABASE command to recover objects that have LPL entries or group buffer pool recovery-pending status. These IDs cannot change the access mode.

Authority: PACKADM
Package administrator:
v Has all package privileges on all packages in specific collections
v Has the CREATE IN privilege on those specific collections
If PACKADM authority is held with the GRANT option, PACKADM can grant those privileges to others. If the installation option BIND NEW PACKAGE is BIND, PACKADM also has the privilege to add new packages or new versions of existing packages.

Authority: DBMAINT
Database maintenance authority grants privileges over a specific database to an ID. DBMAINT can perform the following actions within a specific database:
v Create objects
v Run utilities that don't change data
v Issue commands
v Terminate all utilities on the database except DIAGNOSE, REPORT, and STOSPACE
If held with the GRANT option, DBMAINT can grant those privileges to others.
Table 41. DB2 authorities (continued)
Authority: DBCTRL
Database controller authority includes DBMAINT privileges over a specific database. Additionally, DBCTRL has database privileges to run utilities that can change the data. If the value of field DBADM CREATE AUTH on installation panel DSNTIPP was set to YES during DB2 installation, the ID with DBCTRL authority can create an alias for another user ID on any table in the database. If held with the GRANT option, DBCTRL can grant those privileges to others.

Authority: DBADM
Database administrator authority includes DBCTRL privileges over a specific database. Additionally, DBADM has privileges to access any tables in a specific database by using SQL statements. DBADM can also perform the following actions:
v Drop or alter any table space, table, or index in the database
v Issue a COMMENT, LABEL, or LOCK TABLE statement for any table in the database
v Issue a COMMENT statement for any index in the database
If the value of field DBADM CREATE AUTH on installation panel DSNTIPP was set to YES during DB2 installation, an ID with DBADM authority can create the following objects:
v A view for another ID. The view must be based on at least one table, and that table must be in the database under DBADM authority. For more information about creating views, see the description of the CREATE VIEW statement in DB2 SQL Reference.
v An alias for another ID on any table in the database.
An ID with DBADM authority on one database can create a view on tables and views in that database and other databases only if the ID has all the privileges that are required to create the view. For example, an ID with DBADM authority cannot create a view on a view that is owned by another ID.
If held with the GRANT option, DBADM can grant these privileges to others.
Table 41. DB2 authorities (continued)
Authority: SYSCTRL
The system controller authority is designed for administering a system that contains sensitive data. The system controller has nearly complete control of the DB2 subsystem. However, the system controller cannot access user data directly unless the privilege to do so is explicitly granted. SYSCTRL can:
v Act as installation SYSOPR (when the catalog is available) or DBCTRL over any database
v Run any allowable utility on any database
v Issue a COMMENT ON, LABEL ON, or LOCK TABLE statement for any table
v Create a view on any catalog table for itself or for other IDs
v Create tables and aliases for itself or for other IDs
v Bind a new plan or package and name any ID as the owner of the plan or package
Without additional privileges, SYSCTRL cannot:
v Execute DML statements on user tables or views
v Run plans or packages
v Set the current SQL ID to a value that is not one of its primary or secondary IDs
v Start or stop the database that contains the ART and the ORT
v Act fully as SYSADM or as DBADM over any database
v Access DB2 when the subsystem is started with ACCESS(MAINT)
SYSCTRL authority is intended to separate system control functions from administrative functions. However, SYSCTRL is not a complete solution for a high-security system. If any plans have their EXECUTE privilege granted to PUBLIC, an ID with SYSCTRL authority can grant itself SYSADM authority. The only control over such actions is to audit the activity of IDs with high levels of authority.

Authority: SYSADM
System administrator authority includes all SYSCTRL, PACKADM, and DBADM privileges, including access to all data. SYSADM can perform the following actions and grant other IDs the privilege to perform them:
v Use all the privileges of DBADM over any database
v Use EXECUTE and BIND on any plan or package and COPY on any package
v Use privileges over views that are owned by others
v Set the current SQL ID to any valid value
v Create and drop synonyms and views for other IDs on any table
v Use any valid value for OWNER in BIND or REBIND
v Drop database DSNDB07
SYSADM can perform the following actions, but cannot grant other IDs the privilege to perform them:
v Drop or alter any DB2 object, except system databases
v Issue a COMMENT ON or LABEL ON statement for any table or view
v Terminate any utility job
Although SYSADM cannot grant the preceding privileges explicitly, SYSADM can grant them to other IDs by granting them SYSADM authority.
Table 41. DB2 authorities (continued)
Authority: Installation SYSADM
One or two IDs are assigned installation SYSADM authority when DB2 is installed. They have all the privileges of SYSADM, plus:
v DB2 does not record the authority in the catalog. Therefore, the catalog does not need to be available to check installation SYSADM authority. (The authority outside of the catalog is crucial. For example, if the directory table space DBD01 or the catalog table space SYSDBAUT is stopped, DB2 might not be able to check the authority to start it again. Only an installation SYSADM can start it.)
v No ID can revoke installation SYSADM authority. You can remove the authority only by changing the module that contains the subsystem initialization parameters (typically DSNZPARM).
IDs with installation SYSADM authority can also perform the following actions:
v Run the CATMAINT utility
v Access DB2 when the subsystem is started with ACCESS(MAINT)
v Start databases DSNDB01 and DSNDB06 when they are stopped or in restricted status
v Run the DIAGNOSE utility with the WAIT statement
v Start and stop the database that contains the ART and the ORT
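For example, the following statements grant two of these authorities. The database name is from the DB2 sample application; the authorization IDs are hypothetical. Installation SYSADM and installation SYSOPR cannot be granted; they are assigned through the subsystem parameter module.

GRANT DBADM ON DATABASE DSN8D81A TO DBA1;   -- DBA1 is a hypothetical ID
GRANT SYSCTRL TO SECADM1;                   -- SECADM1 is a hypothetical ID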
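A minimal sketch of a statement that could define the SALARIES view, assuming it restricts the sample employee table to nonsensitive columns for a single department (the column list and predicate are assumptions, not a definition taken from this guide):

CREATE VIEW SALARIES AS
  SELECT LASTNAME, SALARY       -- assumed column choices
  FROM DSN8810.EMP
  WHERE WORKDEPT = 'D11';       -- assumed predicate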
Then grant the SELECT privilege on the view SALARIES to MATH110 with the following statement:
GRANT SELECT ON SALARIES TO MATH110;
After you grant the privilege, MATH110 can execute SELECT statements on the restricted set of data only.
Alternatively, you can give an ID access to only a specific combination of data by using multilevel security with row-level granularity. For information about multilevel security with row-level granularity, see Multilevel security on page 192.
Utilities:
v LOAD (see the note that follows)
v REPAIR DBD, CHECK DATA, CHECK LOB, REORG TABLESPACE, STOSPACE, REBUILD INDEX, RECOVER, REORG INDEX, REPAIR, REPORT, CHECK INDEX, COPY, MERGECOPY, MODIFY, QUIESCE, RUNSTATS
Note: LOAD can be used to add lines to SYSIBM.SYSSTRINGS. LOAD cannot be run on other DSNDB01 or DSNDB06 tables.
The owner of a JAR (Java class for a routine) that is used by a stored procedure or a user-defined function is the current SQL ID of the process that performs the INSTALL_JAR function. For information about installing a JAR, see DB2 Application Programming Guide and Reference for Java.
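For example, a minimal sketch of installing a JAR with the SQLJ.INSTALL_JAR stored procedure; the URL and JAR name are hypothetical:

CALL SQLJ.INSTALL_JAR('file:/u/javauser/salary.jar',   -- hypothetical path
                      'MYSCHEMA.SALARYJAR',            -- hypothetical JAR name
                      0);

Because the current SQL ID of the process that runs INSTALL_JAR becomes the JAR owner, you can issue SET CURRENT SQLID before the call to control which ID owns the JAR.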
Table 43. Implicit privileges of ownership by object type
Object type                      Implicit privileges of ownership
Alias                            To drop the alias
Database                         DBCTRL or DBADM authority over the database, depending on the privilege (CREATEDBC or CREATEDBA) that is used to create it. DBCTRL authority does not include the privilege to access data in tables in the database.
Distinct type                    To use or drop a distinct type
Index                            To alter, comment on, or drop the index
JAR (Java class for a routine)   To replace, use, or drop the JAR
Package                          To bind, rebind, free, copy, execute, or drop the package
Plan                             To bind, rebind, free, or execute the plan
Sequence                         To alter, comment on, use, or drop the sequence
Storage group                    To alter or drop the group and to name it in the USING clause of a CREATE INDEX or CREATE TABLESPACE statement
Stored procedure                 To execute, alter, drop, start, stop, or display a stored procedure
Synonym                          To use or drop the synonym
Table                            v To alter or drop the table or any indexes on it
                                 v To lock the table, comment on it, or label it
                                 v To create an index or view for the table
                                 v To select or update any row or column
                                 v To insert or delete any row
                                 v To use the LOAD utility for the table
                                 v To define referential constraints on any table or set of columns
                                 v To create a trigger on the table
Table space                      To alter or drop the table space and to name it in the IN clause of a CREATE TABLE statement
User-defined function            To execute, alter, drop, start, stop, or display a user-defined function
View                             To drop, comment on, or label the view, or to select any row or column
Implicit privileges do not apply to multilevel security. For information about multilevel security, see Multilevel security on page 192.
Changing ownership
The privileges that are implicit in ownership cannot be revoked. Therefore, you cannot replace an object's owner while the object exists. If you want to change ownership of an object, follow these steps:
1. Drop the object, which usually deletes all privileges on it.²
2. Re-create the object with a new owner.
Exception: You can change package or plan ownership while a package or plan exists. For more information about changing package or plan ownership, see Establishing or changing ownership of a plan or a package.
You might want to share ownership privileges on an object instead of replacing an old owner with a new owner. If so, make the owning ID a secondary ID to which several primary IDs are connected. You can change the list of primary IDs connected to the secondary ID without dropping and re-creating the object.
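For example, a minimal sketch of changing a table's owner, assuming that PRODTEAM is a secondary ID of the user who issues the statements and that all object names are hypothetical:

SET CURRENT SQLID = 'PRODTEAM';     -- the new owning ID (hypothetical secondary ID)
DROP TABLE DEVUSER.ACCOUNTS;        -- hypothetical table; its privileges are deleted
CREATE TABLE ACCOUNTS               -- re-created with PRODTEAM as owner
  (ACCTNO   CHAR(8)        NOT NULL,
   BALANCE  DECIMAL(11,2))
  IN DSN8D81A.DSN8S81D;             -- sample database and table space

Because the statements are dynamic, the unqualified name ACCOUNTS is qualified by the current SQL ID, which therefore becomes the owner of the new table.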
The statement puts the data for employee number 000010 into the host structure EMPREC. The data comes from table DSN8810.EMP, but the ID does not have unlimited access to DSN8810.EMP. Instead, the ID that has EXECUTE privilege for this plan can access rows in the DSN8810.EMP table only when EMPNO = '000010'. If any of the privileges that are required by the plan or package are revoked from the owner, the plan or the package is invalidated. The plan or package must be rebound, and the new owner must have the required privileges.
2. Dropping a package does not delete all privileges on it if another version of the package still remains in the catalog.
v On BIND, your primary ID becomes the owner.
v On REBIND, the previous owner retains ownership.
Some systems that can bind a package at a DB2 system do not support the OWNER option. When the OWNER option is not supported, the primary authorization ID is always the owner of the package because a secondary ID cannot be named as the owner.
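For example, a minimal sketch of naming the owner explicitly at bind time; the collection, member, and owner names are hypothetical:

BIND PACKAGE(PRODCOLL) MEMBER(PROG1) OWNER(PRODTEAM) ACTION(REPLACE)

The binder must satisfy the authorization rules for the OWNER option, as described earlier in this section.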
Authorization to execute
The plan or package owner must have authorization to execute all static SQL statements that are embedded in the plan or package. However, you do not need to have the authorization when the plan or package is bound. The objects to which the plan or package refers do not even need to exist at bind time. A bind operation always checks whether a local object exists and whether the owner has the required privileges on it. Any failure results in a message. However, you can choose whether the failure prevents the bind operation from completing by using the VALIDATE option on the BIND PLAN and BIND PACKAGE commands. The following values for the VALIDATE option determine how DB2 is to handle existence and authorization errors:
RUN   If you choose RUN for the VALIDATE option, the bind succeeds even when existence or authorization errors exist. DB2 checks existence and authorization at run time.
BIND  If you choose BIND for the VALIDATE option, the bind fails when existence or authorization errors exist. Exception: If you use the SQLERROR(CONTINUE) option on the BIND PACKAGE command, the bind succeeds, but the package's SQL statements that have errors cannot execute.
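For example, the following subcommands choose each behavior; the plan, package collection, and member names are hypothetical:

BIND PLAN(MYPLAN) MEMBER(MYDBRM) VALIDATE(RUN)
BIND PACKAGE(MYCOLL) MEMBER(MYDBRM) VALIDATE(BIND) SQLERROR(CONTINUE)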
The corresponding existence and authorization checks for remote objects are always made at run time. Authorization to execute dynamic SQL statements is also checked at run time. Table 46 on page 161 shows which IDs can supply the authorizations that are required for different types of statements. Applications that use the Resource Recovery Services attachment facility (RRSAF) to connect to DB2 do not require a plan. If the requesting application is an RRSAF application, DB2 follows the rules described in Checking authorization to execute an RRSAF application without a plan on page 151 to check authorizations.
In the figure, a remote requester, either a DB2 UDB for z/OS or some other requesting system, runs a package at the DB2 server. A statement in the package uses an alias or a three-part name to request services from a second DB2 UDB for z/OS server. The ID that is checked for the privileges that are needed to run at the second server can be:
v The owner of the plan that is running at the requester (if the requester is DB2 UDB for z/OS or OS/390)
v The owner of the package that is running at the DB2 server
v The authorization ID of the process that runs the package at the first DB2 server (the process runner)
In addition, if a remote alias is used in the SQL, the alias must be defined at the requester site. The ID that is used depends on these four factors:
v Whether the requester is DB2 UDB for z/OS or OS/390, or a different system.
v The value of the bind option DYNAMICRULES. See Authorization for dynamic SQL statements on page 164 for detailed information about the DYNAMICRULES options.
v Whether the parameter HOPAUTH at the DB2 server site was set to BOTH or RUNNER when the installation job DSNTIJUZ was run. The default value is BOTH.
v Whether the statement that is executed at the second server is static or dynamic SQL.

Hop situation with non-DB2 UDB for z/OS or OS/390 server: Using DBPROTOCOL(DRDA), a three-part name statement can hop to a server other than DB2 UDB for z/OS or OS/390. In this hop situation, only package authorization information is passed to the second server. A hop is not allowed on a connection that matches the LUWID of another existing DRDA thread. For example, in a hop situation from site A to site B to site C to site A, a hop is not allowed to site A again.

Table 44 shows how these factors determine the ID that must hold the required privileges when bind option DBPROTOCOL(PRIVATE) is in effect.
Table 44. The authorization ID that must hold required privileges for the double-hop situation
Requester                     DYNAMICRULES       HOPAUTH         Statement   Authorization ID
DB2 UDB for z/OS              Run behavior (1)   n/a             Static      Plan owner
DB2 UDB for z/OS              Run behavior (1)   n/a             Dynamic     Process runner
DB2 UDB for z/OS              Bind behavior      n/a             Either      Plan owner
Other than DB2 UDB for z/OS   Run behavior (1)   YES (default)   Static      Package owner
Other than DB2 UDB for z/OS   Run behavior (1)   YES (default)   Dynamic     Process runner
Other than DB2 UDB for z/OS   Run behavior (1)   NO              Either      Process runner
Other than DB2 UDB for z/OS   Bind behavior      n/a             Either      Package owner
Note: (1) If DYNAMICRULES define behavior is in effect, DB2 converts to DYNAMICRULES bind behavior. If DYNAMICRULES invoke behavior is in effect, DB2 converts to DYNAMICRULES run behavior.
performance, especially when IDs are reused frequently. One cache exists for each plan, one global cache exists for packages, and a global cache exists for routines. The global caches for packages and routines are allocated at DB2 startup. For a data sharing group, each member does its own authorization caching.

Caching IDs for plans: Authorization checking is fastest when the EXECUTE privilege is granted to PUBLIC and, after that, when the plan is reused by an ID that already appears in the cache.

You can set the size of the plan authorization cache by using the BIND PLAN subcommand. For suggestions on setting this cache size, see Part 5 of DB2 Application Programming and SQL Guide. The default cache size is specified by an installation option, with an initial default setting of 3072 bytes.

Caching IDs for packages: This performance enhancement provides a run-time benefit for:
v Stored procedures
v Remotely bound packages
v Local packages in a package list in which the plan owner does not have execute authority on the package at bind time, but does at run time
v Local packages that are not explicitly listed in a package list, but are implicitly listed by collection-id.*, *.*, or *.package-id

You can set the size of the package authorization cache by using the PACKAGE AUTH CACHE field on installation panel DSNTIPP. The default value, 100 KB, is enough storage to support about 690 collection-id.package-id entries or collection-id.* entries. You can cache more package authorization information by using any of the following strategies:
v Granting package execute authority to collection.*
v Granting package execute authority to PUBLIC for some packages or collections
v Increasing the size of the cache

The QTPACAUT field in the package accounting trace indicates how often DB2 succeeds at reading package authorization information from the cache.

Caching IDs for routines: The routine authorization cache stores authorization IDs with the EXECUTE privilege on a specific routine. A routine is identified as schema.routine-name.type, where the routine name is one of the following names:
v The specific function name for user-defined functions
v The procedure name for stored procedures
v * for all routines in the schema

You can set the size of the routine authorization cache by using the ROUTINE AUTH CACHE field on installation panel DSNTIPP. The initial default setting of 100 KB is enough storage to support about 690 schema.routine.type or schema.*.type entries. You can cache more authorization information about routines by using the following strategies:
v Granting EXECUTE on schema.*
v Granting routine execute authority to PUBLIC for some or all routines in the schema
v Increasing the size of the cache

Caching and multilevel security: Caching is used with multilevel security with row-level granularity to improve performance. DB2 caches all security labels that are checked (successfully and unsuccessfully) during processing. At commit or rollback, the security labels are removed from the cache. If a security policy that employs multilevel security with row-level granularity requires an immediate change and long-running applications have not committed or rolled back, you might need to cancel the application. For more information about multilevel security with row-level granularity, see Working with data in a multilevel-secure environment on page 199.
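For example, a minimal sketch of sizing the plan authorization cache on the BIND PLAN subcommand; the plan name and package list are hypothetical, and CACHESIZE is specified in bytes:

BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) CACHESIZE(4096)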
enforced in IMS and CICS. DB2 provides a consistency check to avoid accidental mismatches between program and plan, but the consistency check is not a security check.

The ENABLE and DISABLE options: You can limit the use of plans and packages by using the ENABLE and DISABLE options on the BIND and REBIND subcommands.

Example: The ENABLE IMS option allows the plan or package to run from any IMS connection. Unless other systems are also named, ENABLE IMS does not allow the plan or package to run from any other type of connection.

Example: DISABLE BATCH prevents a plan or package from running through a batch job, but it allows the plan or package to run from all other types of connection.

You can exercise even finer control with the ENABLE and DISABLE options. You can enable or disable particular IMS connection names, CICS application IDs, requesting locations, and so forth. For details, see the syntax of the BIND and REBIND subcommands in DB2 Command Reference.
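For example, the following subcommands restrict where a package and a plan can run; the names are hypothetical:

BIND PACKAGE(MYCOLL) MEMBER(PROG1) ENABLE(IMS)
BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) DISABLE(BATCH)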
Table 45. Common tasks and required privileges for routines
Role          Tasks                                              Required privileges
Implementer   If SQL is in the routine: codes, precompiles,      If binding a package, BINDADD system
              compiles, and link-edits the program to use        privilege and CREATE IN on the
              as the routine. Binds the program as the           collection.
              routine package. If no SQL is in the routine:
              codes, compiles, and link-edits the program.
Definer       Issues a CREATE FUNCTION statement to define       CREATEIN privilege on the schema.
              a user-defined function or a CREATE PROCEDURE      EXECUTE authority on the routine
              statement to define a stored procedure.            package when invoked.
Invoker       Invokes a routine from an SQL application.         EXECUTE authority on the routine.
The routine implementer typically codes the routine in a program and precompiles the program. If the program contains SQL statements, the implementer binds the DBRM. In general, the authorization ID that binds the DBRM into a package is the package owner. The implementer is the routine package owner. As package owner, the implementer implicitly has EXECUTE authority on the package and has the authority to grant EXECUTE privileges to other users to execute the code within the package.

The implementer grants EXECUTE authority on the routine package to the definer. EXECUTE authority is necessary only if the package contains SQL. For user-defined functions, the definer requires EXECUTE authority on the package. For stored procedures, the EXECUTE privilege on the package is checked for the definer and other IDs. For information about these additional IDs, see the CALL statement in DB2 SQL Reference.

The definer is the routine owner. The definer issues a CREATE FUNCTION statement to define a user-defined function or a CREATE PROCEDURE statement to define a stored procedure. The definer of a routine is determined as follows:
v If the SQL statement is embedded in an application program, the definer is the authorization ID of the owner of the plan or package.
v If the SQL statement is dynamically prepared, the definer is the SQL authorization ID that is contained in the CURRENT SQLID special register.

The definer grants EXECUTE authority on the routine to the invoker, that is, any user ID that needs to invoke the routine. The invoker invokes the routine from an SQL statement in the invoking plan or package. The invoker for a routine is determined as follows:
v For a static statement, the invoker is the authorization ID of the plan or package owner.
v For a dynamic statement, the invoker depends on DYNAMICRULES behavior. See Authorization for dynamic SQL statements on page 164 for a description of the options.

See DB2 SQL Reference for more information about the CREATE FUNCTION and CREATE PROCEDURE statements.
Finally, you can use the following statement to let Alan view or update the appropriate SYSROUTINES_SRC and SYSROUTINES_OPTS rows:
GRANT SELECT, INSERT, DELETE, UPDATE ON A1.B1GRSRC, A1.B1GROPTS TO A1;
After a set of generated routines goes into production, you can decide to regain control over the routine definitions in SYSROUTINES_SRC and SYSROUTINES_OPTS by revoking the INSERT, DELETE, and UPDATE privileges on the appropriate views. You can allow programmers to keep the SELECT privilege on their views, so that they can use the old rows for reference when they define new generated routines.
/**********************************************************************
 * This routine accepts an employee serial number and a percent raise. *
 * If the employee is a manager, the raise is not applied. Otherwise,  *
 * the new salary is computed, truncated if it exceeds the employee's  *
 * manager's salary, and then applied to the database.                 *
 **********************************************************************/
void C_SALARY                            /* main routine               */
    ( char      *employeeSerial,         /* in: employee serial no.    */
      decimal   *percentRaise,           /* in: percentage raise       */
      decimal   *newSalary,              /* out: employee's new salary */
      short int *niEmployeeSerial,       /* in: indic var, empl ser    */
      short int *niPercentRaise,         /* in: indic var, % raise     */
      short int *niNewSalary,            /* out: indic var, new salary */
      char      *sqlstate,               /* out: SQLSTATE              */
      char      *fnName,                 /* in: family name of function*/
      char      *specificName,           /* in: specific name of func  */
      char      *message                 /* out: diagnostic message    */
    )
{
  EXEC SQL BEGIN DECLARE SECTION;
  char    hvEMPNO[7];                    /* host var for empl serial   */
  decimal hvSALARY;                      /* host var for empl salary   */
  char    hvWORKDEPT[3];                 /* host var for empl dept no. */
  decimal hvManagerSalary;               /* host var, emp's mgr's salry*/
  EXEC SQL END DECLARE SECTION;

  strcpy( sqlstate,"00000" );            /* initialize SQLSTATE value  */
  memset( message,0,70 );

  /*******************************************************************
   * Copy the employee's serial into a host variable                  *
   *******************************************************************/
  strcpy( hvEMPNO,employeeSerial );

  /*******************************************************************
   * Get the employee's work department and current salary            *
   *******************************************************************/
  EXEC SQL SELECT WORKDEPT, SALARY
    INTO :hvWORKDEPT, :hvSALARY
    FROM EMP
    WHERE EMPNO = :hvEMPNO;

  /*******************************************************************
   * See if the employee is a manager                                 *
   *******************************************************************/
  EXEC SQL SELECT DEPTNO
    INTO :hvWORKDEPT
    FROM DEPT
    WHERE MGRNO = :hvEMPNO;

  /*******************************************************************
   * If the employee is a manager, do not apply the raise             *
   *******************************************************************/
  if( SQLCODE == 0 )
  {
    *newSalary = hvSALARY;
  }

Figure 11. Example of a user-defined function (Part 1 of 2)
  /*******************************************************************
   * Otherwise, compute and apply the raise such that it does not    *
   * exceed the employee's manager's salary                          *
   *******************************************************************/
  else
  {
    /***************************************************************
     * Get the employee's manager's salary                         *
     ***************************************************************/
    EXEC SQL SELECT SALARY
      INTO :hvManagerSalary
      FROM EMP
      WHERE EMPNO = (SELECT MGRNO FROM DEPT
                     WHERE DEPTNO = :hvWORKDEPT);

    /***************************************************************
     * Compute proposed raise for the employee                     *
     ***************************************************************/
    *newSalary = hvSALARY * (1 + *percentRaise/100);

    /***************************************************************
     * Don't let the proposed raise exceed the manager's salary    *
     ***************************************************************/
    if( *newSalary > hvManagerSalary )
      *newSalary = hvManagerSalary;

    /***************************************************************
     * Apply the raise                                             *
     ***************************************************************/
    hvSALARY = *newSalary;
    EXEC SQL UPDATE EMP
      SET SALARY = :hvSALARY
      WHERE EMPNO = :hvEMPNO;
  }

  return;
} /* end C_SALARY */
Figure 11. Example of a user-defined function (Part 2 of 2)
The implementer requires the UPDATE privilege on table EMP. Users with the EXECUTE privilege on function C_SALARY do not need the UPDATE privilege on the table.
2. Because this program contains SQL, the implementer performs the following steps:
a. Precompile the program that implements the user-defined function.
b. Link-edit the user-defined function with DSNRLI (RRS attachment facility), and name the program's load module C_SALARY.
c. Bind the DBRM into package MYCOLLID.C_SALARY.
After performing these steps, the implementer is the function package owner.
3. The implementer then grants EXECUTE privilege on the user-defined function package to the definer.
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY TO definer
As package owner, the implementer can grant execute privileges to other users, which allows those users to execute code within the package. For example:
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY TO other_user
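The bind in step 2c might look like the following DSN subcommand sketch; the collection ID and member name come from the example above, while the ACTION and VALIDATE options shown are illustrative assumptions, not requirements:

BIND PACKAGE(MYCOLLID) MEMBER(C_SALARY) ACTION(REPLACE) VALIDATE(BIND)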
1. The definer executes the following CREATE FUNCTION statement to define the user-defined function salary_change to DB2:
CREATE FUNCTION SALARY_CHANGE(
    VARCHAR( 6 ),
    DECIMAL( 5,2 ) )
  RETURNS DECIMAL( 9,2 )
  SPECIFIC schema.SALCHANGE
  LANGUAGE C
  DETERMINISTIC
  MODIFIES SQL DATA
  EXTERNAL NAME C_SALARY
  PARAMETER STYLE DB2SQL
  RETURNS NULL ON NULL CALL
  NO EXTERNAL ACTION
  NO SCRATCHPAD
  NO FINAL CALL
  ALLOW PARALLEL
  NO COLLID
  ASUTIME LIMIT 1
  STAY RESIDENT NO
  PROGRAM TYPE SUB
  WLM ENVIRONMENT WLMENV
  SECURITY DB2
  NO DBINFO;
After executing the CREATE FUNCTION statement, the definer owns the user-defined function. The definer can execute the user-defined function package because the user-defined function package owner, in this case the implementer, granted to the definer the EXECUTE privilege on the package that contains the user-defined function.
2. The definer then grants the EXECUTE privilege on SALARY_CHANGE to all function invokers.
GRANT EXECUTE ON FUNCTION SALARY_CHANGE TO invoker1, invoker2, invoker3, invoker4
2. The invoker then precompiles, compiles, link-edits, and binds the invoking application's DBRM into the invoking package or invoking plan. An invoking package or invoking plan is the package or plan that contains the SQL that invokes the user-defined function. After performing these steps, the invoker is the owner of the invoking plan or package.
Restriction: The invoker must hold the SELECT privilege on the table EMP and the EXECUTE privilege on the function SALARY_CHANGE.
Table 46. Required privileges for basic operations on dynamic SQL statements (continued)

Operation: REVOKE
ID: Current SQL ID
Required privileges: Must either have granted the privilege that is being revoked, or hold SYSCTRL or SYSADM authority.

Operation: CREATE, for unqualified object name
ID: Current SQL ID
Required privileges: Applicable table, database, or schema privilege.

Operation: Qualify name of object created
ID: ID named as owner
Required privileges: Applicable table or database privilege. If the current SQL ID has SYSADM authority, the qualifier can be any ID at all, and need not have any privilege.

Operation: Other dynamic SQL if DYNAMICRULES uses run behavior
ID: All primary IDs and secondary IDs and the current SQL ID together
Required privileges: As required by the statement; see Composite privileges on page 171. Unqualified object names are qualified by the value of the special register CURRENT SQLID. See Authorization for dynamic SQL statements on page 164.

Operation: Other dynamic SQL if DYNAMICRULES uses bind behavior
ID: Plan or package owner
Required privileges: As required by the statement; see Composite privileges on page 171. DYNAMICRULES behavior determines how unqualified object names are qualified; see Authorization for dynamic SQL statements on page 164.

Operation: Other dynamic SQL if DYNAMICRULES uses define behavior
ID: Function or procedure owner
Required privileges: As required by the statement; see Composite privileges on page 171. DYNAMICRULES behavior determines how unqualified object names are qualified; see Authorization for dynamic SQL statements on page 164.

Operation: Other dynamic SQL if DYNAMICRULES uses invoke behavior
ID: ID of the SQL statement that invoked the function or procedure
Required privileges: As required by the statement; see Composite privileges on page 171. DYNAMICRULES behavior determines how unqualified object names are qualified; see Authorization for dynamic SQL statements on page 164.
Table 47. Required privileges for basic operations on plans and packages

Operation: Execute a plan
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v Ownership of the plan
v EXECUTE privilege for the plan
v SYSADM authority

Operation: Bind embedded SQL statements, for any bind operation
ID: Plan or package owner
Required privileges: Any of the following privileges:
v Applicable privileges required by the statements
v Authorities that include the privileges
v Ownership that implicitly includes the privileges
Object names include the value of QUALIFIER, where it applies.
Table 47. Required privileges for basic operations on plans and packages (continued)

Operation: Include package in PKLIST (note 1)
ID: Plan owner
Required privileges: Any of the following privileges:
v Ownership of the package
v EXECUTE privilege for the package
v PACKADM authority over the package collection
v SYSADM authority

Operation: BIND a new plan using the default owner or primary authorization ID
ID: Primary ID
Required privileges: BINDADD privilege, or SYSCTRL or SYSADM authority.

Operation: BIND a new package using the default owner or primary authorization ID
ID: Primary ID
Required privileges: If the value of the field BIND NEW PACKAGE on installation panel DSNTIPP is BIND, any of the following privileges:
v BINDADD privilege and CREATE IN privilege for the collection
v PACKADM authority for the collection
v SYSADM or SYSCTRL authority
If BIND NEW PACKAGE is BINDADD, any of the following privileges:
v BINDADD privilege and either the CREATE IN or PACKADM privilege for the collection
v SYSADM or SYSCTRL authority

Operation: BIND REPLACE or REBIND for a plan or package using the default owner or primary authorization ID
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v Ownership of the plan or package
v BIND privilege for the plan or package
v BINDAGENT from the plan or package owner
v PACKADM authority for the collection (for a package only)
v SYSADM or SYSCTRL authority
See also Multiple actions in one statement on page 171.

Operation: BIND a new version of a package, with default owner
ID: Primary ID
Required privileges: If BIND NEW PACKAGE is BIND, any of the following privileges:
v BIND privilege on the package or collection
v BINDADD privilege and CREATE IN privilege for the collection
v PACKADM authority for the collection
v SYSADM or SYSCTRL authority
If BIND NEW PACKAGE is BINDADD, any of the following:
v BINDADD privilege and either the CREATE IN or PACKADM privilege for the collection
v SYSADM or SYSCTRL authority

Operation: FREE or DROP a package (note 2)
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v Ownership of the package
v BINDAGENT from the package owner
v PACKADM authority for the collection
v SYSADM or SYSCTRL authority
Table 47. Required privileges for basic operations on plans and packages (continued)

Operation: COPY a package
ID: Primary ID or any secondary ID
Required privileges: Any of the following:
v Ownership of the package
v COPY privilege for the package
v BINDAGENT from the package owner
v PACKADM authority for the collection
v SYSADM or SYSCTRL authority

Operation: FREE a plan
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v Ownership of the plan
v BIND privilege for the plan
v BINDAGENT from the plan owner
v SYSADM or SYSCTRL authority

Operation: Name a new OWNER other than the primary authorization ID for any bind operation
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v New owner is the primary or any secondary ID
v BINDAGENT from the new owner
v SYSADM or SYSCTRL authority

Notes:
1. A user-defined function, stored procedure, or trigger package does not need to be included in a package list. 2. A trigger package cannot be deleted by FREE PACKAGE or DROP PACKAGE. The DROP TRIGGER statement must be used to delete the trigger package.
The combination of the DYNAMICRULES value and the run-time environment determines the values for the dynamic SQL attributes. Those attribute values are called the dynamic SQL statement behaviors. The four behaviors are:
v Run behavior
v Bind behavior
v Define behavior
v Invoke behavior
The behaviors are summarized in Common attribute values for bind, define, and invoke behavior on page 166.
Run behavior
DB2 processes dynamic SQL statements using the standard attribute values for dynamic SQL statements. These attributes are collectively called run behavior and consist of the following attributes:
v DB2 uses the authorization ID of the application process and the current SQL ID to:
  - Check for authorization of dynamic SQL statements
  - Serve as the implicit qualifier of table, view, index, and alias names
v Dynamic SQL statements use the values of application programming options that were specified during installation. The installation option USE FOR DYNAMICRULES has no effect.
v GRANT, REVOKE, CREATE, ALTER, DROP, and RENAME statements can be executed dynamically.
Bind behavior
DB2 processes dynamic SQL statements using bind behavior. Bind behavior consists of the following attributes:
v DB2 uses the authorization ID of the plan or package owner for authorization checking of dynamic SQL statements.
v Unqualified table, view, index, and alias names in dynamic SQL statements are implicitly qualified with the value of the bind option QUALIFIER; if you do not specify QUALIFIER, DB2 uses the authorization ID of the plan or package owner as the implicit qualifier.
v Bind behavior consists of the attribute values that are described in Common attribute values for bind, define, and invoke behavior on page 166.
The values of the authorization ID and the qualifier for unqualified objects are the same as those that are used for embedded or static SQL statements.
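For example, a hedged sketch of a BIND subcommand that requests bind behavior with an explicit qualifier; all names here (collection COLLA, member PGM1, IDs PRODOWN and PRODTAB) are hypothetical:

BIND PACKAGE(COLLA) MEMBER(PGM1) OWNER(PRODOWN) QUALIFIER(PRODTAB) DYNAMICRULES(BIND)

With such a bind, dynamic SQL in the package is authorization-checked against PRODOWN, and unqualified object names resolve to PRODTAB.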
Define behavior
When the package is run as or under a stored procedure or user-defined function package, DB2 processes dynamic SQL statements using define behavior. Define behavior consists of the following attribute values:
v DB2 uses the authorization ID of the user-defined function or stored procedure owner for authorization checking of dynamic SQL statements in the application package.
v The default qualifier for unqualified objects is the user-defined function or stored procedure owner.
v Define behavior consists of the attribute values that are described in Common attribute values for bind, define, and invoke behavior on page 166.
When the package is run as a stand-alone program, DB2 processes dynamic SQL statements using bind behavior or run behavior, depending on the DYNAMICRULES value specified.
Invoke behavior
When the package is run as or under a stored procedure or user-defined function package, DB2 processes dynamic SQL statements using invoke behavior. Invoke behavior consists of the following attribute values:
v DB2 uses the authorization ID of the user-defined function or stored procedure invoker for authorization checking of dynamic SQL statements in the application package. If the invoker is the primary authorization ID of the process or the current SQL ID, the following rules apply:
  - The ID of the invoker is checked for the required authorization.
  - Secondary authorization IDs are also checked if they are needed for the required authorization.
v The default qualifier for unqualified objects is the user-defined function or stored procedure invoker.
v Invoke behavior consists of the attribute values that are described in Common attribute values for bind, define, and invoke behavior.
When the package is run as a stand-alone program, DB2 processes dynamic SQL statements using bind behavior or run behavior, depending on the specified DYNAMICRULES value.
Table 48. How DYNAMICRULES and the run-time environment determine dynamic SQL statement behavior (continued)

DYNAMICRULES value: DEFINEBIND
  Stand-alone program environment: Bind behavior
  User-defined function or stored procedure environment: Define behavior

DYNAMICRULES value: DEFINERUN
  Stand-alone program environment: Run behavior
  User-defined function or stored procedure environment: Define behavior

DYNAMICRULES value: INVOKEBIND
  Stand-alone program environment: Bind behavior
  User-defined function or stored procedure environment: Invoke behavior

DYNAMICRULES value: INVOKERUN
  Stand-alone program environment: Run behavior
  User-defined function or stored procedure environment: Invoke behavior
Note: The BIND and RUN values can be specified for packages and plans. The other values can be specified only for packages.
Table 49 shows the dynamic SQL attribute values for each type of dynamic SQL behavior.
Table 49. Definitions of dynamic SQL statement behaviors

Dynamic SQL attribute: Authorization ID
  Bind behavior: Plan or package owner
  Run behavior: Current SQLID
  Define behavior: User-defined function or stored procedure owner
  Invoke behavior: Authorization ID of invoker (note 1)

Dynamic SQL attribute: Default qualifier for unqualified objects
  Bind behavior: Bind OWNER or QUALIFIER value
  Run behavior: Current SQLID
  Define behavior: User-defined function or stored procedure owner
  Invoke behavior: Authorization ID of invoker

Dynamic SQL attribute: CURRENT SQLID (note 2)
  Bind behavior: Not applicable
  Run behavior: Applies
  Define behavior: Not applicable
  Invoke behavior: Not applicable

Dynamic SQL attribute: Source for application programming options
  Bind behavior: Determined by DSNHDECP parameter DYNRULS (note 3)
  Run behavior: Install panel DSNTIPF
  Define behavior: Determined by DSNHDECP parameter DYNRULS (note 3)
  Invoke behavior: Determined by DSNHDECP parameter DYNRULS (note 3)

Dynamic SQL attribute: Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME?
  Bind behavior: No
  Run behavior: Yes
  Define behavior: No
  Invoke behavior: No
Notes:
1. If the invoker is the primary authorization ID of the process or the current SQL ID, the following rules apply:
v The ID of the invoker is checked for the required authorization.
v Secondary authorization IDs are also checked if they are needed for the required authorization.
2. DB2 uses the current SQL ID as the authorization ID for dynamic SQL statements only for plans and packages that have DYNAMICRULES run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is associated with each dynamic SQL behavior, as shown in this table. The initial current SQL ID is independent of the dynamic SQL behavior. For stand-alone programs, the current SQL ID is initialized to the primary authorization ID. See DB2 Application Programming and SQL Guide for information about initialization of the current SQL ID for user-defined functions and stored procedures. You can execute the SET CURRENT SQLID statement to change the current SQL ID for packages with any dynamic SQL behavior, but DB2 uses the current SQL ID only for plans and packages with run behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES on installation panel DSNTIPF, determines whether DB2 uses the precompiler options or the application programming defaults for dynamic SQL statements. See Part 5 of DB2 Application Programming and SQL Guide for more information.
[Figure: stored procedure A (definer and owner IDASP), with package AP (owner IDA, bound with DYNAMICRULES(...)), calls subroutine B; program C also calls subroutine B.]
Figure 12. Authorization for dynamic SQL statements in programs and routines
Stored procedure A was defined by IDASP and is therefore owned by IDASP. The stored procedure package AP was bound by IDA and is therefore owned by IDA. Package BP was bound by IDB and is therefore owned by IDB. The authorization ID under which EXEC SQL CALL A runs is IDD, the owner of plan DP.
The authorization ID under which dynamic SQL statements in package AP run is determined in the following way:
v If package AP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package AP is IDA, the owner of package AP.
v If package AP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package AP is the value of CURRENT SQLID when the statements execute.
v If package AP uses DYNAMICRULES define behavior, the authorization ID for dynamic SQL statements in package AP is IDASP, the definer (owner) of stored procedure A.
v If package AP uses DYNAMICRULES invoke behavior, the authorization ID for dynamic SQL statements in package AP is IDD, the invoker of stored procedure A.
The authorization ID under which dynamic SQL statements in package BP run is determined in the following way:
v If package BP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
v If package BP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
v If package BP uses DYNAMICRULES define behavior:
  - When subroutine B is called by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDASP, the definer of stored procedure A.
  - When subroutine B is called by program C:
    - If package BP uses the DYNAMICRULES option DEFINERUN, DB2 executes package BP using DYNAMICRULES run behavior, which means that the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
    - If package BP uses the DYNAMICRULES option DEFINEBIND, DB2 executes package BP using DYNAMICRULES bind behavior, which means that the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
v If package BP uses DYNAMICRULES invoke behavior:
  - When subroutine B is called by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDD, the authorization ID under which EXEC SQL CALL A executed.
  - When subroutine B is called by program C:
    - If package BP uses the DYNAMICRULES option INVOKERUN, DB2 executes package BP using DYNAMICRULES run behavior, which means that the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
    - If package BP uses the DYNAMICRULES option INVOKEBIND, DB2 executes package BP using DYNAMICRULES bind behavior, which means that the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
Now suppose that B is a user-defined function, as shown in Figure 13 on page 170.
[Figure: program C and stored procedure A (definer and owner IDASP; package AP owned by IDA, bound with DYNAMICRULES(...)) each invoke user-defined function B with EXEC SQL SELECT B(...), under authorization IDs IDA and IDC respectively; package CP is owned by IDC.]
Figure 13. Authorization for dynamic SQL statements in programs and nested routines
User-defined function B was defined by IDBUDF and is therefore owned by IDBUDF. Stored procedure A invokes user-defined function B under authorization ID IDA. Program C invokes user-defined function B under authorization ID IDC. In both cases, the invoking SQL statement (EXEC SQL SELECT B) is static.
The authorization ID under which dynamic SQL statements in package BP run is determined in the following way:
v If package BP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
v If package BP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
v If package BP uses DYNAMICRULES define behavior, the authorization ID for dynamic SQL statements in package BP is IDBUDF, the definer of user-defined function B.
v If package BP uses DYNAMICRULES invoke behavior:
  - When user-defined function B is invoked by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDA, the authorization ID under which B is invoked in stored procedure A.
  - When user-defined function B is invoked by program C, the authorization ID for dynamic SQL statements in package BP is IDC, the owner of package CP, and is the authorization ID under which B is invoked in program C.
Simplifying authorization
You can simplify authorization in several ways. However, you should ensure that you do not violate any of the authorization standards at your installation. Consider the following strategies to simplify authorization:
v Have the implementer bind the user-defined function package using DYNAMICRULES define behavior. With this behavior in effect, DB2 needs to check only the definer's ID to execute dynamic SQL statements in the routine. Otherwise, DB2 needs to check the many different IDs that invoke the user-defined function.
v If you have many different routines, group those routines into schemas. Then grant EXECUTE on the routines in the schema to the appropriate users. Users have execute authority on any functions that you add to that schema.
Example: To grant the EXECUTE privilege on a schema to PUBLIC, issue the following statement:
GRANT EXECUTE ON FUNCTION schemaname.* TO PUBLIC;
Composite privileges
An SQL statement can name more than one object. For example, a SELECT operation can join two or more tables, or an INSERT can use a subquery. Those operations require privileges on all of the tables that are involved in the statement. However, you might be able to issue such a statement dynamically even though one of your IDs alone does not have all the required privileges. If DYNAMICRULES run behavior is in effect when the dynamic statement is prepared and your primary ID or any of your secondary IDs have all the needed privileges, the statement is validated. However, if you embed the same statement in a host program and try to bind it into a plan or package, the validation fails. The validation also fails for the dynamic statement if DYNAMICRULES bind, define, or invoke behavior is in effect when you issue the dynamic statement. In each case, all the required privileges must be held by the single authorization ID, determined by DYNAMICRULES behavior.
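For example, consider the following dynamic statement, which joins the sample tables DSN8810.EMP and DSN8810.DEPT (the join itself is only illustrative). Under run behavior, the statement is validated if your primary and secondary IDs together hold SELECT on both tables; under bind, define, or invoke behavior, the single authorization ID that DYNAMICRULES selects must hold SELECT on both tables by itself:

SELECT E.EMPNO, E.LASTNAME, D.DEPTNAME
  FROM DSN8810.EMP E, DSN8810.DEPT D
  WHERE E.WORKDEPT = D.DEPTNO;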
P1 and P2 are successfully rebound, even though neither the FREDDY ID nor the REUBEN ID has the BIND privilege for both plans.
Table 50. Some common jobs, tasks, and required privileges

Job title: System Operator
Tasks: Issues commands to:
v Start and stop DB2
v Control traces
v Display databases and threads
v Recover indoubt threads
v Start, stop, and display routines
Required privileges: SYSOPR authority

Job title: System Administrator
Tasks: Performs emergency backup, with access to all data.
Required privileges: SYSADM authority

Job title: Security Administrator
Tasks: Authorizes other users, for some or all levels below.
Required privileges: SYSCTRL authority

Job title: Database Administrator
Tasks: Designs, creates, loads, reorganizes, and monitors databases, tables, and other objects.
Required privileges:
v DBADM authority over a database
v Use of storage groups and buffer pools

Job title: System Programmer
Tasks:
v Installs a DB2 subsystem.
v Recovers the DB2 catalog.
v Repairs data.
Required privileges: Installation SYSADM, which is assigned when DB2 is installed. (Consider securing the password for an ID with this authority so that the authority is available only when needed.)

Job title: Application Programmer
Tasks:
v Develops and tests DB2 application programs.
v Creates tables of test data.
Required privileges:
v BIND on existing plans or packages, or BINDADD
v CREATE IN on some collections
v Privileges on some objects
v CREATETAB on some database, with a default table space provided

Job title: Production Binder
Tasks: Binds, rebinds, and frees application plans and packages.
Required privileges: Secondary ID or RACF group with BINDADD, CREATE IN, and the privileges required by application packages and plans

Job title: Package Administrator
Tasks: Manages collections and the packages in them, and delegates the responsibilities.
Required privileges: PACKADM authority

Job title: User Analyst
Tasks: Defines the data requirements for an application program, by examining the DB2 catalog.
Required privileges:
v SELECT on the SYSTABLES, SYSCOLUMNS, and SYSVIEWS catalog tables
v CREATETMTAB system privilege to create temporary tables

Job title: Program End User
Tasks: Executes an application program.
Required privileges: EXECUTE for the application plan

Job title: Information Center Consultant
Tasks:
v Defines the data requirements for a query user.
v Provides the data by creating tables and views, loading tables, and granting access.
Required privileges:
v DBADM authority over some databases
v SELECT on the SYSTABLES, SYSCOLUMNS, and SYSVIEWS catalog tables
Table 50. Some common jobs, tasks, and required privileges (continued)

Job title: Query User
Tasks:
v Issues SQL statements to retrieve, add, or change data.
v Saves results as tables or in global temporary tables.
Required privileges:
v SELECT, INSERT, UPDATE, DELETE on some tables and views
v CREATETAB, to create tables in other than the default database
v CREATETMTAB system privilege to create temporary tables
v SELECT on SYSTABLES, SYSCOLUMNS, or views thereof. QMF provides the views.
v Grant a list of privileges
v Grant privileges over a list of objects
v Grant ALL, for all the privileges of accessing a single table, or for all privileges that are associated with a specific package
DB2 ignores duplicate grants and keeps only one record of a grant in the catalog.
Example: Suppose that Susan grants the SELECT privilege on the EMP table to Ray. Then suppose that Susan grants the same privilege to Ray again, without revoking the first grant. When Susan issues the second grant, DB2 ignores it and maintains the record of the first grant in the catalog. The suppression of duplicate records applies not only to explicit grants, but also to the implicit grants of privileges that are made when a package is created.
Granting privileges to remote users: A query that arrives at your local DB2 through the distributed data facility (DDF) is accompanied by an authorization ID. When that ID arrives, it can go through connection or sign-on processing, it can be translated to another value, and it can be associated with secondary authorization IDs. (For the details of all these processes, see Controlling requests from remote applications on page 238.) As the end result of these processes, the remote query is associated with a set of IDs that is known to your local DB2 subsystem. You assign privileges to these IDs in the same way that you assign privileges to IDs that are associated with local queries.
You can grant a table privilege to any ID anywhere that uses DB2 private protocol access to your data by issuing the following statement:
GRANT privilege TO PUBLIC AT ALL LOCATIONS;
You can grant SELECT, INSERT, UPDATE, and DELETE table privileges. If you grant a privilege to PUBLIC AT ALL LOCATIONS, the grantee is PUBLIC*. Because PUBLIC* is a special identifier that is used by DB2 internally, you should not use PUBLIC* as a primary or secondary authorization ID. When a privilege is revoked from PUBLIC AT ALL LOCATIONS, authorization IDs to which the privilege was specifically granted still retain the privilege.
Some differences exist in the privileges for a query that uses system-directed access:
v Although the query can use privileges granted TO PUBLIC AT ALL LOCATIONS, it cannot use privileges granted TO PUBLIC.
v The query can exercise only the SELECT, INSERT, UPDATE, and DELETE privileges at the remote location.
These restrictions do not apply to queries that are run by a package that is bound at your local DB2 subsystem. Those queries can use any privilege that is granted to their associated IDs or any privilege that is granted to PUBLIC.
Suppose that the Spiffy Computer Company wants to create a database to hold information that is usually posted on hallway bulletin boards. For example, the database might hold notices of upcoming holidays and bowling scores. Because the president of the Spiffy Computer Company is an excellent bowler, she wants everyone in the company to have access to her scores. To create and maintain the tables and programs that are needed for this application, Spiffy Computer Company develops the security plan shown in Figure 14.
System administrator ID: ADMIN
Package administrator ID: PKA01
Database administrator ID: DBA01
Application programmer IDs: PGMR01, PGMR02, PGMR03
Production binder ID: BINDER
Database controller IDs: DBUTIL1, DBUTIL2
Figure 14. Security plan for the Spiffy Computer Company. Lines connect the grantor of a privilege or authority to the grantee.
Spiffy Computer Company's system of privileges and authorities associates each role with an authorization ID. For example, the System Administrator role has the ADMIN authorization ID.
The system administrator uses the ADMIN authorization ID, which has SYSADM authority, to create a storage group (SG1) and to issue the following statements:
1. GRANT PACKADM ON COLLECTION BOWLS TO PKA01 WITH GRANT OPTION;
This statement grants to PKA01 the CREATE IN privilege on the collection BOWLS and BIND, EXECUTE, and COPY privileges on all packages in the collection. Because ADMIN used the WITH GRANT OPTION clause, PKA01 can grant those privileges to others.
2. GRANT CREATEDBA TO DBA01;
This statement grants to DBA01 the privilege to create a database and to have DBADM authority over that database.
3. GRANT USE OF STOGROUP SG1 TO DBA01 WITH GRANT OPTION;
This statement allows DBA01 to use storage group SG1 and to grant that privilege to others.
4. GRANT USE OF BUFFERPOOL BP0, BP1 TO DBA01 WITH GRANT OPTION;
This statement allows DBA01 to use buffer pools BP0 and BP1 and to grant that privilege to others.
The package administrator, PKA01, controls the binding of packages into collections. PKA01 can use the CREATE IN privilege on the collection BOWLS and the BIND, EXECUTE, and COPY privileges on all packages in the collection. PKA01 also has the authority to grant these privileges to others.
The database administrator, DBA01, using the CREATEDBA privilege, creates the database DB1. When DBA01 creates DB1, DBA01 automatically has DBADM authority over the database.
The database administrator at Spiffy Computer Company wants help with running the COPY and RECOVER utilities. Therefore DBA01 grants DBCTRL authority over database DB1 to DBUTIL1 and DBUTIL2. To grant DBCTRL authority, the database administrator issues the following statement:
GRANT DBCTRL ON DATABASE DB1 TO DBUTIL1, DBUTIL2;
those privileges are maintained in the system even after Joe's individual ID is removed from the system. When Mary enters the system, she will have all of Joe's privileges after her ID is associated with the functional ID DEPT4.
v Functional IDs reduce the number of grants that are needed because functional IDs often represent groups of individuals.
v Functional IDs reduce the need to revoke privileges and re-create objects when they change ownership.
Example: Suppose that Bob changes jobs within the Spiffy Computer Company. Bob's individual ID has privileges on many objects in the system and owns three databases. When Bob's job changes, he no longer needs privileges over these objects or ownership of these databases. Because Bob's privileges are associated with his individual ID, a system administrator needs to revoke all of Bob's privileges on objects and drop and re-create Bob's databases with a new owner. If Bob received privileges by association with a functional ID, the system administrator would only need to remove Bob's association with the functional ID.
The database administrator, DBA01, owns database DB1 and has the privileges to use storage group SG1 and buffer pool BP0. The database administrator holds both of these privileges with the GRANT option. The database administrator issues the following statements:
1. GRANT CREATETAB, CREATETS ON DATABASE DB1 TO DEVGROUP;
2. GRANT USE OF STOGROUP SG1 TO DEVGROUP;
3. GRANT USE OF BUFFERPOOL BP0 TO DEVGROUP;
Because the system and database administrators at Spiffy still need to control the use of those resources, the preceding statements are issued without the GRANT option.
Three programmers in the Software Support department write and test a new program, PROGRAM1. Their IDs are PGMR01, PGMR02, and PGMR03. Each programmer needs to create test tables, use the SG1 storage group, and use one of the buffer pools. All of those resources are controlled by DEVGROUP, which is a RACF group ID. Therefore, granting privileges over those resources specifically to PGMR01, PGMR02, and PGMR03 is unnecessary. Each ID should be associated with the RACF group DEVGROUP and receive the privileges that are associated with that
functional ID. The following graphic shows the DEVGROUP and its members:
RACF group ID: DEVGROUP
Group members: PGMR01, PGMR02, PGMR03
The security administrator connects as many members as desired to the group DEVGROUP. Each member can exercise all the privileges that are granted to the group ID. This example assumes that the installed connection and sign-on procedures allow secondary authorization IDs. For examples of RACF commands that connect IDs to RACF groups, and for a description of the connection and sign-on procedures, see Chapter 11, Controlling access to a DB2 subsystem, on page 231.
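For example, a minimal sketch of one such RACF command, using the IDs from this scenario:

CONNECT PGMR01 GROUP(DEVGROUP)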
With that privilege, any member of the RACF group DEVGROUP can bind plans and packages that are to be owned by DEVGROUP. Any member of the group can rebind a plan or package that is owned by DEVGROUP. The following graphic shows the BINDADD privilege granted to the group:
RACF group ID: DEVGROUP
Privilege: BINDADD
The Software Support department proceeds to create and test the program.
RACF group ID: PRODCTN
Privileges: BINDADD; CREATE IN collection BOWLS; privileges on SQL objects referenced in the application
BINDER can bind plans and packages for owner PRODCTN because it is a member of the RACF PRODCTN group. PKA01, the package administrator for BOWLS, can grant the CREATE IN privilege with the following statement:
GRANT CREATE IN COLLECTION BOWLS TO PRODCTN;
With the plan in place, the database administrator at Spiffy wants to make the PROGRAM1 plan available to all employees by issuing the following statement:
GRANT EXECUTE ON PLAN PROGRAM1 TO PUBLIC;
More than one ID has the authority or privileges that are necessary to issue this statement. For example, ADMIN has SYSADM authority and can grant the EXECUTE privilege. Also, BINDER (or any other members of the PRODCTN group) can set CURRENT SQLID to PRODCTN, which owns PROGRAM1, and issue the statement. When EXECUTE is granted to PUBLIC, other IDs do not need any explicit authority on T1. Finally, the plan to display bowling scores at Spiffy Computer Company is complete. The production plan, PROGRAM1, is created, and all IDs have the authority to execute the plan.
PKA01, who has PACKADM authority, grants the required privileges to DEVGROUP by issuing this statement. 3. Bind the SQL statements in PROGRAM1 as a package. After the steps are completed, the package owner can issue the following statement:
GRANT EXECUTE ON PACKAGE BOWLS.PROGRAM1 TO PUBLIC;
Any system that is connected to the original DB2 location can then run PROGRAM1 and execute the package by using DRDA access. Restriction: If the remote system is another DB2, a plan must be bound there that includes the package in its package list.
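A hedged sketch of such a bind at the remote DB2, where LOCALDB2 is a hypothetical location name for the subsystem that holds the package and PROGPLAN is a hypothetical plan name:

BIND PLAN(PROGPLAN) PKLIST(LOCALDB2.BOWLS.PROGRAM1)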
The preceding steps represent a simplified scenario that focuses on granting appropriate privileges and authorities. In practice, you would also need to consider questions like these:
v Is the performance of a remote query acceptable for this application?
v If other DBMSs are not DB2 subsystems, will the non-SQL portions of PROGRAM1 run in their environments?
An ID with SYSADM or SYSCTRL authority can revoke a privilege that has been granted by another ID with the following statement:
REVOKE authorization-specification FROM auth-id BY auth-id
The BY clause specifies the authorization ID that originally granted the privilege. If two or more grantors grant the same privilege to an ID, executing a single REVOKE statement does not remove the privilege. To remove it, each grant of the privilege must be revoked.
The WITH GRANT OPTION clause of the GRANT statement allows an ID to pass the granted privilege to others. If the privilege is removed from the ID, its deletion can cascade to others, with side effects that are not immediately evident. For example, when a privilege is removed from authorization ID X, it is also removed from any ID to which X granted it, unless that ID also has the privilege from some other source.
Example: Suppose that DBA01 grants DBCTRL authority with the GRANT option on database DB1 to DBUTIL1. Then DBUTIL1 grants the CREATETAB privilege on DB1 to PGMR01. If DBA01 revokes DBCTRL from DBUTIL1, PGMR01 loses the CREATETAB privilege. If PGMR01 also granted the CREATETAB privilege to OPER1 and OPER2, they also lose the privilege.
Example: Suppose that PGMR01 from the preceding example created table T1 while holding the CREATETAB privilege. If PGMR01 loses the CREATETAB privilege, table T1 is not dropped, and the privileges that PGMR01 has as owner of the table are not deleted. Furthermore, the privileges that PGMR01 grants on T1 are not deleted. For example, PGMR01 can grant SELECT on T1 to OPER1 as long as PGMR01 owns the table. Even when the privilege to create the table is revoked, the table remains, the privilege remains, and OPER1 can still access T1.
Example: Consider the following REVOKE scenario:
1. Grant #1: SYSADM, SA01, grants SELECT on TABLE1 to USER01 with the GRANT option.
2. Grant #2: USER01 grants SELECT on TABLE1 to USER02 with the GRANT option.
3. Grant #3: USER02 grants SELECT on TABLE1 back to SA01.
4. USER02 then revokes SELECT on TABLE1 from SA01.
The cascade REVOKE process of Grant #3 determines whether SA01 granted SELECT to anyone else. It locates Grant #1. Because SA01 did not have SELECT from any
other source, this grant is revoked. The cascade REVOKE process then locates Grant #2 and revokes it for the same reason. In this scenario, the single REVOKE action by USER02 triggers the cascade removal of all the grants, even though SA01 has SYSADM authority; the SYSADM authority is not considered. (DB2 does not, however, cascade a revoke of SYSADM authority from the installation SYSADM authorization IDs.)
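In statement form, the scenario looks like the following sketch (TABLE1 and the authorization IDs are from the example; each statement is issued by the ID noted in the comment):

GRANT SELECT ON TABLE1 TO USER01 WITH GRANT OPTION;  -- Grant #1, issued by SA01
GRANT SELECT ON TABLE1 TO USER02 WITH GRANT OPTION;  -- Grant #2, issued by USER01
GRANT SELECT ON TABLE1 TO SA01;                      -- Grant #3, issued by USER02
REVOKE SELECT ON TABLE1 FROM SA01;                   -- issued by USER02; cascades to Grants #1 and #2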
As in the diagram, suppose that DBUTIL1 issues the GRANT statement at Time 1 and that DBUTIL2 issues the GRANT statement at Time 2. DBUTIL1 and DBUTIL2 both use the following statement to issue the grant:
GRANT CREATETAB ON DATABASE DB1 TO PGMR01 WITH GRANT OPTION;
At Time 3, PGMR01 grants the privilege to OPER1 by using the following statement:
GRANT CREATETAB ON DATABASE DB1 TO OPER1;
After Time 3, DBUTIL1's authority is revoked, along with all of the privileges and authorities that DBUTIL1 granted. However, PGMR01 also has the CREATETAB privilege from DBUTIL2, so PGMR01 does not lose the privilege. The following criteria determine whether OPER1 loses the CREATETAB privilege when DBUTIL1's authority is revoked:
v If Time 3 comes after Time 2, OPER1 does not lose the privilege. The recorded dates and times show that, at Time 3, PGMR01 could have granted the privilege entirely on the basis of the privilege that was granted by DBUTIL2. That privilege was not revoked.
v If Time 3 precedes Time 2, OPER1 does lose the privilege. The recorded dates and times show that, at Time 3, PGMR01 could have granted the privilege only on the basis of the privilege that was granted by DBUTIL1. That privilege was revoked, so the privileges that are dependent on it are also revoked.
However, you might want to revoke only privileges that are granted by a certain ID. To revoke privileges that are granted by DBUTIL1 and to leave intact the same privileges if they were granted by any other ID, use the following statement:
REVOKE CREATETAB, CREATETS ON DATABASE DB1 FROM PGMR01 BY DBUTIL1;
For stored procedures, a trigger package that refers to the stored procedure in a CALL statement can have dependencies.
For sequences, the following objects that are owned by the revokee can have dependencies:
v Triggers that contain NEXT VALUE or PREVIOUS VALUE expressions that specify a sequence
v Inline SQL routines that contain NEXT VALUE or PREVIOUS VALUE expressions that specify a sequence
One way to ensure that the REVOKE statement succeeds is to drop the object that has a dependency on the privilege. To determine which objects are dependent on which privileges before attempting the revoke, use the following SELECT statements.
For a distinct type:
v List all tables that are owned by the revokee USRT002 that contain columns that use the distinct type USRT001.UDT1:
SELECT * FROM SYSIBM.SYSCOLUMNS WHERE TBCREATOR = 'USRT002' AND TYPESCHEMA = 'USRT001' AND TYPENAME = 'UDT1' AND COLTYPE = 'DISTINCT';
v List the user-defined functions that are owned by the revokee USRT002 that contain a parameter that is defined as distinct type USRT001.UDT1:
SELECT * FROM SYSIBM.SYSPARMS WHERE OWNER = 'USRT002' AND TYPESCHEMA = 'USRT001' AND TYPENAME = 'UDT1' AND ROUTINETYPE = 'F';
v List the stored procedures that are owned by the revokee USRT002 that contain a parameter that is defined as distinct type USRT001.UDT1:
SELECT * FROM SYSIBM.SYSPARMS WHERE OWNER = 'USRT002' AND TYPESCHEMA = 'USRT001' AND TYPENAME = 'UDT1' AND ROUTINETYPE = 'P';
v List the sequences that are defined using the distinct type USRT001.UDT1:
SELECT SYSIBM.SYSSEQUENCES.SCHEMA, SYSIBM.SYSSEQUENCES.NAME
  FROM SYSIBM.SYSSEQUENCES, SYSIBM.SYSDATATYPES
  WHERE SYSIBM.SYSSEQUENCES.DATATYPEID = SYSIBM.SYSDATATYPES.DATATYPEID
    AND SYSIBM.SYSDATATYPES.SCHEMA = 'USRT001'
    AND SYSIBM.SYSDATATYPES.NAME = 'UDT1';
For a user-defined function: v List the user-defined functions that are owned by the revokee USRT002 that are sourced on user-defined function USRT001.SPECUDF1:
SELECT * FROM SYSIBM.SYSROUTINES
  WHERE OWNER = 'USRT002' AND
        SOURCESCHEMA = 'USRT001' AND
        SOURCESPECIFIC = 'SPECUDF1' AND
        ROUTINETYPE = 'F';
v List the views that are owned by the revokee USRT002 that use user-defined function USRT001.SPECUDF1:
SELECT * FROM SYSIBM.SYSVIEWDEP
  WHERE DCREATOR = 'USRT002' AND
        BSCHEMA = 'USRT001' AND
        BNAME = 'SPECUDF1' AND
        BTYPE = 'F';
v List the tables that are owned by the revokee USRT002 that use user-defined function USRT001.A_INTEGER in a check constraint or user-defined default clause:
SELECT * FROM SYSIBM.SYSCONSTDEP WHERE DTBCREATOR = 'USRT002' AND BSCHEMA = 'USRT001' AND BNAME = 'A_INTEGER' AND BTYPE = 'F';
v List the trigger packages that are owned by the revokee USRT002 that use user-defined function USRT001.UDF4:
SELECT * FROM SYSIBM.SYSPACKDEP WHERE DOWNER = 'USRT002' AND BQUALIFIER = 'USRT001' AND BNAME = 'UDF4' AND BTYPE = 'F';
For a JAR (Java class for a routine): List the routines that are owned by the revokee USRT002 and that use a JAR named USRT001.SPJAR:
SELECT * FROM SYSIBM.SYSROUTINES
  WHERE OWNER = 'USRT002' AND
        JARSCHEMA = 'USRT001' AND
        JAR_ID = 'SPJAR';
For a stored procedure that is used in a trigger package: List the trigger packages that are owned by the revokee USRT002 and that refer to the stored procedure USRT001.WLMLOCN2:
SELECT * FROM SYSIBM.SYSPACKDEP WHERE DOWNER = 'USRT002' AND BQUALIFIER = 'USRT001' AND BNAME = 'WLMLOCN2' AND BTYPE = 'O';
For a sequence:
v List the trigger packages that are owned by the revokee USRT002 and that use the sequence USRT001.SEQ1:
SELECT * FROM SYSIBM.SYSPACKDEP
  WHERE BNAME = 'SEQ1' AND
        BQUALIFIER = 'USRT001' AND
        BTYPE = 'Q' AND
        DOWNER = 'USRT002' AND
        DTYPE = 'T';
v List the inline SQL routines that are owned by the revokee USRT002 and that use the sequence USRT001.SEQ1:
SELECT * FROM SYSIBM.SYSSEQUENCESDEP
  WHERE DCREATOR = 'USRT002' AND
        DTYPE = 'F' AND
        BNAME = 'SEQ1' AND
        BSCHEMA = 'USRT001';
view, but not necessarily any privileges on the base table. If SYSADM is revoked from IDADM, the SELECT privilege on TABLX is gone and the view is dropped.
If one ID creates a view for another ID, the catalog table SYSIBM.SYSTABAUTH needs either one or two rows to record the associated privileges. The number of rows that DB2 uses to record the privilege is determined by the following criteria:
v If IDADM creates a view for OPER when OPER has enough privileges to create the view by itself, only one row is inserted in SYSTABAUTH. The row shows only that OPER granted the required privileges.
v If IDADM creates a view for OPER when OPER does not have enough privileges to create the view by itself, two rows are inserted in SYSTABAUTH. One row shows IDADM as GRANTOR and OPER as GRANTEE of the SELECT privilege. The other row shows any other privileges that OPER might have on the view because of privileges that are held on the base table.
Invalidated and inoperative application plans and packages: If the owner of an application plan or package loses a privilege that is required by the plan or package, and the owner does not have that privilege from another source, DB2 invalidates the plan or package.
Example: Suppose that OPER2 has the SELECT and INSERT privileges on table T1 and creates a plan that uses SELECT, but not INSERT. When privileges are revoked from OPER2, the plan is affected in the following ways:
v If the SELECT privilege is revoked, DB2 invalidates the plan.
v If the INSERT privilege is revoked, the plan is unaffected.
v If the revoked privilege was EXECUTE on a user-defined function, DB2 marks the plan or package inoperative instead of invalid.
Implications for caching: If authorization data is cached for a package and an ID loses EXECUTE authority on the package, that ID is removed from the cache. Similarly, if authorization data is cached for routines, a revoke or cascaded revoke of EXECUTE authority on a routine, or on all routines in a schema (schema.*), from any ID causes the ID to be removed from the cache. If authorization data is cached for plans, a revoke of EXECUTE authority on the plan from any ID causes the authorization cache to be invalidated.
If an application is caching dynamic SQL statements, and a privilege is revoked that was needed when the statement was originally prepared and cached, that statement is removed from the cache. Subsequent PREPARE requests for that statement do not find it in the cache and therefore execute a full PREPARE. If the plan or package is bound with KEEPDYNAMIC(YES), which means that the application does not need to explicitly re-prepare the statement after a commit operation, you might get an error on an OPEN, DESCRIBE, or EXECUTE of that statement following the next commit operation. The error can occur because DB2 performs an implicit prepare operation; if you no longer have sufficient authority for the prepare, the OPEN, DESCRIBE, or EXECUTE request fails.
Revoking SYSADM from installation SYSADM: If you revoke SYSADM authority from an ID with installation SYSADM authority, DB2 does not cascade the revoke. Therefore, you can change the ID that holds installation SYSADM authority, or delete extraneous IDs with SYSADM authority, without triggering the cascading revokes that these processes would otherwise cause.
To change the ID that holds installation SYSADM authority, perform the following steps (the GRANT and REVOKE steps are shown in the sketch after these procedures):
1. Select a new ID that you want to grant installation SYSADM authority.
2. Grant SYSADM authority to the ID that you selected.
3. Revoke SYSADM authority from the ID that currently holds installation SYSADM authority.
4. Update the SYSTEM ADMIN 1 or SYSTEM ADMIN 2 field on installation panel DSNTIPP with the new ID to which you want to grant installation SYSADM authority.
To delete extraneous IDs with SYSADM authority, perform the following steps:
1. Write down the ID that currently holds installation SYSADM authority.
2. Change the authority of the ID that you want to delete from SYSADM to installation SYSADM. You can change the authority by updating the SYSTEM ADMIN 1 or SYSTEM ADMIN 2 field on installation panel DSNTIPP. Replace the ID that you wrote down in step 1 with the ID that you want to delete.
3. Revoke SYSADM authority from the ID that you want to delete.
4. Change the SYSTEM ADMIN 1 or SYSTEM ADMIN 2 field on installation panel DSNTIPP back to its original value.
For more information on installation panel DSNTIPP, see DB2 Installation Guide.
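A minimal sketch of the GRANT and REVOKE steps of the first procedure, using hypothetical IDs NEWADM and OLDADM (the panel update in step 4 is done through DSNTIPP, not through SQL):

GRANT SYSADM TO NEWADM;     -- step 2: give SYSADM authority to the new ID
REVOKE SYSADM FROM OLDADM;  -- step 3: revoke from the current installation SYSADM ID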
For descriptions of the columns of each table, see Appendix F of DB2 SQL Reference.
Periodically, you should compare the list of IDs that is retrieved by this SQL code with the following lists:
v Lists of users from subsystems that connect to DB2 (such as IMS, CICS, and TSO)
v Lists of RACF groups
v Lists of users from other DBMSs that access your DB2
If DB2 lists IDs that do not exist elsewhere, you should revoke their privileges.
You can query the catalog to find duplicate grants on objects. If multiple grants clutter your catalog, consider eliminating unnecessary grants. You can use the following SQL statement to retrieve duplicate grants on plans:
SELECT GRANTEE, NAME, COUNT(*)
  FROM SYSIBM.SYSPLANAUTH
  GROUP BY GRANTEE, NAME
  HAVING COUNT(*) > 2
  ORDER BY 3 DESC;
This statement orders the duplicate grants by frequency, so that you can easily identify the most duplicated grants. Similar statements for other catalog tables can retrieve information about multiple grants on other types of objects.
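For example, a similar sketch for table and view privileges, patterned on the plan query above (SYSIBM.SYSTABAUTH records table and view grants):

SELECT GRANTEE, TCREATOR, TTNAME, COUNT(*)
  FROM SYSIBM.SYSTABAUTH
  GROUP BY GRANTEE, TCREATOR, TTNAME
  HAVING COUNT(*) > 2
  ORDER BY 4 DESC;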
To retrieve all IDs that can change the sample employee table (IDs with administrative authorities and IDs to which authority is explicitly granted), issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'EMP' AND TCREATOR = 'DSN8810' AND GRANTEETYPE = ' '
    AND (ALTERAUTH <> ' ' OR DELETEAUTH <> ' ' OR
         INSERTAUTH <> ' ' OR UPDATEAUTH <> ' ')
UNION
SELECT GRANTEE FROM SYSIBM.SYSUSERAUTH
  WHERE SYSADMAUTH <> ' '
UNION
SELECT GRANTEE FROM SYSIBM.SYSDBAUTH
  WHERE DBADMAUTH <> ' ' AND NAME = 'DSN8D81A';
To retrieve the columns of DSN8810.EMP for which update privileges have been granted on a specific set of columns, issue the following statement:
SELECT DISTINCT COLNAME, GRANTEE, GRANTEETYPE FROM SYSIBM.SYSCOLAUTH WHERE CREATOR='DSN8810' AND TNAME='EMP' ORDER BY COLNAME;
The character in the GRANTEETYPE column shows whether the privileges have been granted to an authorization ID (blank) or are used by an application plan or package (P). To retrieve the IDs that have been granted the privilege of updating one or more columns of DSN8810.EMP, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH WHERE TTNAME = 'EMP' AND TCREATOR='DSN8810' AND GRANTEETYPE=' ' AND UPDATEAUTH <> ' ';
The query returns only the IDs to which update privileges have been specifically granted. It does not return IDs that have the privilege because of SYSADM or DBADM authority. You could include them by forming a union with additional queries, as shown in the following example:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'EMP' AND TCREATOR = 'DSN8810'
    AND GRANTEETYPE = ' ' AND UPDATEAUTH <> ' '
UNION
SELECT GRANTEE FROM SYSIBM.SYSUSERAUTH
  WHERE SYSADMAUTH <> ' '
UNION
SELECT GRANTEE FROM SYSIBM.SYSDBAUTH
  WHERE DBADMAUTH <> ' ' AND NAME = 'DSN8D81A';
You can write a similar statement to retrieve the IDs that are authorized to access a user-defined function. To retrieve the IDs that are authorized to access user-defined function UDFA in schema SCHEMA1, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSROUTINEAUTH WHERE SPECIFICNAME='UDFA' AND SCHEMA='SCHEMA1' AND GRANTEETYPE=' ' AND ROUTINETYPE ='F';
To retrieve the tables, views, and aliases that PGMR001 owns, issue the following statement:
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR = 'PGMR001';
The preceding query does not distinguish between plans and packages. To identify a package, use the COLLID column of table SYSTABAUTH, which names the collection in which a package resides and is blank for a plan. A plan or package can refer to the table indirectly, through a view. To find all views that refer to the table, perform the following steps:
1. Issue the following query:
SELECT DISTINCT DNAME FROM SYSIBM.SYSVIEWDEP WHERE BTYPE = 'T' AND BCREATOR = 'DSN8810' AND BNAME = 'EMP';
2. Write down the names of the views that satisfy the query. These values are instances of DNAME_list. 3. Find all plans and packages that refer to those views by issuing a series of SQL statements. For each instance of DNAME_list, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH WHERE GRANTEETYPE = 'P' AND TCREATOR = 'DSN8810' AND TTNAME = DNAME_list;
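For example, if DNAME_list includes the sample view VEMP (a hypothetical result here), the statement becomes:

SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
  WHERE GRANTEETYPE = 'P'
    AND TCREATOR = 'DSN8810'
    AND TTNAME = 'VEMP';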
The keyword USER in that statement is equal to the value of the primary authorization ID. To include tables that can be read by a secondary ID, set the current SQLID to that secondary ID before querying the view. To make the view available to every ID, issue the following GRANT statement:
GRANT SELECT ON MYSELECTS TO PUBLIC;
Similar views can show other privileges. This view shows privileges over columns:
CREATE VIEW MYCOLS (OWNER, TNAME, CNAME, REMARKS, LABEL) AS SELECT DISTINCT TBCREATOR, TBNAME, NAME, REMARKS, LABEL FROM SYSIBM.SYSCOLUMNS, SYSIBM.SYSTABAUTH WHERE TCREATOR = TBCREATOR AND TTNAME = TBNAME AND GRANTEETYPE = ' ' AND GRANTEE IN (USER,'PUBLIC',CURRENT SQLID,'PUBLIC*');
Multilevel security
Important: The following information about multilevel security is specific to DB2. It does not describe all aspects of multilevel security. However, this specific information assumes that you have general knowledge of multilevel security. Before implementing multilevel security on your DB2 subsystem, read z/OS Planning for Multilevel Security and the Common Criteria.
Security labels
Multilevel security restricts access to an object or a row based on the security label of the object or row and the security label of the user. For local connections, the security label of the user is the security label that the user specified during sign-on. This security label is associated with the DB2 primary authorization ID and accessed from the RACF ACEE control block. If no security label is specified during sign-on, the security label is the user's default security label.
For TCP/IP connections, the security label of the user can be defined by the security zone. IP addresses are grouped into security zones on the DB2 server. Users that come in on an IP address have the security label that is associated with the security zone that the IP address is grouped under. For SNA connections, the default security label for the user is used instead of the security label that the user signed on with.
Security labels are based on security levels and security categories. You can use the Common Criteria (COMCRIT) environments subsystem parameter to require that all tables created in the subsystem are defined with security labels. When defining security labels, do not include national characters, such as @, #, and $. Use of these characters in security labels may cause CCSID conversions to terminate abnormally.
Security levels: Along with security categories, hierarchical security levels are used as a basis for mandatory access checking decisions. When you define the security level of an object, you define the degree of sensitivity of that object. Security levels ensure that an object of a certain security level is protected from access by a user of a lower security level. For information about defining security levels, see z/OS Security Server RACF Security Administrator's Guide.
Security categories: Security categories are the non-hierarchical basis for mandatory access checking decisions. When making security decisions, mandatory access control checks whether one set of security categories includes the security categories that are defined in a second set of security categories.
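For example, a hedged sketch of RACF commands that define a security level, a security category, and a security label built from them, and then activate the SECLABEL class. The names (SECRET, PROJECTA, LABELHI) and the level number are hypothetical; follow the conventions in z/OS Security Server RACF Security Administrator's Guide:

RDEFINE SECDATA SECLEVEL ADDMEM(SECRET/200)
RDEFINE SECDATA CATEGORY ADDMEM(PROJECTA)
RDEFINE SECLABEL LABELHI SECLEVEL(SECRET) ADDCATEGORY(PROJECTA)
SETROPTS CLASSACT(SECLABEL) RACLIST(SECLABEL)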
Discretionary access checking: Once the user passes the mandatory access check, a discretionary check follows. The discretionary access check restricts access to objects based on the identity of a user and the groups to which the user belongs. The discretionary access check ensures that the user is identified as having a need to know for the requested resource. The check is discretionary because a user with a certain access permission is capable of passing that permission to any other user.
Example: Suppose that the security level secret for the security label HIGH is greater than the security level sensitive for the security label MEDIUM. Also, suppose that the security label HIGH includes the security categories Project_A, Project_B, and Project_C, and that the security label MEDIUM includes the security categories Project_A and Project_B. The security label HIGH dominates the security label MEDIUM because both conditions for dominance are true.
Example: Suppose that the security label HIGH includes the security categories Project_A, Project_B, and Project_C, and that the security label MEDIUM includes the security categories Project_A and Project_Z. In this case, the security label HIGH does not dominate the security label MEDIUM because the set of security categories that define the security label HIGH does not contain the security category Project_Z. The security labels are disjoint.
Write-down control
Mandatory access checking prevents users from declassifying information by not allowing a user to write to an object unless the security label of the user and the security label of the object are equivalent. You can override this security feature, known as write-down control, for specific users by granting the write-down privilege to those users.

Example: Suppose that user1 has a security label of HIGH and that row_x has a security label of MEDIUM. Because the security label of the user and the security label of the row are not equivalent, user1 cannot write to row_x. Therefore, write-down control prevents user1 from declassifying the information that is in row_x.

Example: Suppose that user2 has a security label of MEDIUM and that row_x has a security label of MEDIUM. Because the security label of the user and the security label of the row are equivalent, user2 can read from and write to row_x. However, user2 cannot change the security label for row_x unless user2 has the write-down privilege. Therefore, write-down control prevents user2 from declassifying the information that is in row_x.

Granting write-down privilege: To grant the write-down privilege to users, you need to define a profile and then allow users to access the profile. Example: To grant the write-down privilege to users, perform the following steps: 1. Define a profile. The following RACF command defines the IRR.WRITEDOWN.BYUSER profile:
RDEFINE FACILITY IRR.WRITEDOWN.BYUSER UACC(NONE)
2. Allow specified users to access the profile. The following RACF command allows a group of users to access the IRR.WRITEDOWN.BYUSER profile:
PERMIT IRR.WRITEDOWN.BYUSER ID(USRT051 USRT052 USRT054 USRT056 +
  USRT058 USRT060 USRT062 USRT064 USRT066 USRT068 USRT041) +
  ACCESS(UPDATE) CLASS(FACILITY)
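If the FACILITY class is RACLISTed at your installation (a common setup, though not shown in the steps above), the new profile and access list typically do not take effect until you refresh the class:

SETROPTS RACLIST(FACILITY) REFRESH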
External access control and multilevel security with row-level granularity In this combination, external access control (such as the RACF access control module) is used for authorization at the DB2 object level. External access control also uses security labels to perform mandatory access checking on DB2 objects as part of multilevel security. Multilevel security is also implemented on the row level within DB2. For information about the access control authorization exit, see Access control authorization exit routine on page 1065 and DB2 RACF Access Control Module Guide.
Example: A collection should dominate a package.

Example: A subsystem should dominate a database. That database should dominate a table space. That table space should dominate a table. That table should dominate a column.

Example: If a view is based on a single table, the table should dominate the view. However, if a view is based on multiple tables, the view should dominate the tables.

Recommendation: Define the security label SYSMULTI for DB2 subsystems that are accessed by users with different security labels and for tables that require row-level granularity.

2. Define security labels and associate users with the security labels in RACF. If you are using a TCP/IP connection, you need to define security labels in RACF for the security zones into which IP addresses are grouped. These IP addresses represent remote users. Recommendation: Give users with SYSADM, SYSCTRL, and SYSOPR authority the security label of SYSHIGH.

3. Activate the SECLABEL class in RACF. If you want to enforce write-down control, turn on write-down control in RACF. (A sketch of the RACF commands for steps 2 and 3 follows this list.)

4. Install the external security access control authorization exit routine (DSNX@XAC), such as the RACF access control module.
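The following RACF commands are a rough sketch of steps 2 and 3. The user ID is illustrative, and the sketch assumes that the underlying SECDATA level and category definitions already exist; see z/OS Planning for Multilevel Security for the full procedure.

ALTUSER USRT051 SECLABEL(SYSHIGH)
SETROPTS CLASSACT(SECLABEL) RACLIST(SECLABEL)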
v If write-down control is not enabled, all users with valid security labels are equivalent to users with the write-down privilege.

Requirement: You must have z/OS Version 1 Release 5 or later to use DB2 authorization with multilevel security with row-level granularity.

Defining multilevel security on tables: You can use multilevel security with row-level checking to control table access by creating or altering a table to have a column with the AS SECURITY LABEL attribute. Tables with multilevel security in effect can be dropped by using the DROP TABLE statement. Users must have a valid security label to execute CREATE TABLE, ALTER TABLE, and DROP TABLE statements on tables with multilevel security enabled. For information about defining security labels and enabling the security label class, see z/OS Planning for Multilevel Security.

Indexing the security label column: The performance of tables that you create and alter can suffer if the security label column is not included in indexes. The security label column is used whenever a table with multilevel security enabled is accessed. Therefore, include the security label column in indexes on the table; if you do not, you cannot maintain index-only access. (An index sketch follows the CREATE TABLE example below.)

CREATE TABLE: When a user with a valid security label creates a table, the user can implement row-level security by including a security label column. The security label column can have any name, but it must be defined as CHAR(8) and NOT NULL WITH DEFAULT. It also must be defined with the AS SECURITY LABEL clause. Example: To create a table that is named TABLEMLS1 and that has row-level security enabled, issue the following statement:
CREATE TABLE TABLEMLS1
  (EMPNO     CHAR(6)      NOT NULL,
   EMPNAME   VARCHAR(20)  NOT NULL,
   DEPTNO    VARCHAR(5),
   SECURITY  CHAR(8)      NOT NULL WITH DEFAULT AS SECURITY LABEL,
   PRIMARY KEY (EMPNO))
  IN DSN8D71A.DSN8S71D;
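In line with the indexing recommendation above, an index on this table should include the SECURITY column so that index-only access remains possible. A minimal sketch; the index name is hypothetical:

-- Hypothetical index name; includes the security label column SECURITY
CREATE INDEX XTABLEMLS1
  ON TABLEMLS1 (EMPNO, SECURITY);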
After the user specifies the AS SECURITY LABEL clause on a column, users can indicate the security label for each row by entering values in that column. When a user creates a table and includes a security label column, SYSIBM.SYSTABLES indicates that the table has row-level security enabled. After a user creates a table with a security label column, the security on the table cannot be disabled; the table must be dropped and re-created to remove this protection.

ALTER TABLE: A user with a valid security label can implement row-level security on an existing table by adding a security label column to the existing table. The security label column can have any name, but it must be defined as CHAR(8) and NOT NULL WITH DEFAULT. It also must be defined with the AS SECURITY LABEL clause. Example: Suppose that the table EMP does not have row-level security enabled. To alter EMP so that it has row-level security enabled, issue the following statement:
ALTER TABLE EMP ADD SECURITY CHAR(8) NOT NULL WITH DEFAULT AS SECURITY LABEL;
After a user specifies the AS SECURITY LABEL clause on a column, row-level security is enabled on the table and cannot be disabled. The security label for
existing rows in the table at the time of the alter is the same as the security label of the user that issued the ALTER TABLE statement.

Important: Plans, packages, and dynamic statements are invalidated when a table is altered to add a security label column.

DROP TABLE: When a user with a valid security label drops a table that has row-level security in effect, the system generates an audit record. Row-level security does not affect the success of a DROP statement; the user's privilege on the table determines whether the statement succeeds. For more information about these SQL statements and multilevel security, see the DB2 SQL Reference.
v Field procedures, edit procedures, validation procedures, and multilevel security on page 206
v Triggers and multilevel security on page 206

For information about defining security labels and enabling the security label class, see z/OS Planning for Multilevel Security.
Table 52. Sample data from DSN8710.EMP (continued)

EMPNO   LASTNAME  WORKDEPT  SECURITY
000190  BROWN     D11       HIGH
000200  JONES     D11       MEDIUM
000210  LUTZ      D11       LOW
000330  LEE       E21       MEDIUM
Now, suppose that Alan, Beth, and Carlos each submit the following SELECT statement:
SELECT LASTNAME FROM EMP ORDER BY LASTNAME;
Because Alan has the security label HIGH, he receives the following result:
BROWN
HAAS
JONES
LEE
LUTZ
Because Beth has the security label MEDIUM, she receives the following result:
HAAS
JONES
LEE
LUTZ
Beth does not see BROWN in her result set because the row with that information has a security label of HIGH. Because Carlos has the security label LOW, he receives the following result:
HAAS
LUTZ
Carlos does not see BROWN, JONES, or LEE in his result set because the rows with that information have security labels that dominate Carlos's security label. Although Beth and Carlos do not receive the full result set for the query, DB2 does not return an error code to Beth or Carlos.
Now, suppose that Alan, Beth, and Carlos each submit the following INSERT statement:
INSERT INTO DSN8710.EMP(EMPNO, LASTNAME, WORKDEPT, SECURITY) VALUES('099990', 'SMITH', 'C01', 'MEDIUM');
Because Alan does not have the write-down privilege, Alan cannot choose the security label of the row that he inserts. Therefore, DB2 ignores the security label of MEDIUM that is specified in the statement. The security label of the row becomes HIGH because Alan's security label is HIGH.

Because Beth has the write-down privilege on the table, she can specify the security label of the new row. In this case, the security label of the new row is MEDIUM. If Beth submits a similar INSERT statement that specifies a value of LOW for the security column, the security label for the row becomes LOW.

Because Carlos does not have the write-down privilege, Carlos cannot choose the security label of the row that he inserts. Therefore, DB2 ignores the security label of MEDIUM that is specified in the statement. The security label of the row becomes LOW because Carlos's security label is LOW.

Considerations for INSERT from a fullselect: For statements that insert the result of a fullselect, DB2 does not return an error code if the fullselect contains a table with a security label column.

Considerations for SELECT...FROM...INSERT statements: For statements that insert rows and select the inserted rows, DB2 avoids returning some error codes. If the fullselect includes a table with a security label column and the object of the insert does not contain a security label column, DB2 does not return an error code. If the user has the write-down privilege or write-down control is not in effect, the security label of the user might not dominate the security label of the row; the INSERT statement succeeds in this case, but the inserted row is not returned.

Considerations for INSERT with subselect: If you insert data into a table that does not have a security label column, but a subselect in the INSERT statement does include a table with a security label column, row-level checking is performed for the subselect. However, the inserted rows are not stored with a security label column.
If the user has the write-down privilege or write-down control is not enabled, the row is updated and the user can set the security label of the row to any valid security label. If the user does not have the write-down privilege and write-down control is enabled, the row is not updated.
v If the security label of the row dominates the security label of the user, the row is not updated.

Example: Suppose that Alan has a security label of HIGH and write-down privilege defined in RACF, that Beth has a security label of MEDIUM and write-down privilege defined in RACF, and that Carlos has a security label of LOW. Write-down control is enabled. Suppose that DSN8710.EMP contains the data that is shown in Table 53 and that the SECURITY column has been declared with the AS SECURITY LABEL clause.
Table 53. Sample data from DSN8710.EMP

EMPNO   LASTNAME  WORKDEPT  SECURITY
000190  BROWN     D11       HIGH
000200  JONES     D11       MEDIUM
000210  LUTZ      D11       LOW
Now, suppose that Alan, Beth, and Carlos each submit the following UPDATE statement:
UPDATE DSN8710.EMP
  SET WORKDEPT='X55', SECURITY='MEDIUM'
  WHERE WORKDEPT='D11';
Because Alan has a security label that is equivalent to the security label of the row with HIGH security, the update on that row succeeds. Because Alan has a security label that dominates the rows with security labels of MEDIUM and LOW, his write-down privilege determines whether these rows are updated. Alan has the write-down privilege that is required to set the security label to any value, so the update succeeds for these rows, and the security label for all of the rows becomes MEDIUM. The results of Alan's update are shown in Table 54.
Table 54. Sample data from DSN8710.EMP after Alan's update

EMPNO   LASTNAME  WORKDEPT  SECURITY
000190  BROWN     X55       MEDIUM
000200  JONES     X55       MEDIUM
000210  LUTZ      X55       MEDIUM
Because the row with the security label of HIGH dominates Beth's security label, the update fails for that row, which causes the entire update to fail. Because the rows with the security labels of MEDIUM and HIGH dominate Carlos's security label, the update fails for those rows, which causes the entire update to fail. Recommendation: To avoid failed updates, qualify the rows that you want to update with the following predicate, for the security label column SECLABEL:
WHERE SECLABEL=GETVARIABLE('SYSIBM.SECLABEL');
Using this predicate avoids failed updates because it ensures that the user's security label is equivalent to the security label of the rows that DB2 attempts to update.
Now, suppose that Alan, Beth, and Carlos each submit the following DELETE statement:
DELETE FROM DSN8710.EMP
  WHERE WORKDEPT='D11';
Because Alan has a security label that dominates the rows with security labels of MEDIUM and LOW, his write-down privilege determines whether these rows are deleted. Alan does not have the write-down privilege, so the delete fails for these rows. Because Alan has a security label that is equivalent to the security label of the row with HIGH security, the delete on that row succeeds. The results of Alan's delete are shown in Table 56.
Table 56. Sample data from DSN8710.EMP after Alan's delete

EMPNO   LASTNAME  WORKDEPT  SECURITY
000200  JONES     D11       MEDIUM
000210  LUTZ      D11       LOW
Because Beth has a security label that dominates the row with a security label of LOW, her write-down privilege determines whether this row is deleted. Beth has the write-down privilege, so the delete succeeds for this row. Because Beth has a
security label that is equivalent to the security label of the row with MEDIUM security, the delete succeeds for that row. Because the row with the security label of HIGH dominates Beth's security label, the delete fails for that row. The results of Beth's delete are shown in Table 57.
Table 57. Sample data from DSN8710.EMP after Beth's delete

EMPNO   LASTNAME  WORKDEPT  SECURITY
000190  BROWN     D11       HIGH
Because Carlos's security label is LOW, the delete fails for the rows with security labels of MEDIUM and HIGH. Because Carlos has a security label that is equivalent to the security label of the row with LOW security, the delete on that row succeeds. The results of Carlos's delete are shown in Table 58.
Table 58. Sample data from DSN8710.EMP after Carlos's delete

EMPNO   LASTNAME  WORKDEPT  SECURITY
000190  BROWN     D11       HIGH
000200  JONES     D11       MEDIUM
Important: Do not omit the WHERE clause from DELETE statements. If you omit the WHERE clause, row-level checking still occurs for every row that has a security label, and this checking can have a negative impact on performance. For more information about these SQL statements and multilevel security, see the DB2 SQL Reference.
security label of the user does not dominate the security label of the row, the row is not unloaded and DB2 does not issue an error message.
v For jobs with the DISCARD option, a qualifying row is discarded only if the user has the write-down privilege and the security label of the user dominates the security label of the row.

For more information about utilities and multilevel security, see DB2 Utility Guide and Reference.
Alternatively, you can create views that give each user access only to the rows that match that user's security label. To do that, retrieve the value of the SYSIBM.SECLABEL session variable, and create a view that includes only the rows whose security label column matches the session variable value. Example: To allow access only to the rows that match the user's security label, use the following CREATE statement:
CREATE VIEW V2 AS
  SELECT * FROM ORDER
  WHERE SECURITY=GETVARIABLE('SYSIBM.SECLABEL');
Recommendation: Use an unrestricted stack for DB2. An unrestricted stack is configured with an ID that is defined with a security label of SYSMULTI. A single z/OS system can concurrently run a mix of restricted and unrestricted stacks. Unrestricted stacks allow DB2 to use any security label to open sockets.

All users on a TCP/IP connection have the security label that is associated with the IP address that is defined on the server. If a user requires a different security label, the user must enter through an IP address that has that security label associated with it. If you require multiple IP addresses on a remote z/OS server, a workstation, or a gateway, you can configure multiple virtual IP addresses. This strategy can increase the number of security labels that are available on a client.

Remote users that access DB2 by using a TCP/IP network connection use the security label that is associated with the RACF SERVAUTH class profile when the remote user is authenticated. Security labels are assigned to the database access thread when the DB2 server authenticates the remote server by using the RACROUTE REQUEST = VERIFY service.
Important: Built-in encryption functions work for data that is stored within a DB2 subsystem and is retrieved from within that same DB2 subsystem. The encryption functions do not work for data that is passed into and out of a DB2 subsystem. That task is handled by DRDA data encryption, which is separate from the built-in data encryption functions.
Therefore, define the column for encrypted data as VARCHAR(32) FOR BIT DATA. If you use a password hint, DB2 requires an additional 32 bytes to store the hint. Example: Suppose that you have non-encrypted data in a column that is defined as VARCHAR(10). Use the following calculation to determine the column definition for storing the data in encrypted format with a password hint:
Maximum length of non-encrypted data        10 bytes
Number of bytes to the next multiple of 8    6 bytes
24 bytes for encryption key                 24 bytes
32 bytes for password hint                  32 bytes
                                            --------
Encrypted data column length                72 bytes
Therefore, define the column for encrypted data as VARCHAR(72) FOR BIT DATA. For more information about encryption password hints, see Using password hints with column-level encryption on page 210 and Using password hints with value-level encryption on page 211.
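As an illustration of this calculation, the following sketch stores a 10-byte value in a 72-byte encrypted column with a password hint. The table and column names are hypothetical; the password and hint are the ones used in the hint example later in this section:

-- Hypothetical table; 72 bytes per the calculation above
CREATE TABLE CUSTACCT
  (ACCTDATA VARCHAR(72) FOR BIT DATA);
SET ENCRYPTION PASSWORD = 'Tahoe' WITH HINT 'Ski Holiday';
INSERT INTO CUSTACCT VALUES (ENCRYPT('1234567890'));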
passwords and password hints, the security of the encrypted data can be compromised in the DB2 catalog and in a trace report.

ENCRYPT
  Indicates which column or columns require encryption. DB2 sets the password on the indicated data to the password that DB2 holds at the time a statement with the ENCRYPT keyword is issued.

DECRYPT_BIT, DECRYPT_CHAR, DECRYPT_DB
  Checks for the correct password and decrypts data when the data is selected. For more information about the different decryption functions, see DB2 SQL Reference.

When encrypted data is selected, DB2 must hold the same password that was held at the time of encryption to decrypt the data. To ensure that DB2 holds the correct password, issue a SET ENCRYPTION PASSWORD statement with the correct password immediately before selecting encrypted data.

Example: Suppose that you need to create an employee table EMP that contains employee ID numbers in encrypted format. Suppose also that you want to set the password for all rows in an encrypted column to the host variable hv_pass. Finally, suppose that you want to select employee ID numbers in decrypted format. Perform the following steps:

1. Create the EMP table with the EMPNO column. The EMPNO column must be defined with the VARCHAR data type, must be defined FOR BIT DATA, and must be long enough to hold the encrypted data. The following statement creates the EMP table:
CREATE TABLE EMP (EMPNO VARCHAR(32) FOR BIT DATA);
2. Set the encryption password. The following statement sets the encryption password to the host variable :hv_pass:
SET ENCRYPTION PASSWORD = :hv_pass;
3. Use the ENCRYPT keyword to insert encrypted data into the EMP table by issuing the following statements:
INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('47138')); INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('99514')); INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('67391'));
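4. Select the employee ID numbers in decrypted format. The selection step is presumably along the following lines; when DECRYPT_CHAR is invoked without an explicit password, DB2 uses the encryption password that it currently holds:

SELECT DECRYPT_CHAR(EMPNO) FROM EMP;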
If you provide the correct password, DB2 returns the employee ID numbers in decrypted format.
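The numbered steps that follow belong to a view-based variant of this example whose first step is not shown here. That step presumably creates a view, named CLR_EMP in step 3 below, that returns the decrypted values, roughly as follows:

CREATE VIEW CLR_EMP (EMPNO) AS
  SELECT DECRYPT_CHAR(EMPNO) FROM EMP;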
2. Set the encryption password, so that the fullselect in the view definition can retrieve decrypted data. Use the following statement:
SET ENCRYPTION PASSWORD = :hv_pass;
3. Select the desired data from the view by using the following statement:
SELECT EMPNO FROM CLR_EMP;
Example: Suppose that the EMPNO column in the EMP table contains encrypted data and that you submitted a password hint when you inserted the data. Suppose that you cannot remember the encryption password for the data. Use the following statement to return the password hint:
SELECT GETHINT(EMPNO) FROM EMP;
Before the application displays the credit card number for a customer, the customer must enter the password. The application retrieves the credit card number by using the following statement:
SELECT DECRYPT_CHAR(CCN, :userpswd) FROM CUSTOMER WHERE NAME = :custname;
If the customer requests a hint about the password, the following query is used:
SELECT GETHINT(CCN) INTO :pswdhint FROM CUSTOMER WHERE NAME = :custname;
The value for pswdhint is set to 'Ski Holiday' and returned to the customer. Ideally, the hint helps the customer remember the password 'Tahoe'.
In both cases, they are not equal. Example: However, if you use a < predicate to compare these values in encrypted format, you receive a different result than you do if you compare the two values in decrypted format:
Decrypted: 1234 < 5678   True
Encrypted: H71G < BF62   False
To ensure that predicates such as >, <, and LIKE return accurate results, you must first decrypt the data.
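For example, to apply a range predicate to the encrypted CCN column from the earlier customer example, a sketch such as the following decrypts the values before comparing them (the comparison value is arbitrary and illustrative):

SELECT NAME
  FROM CUSTOMER
  -- Decrypt first so that < compares the plaintext values
  WHERE DECRYPT_CHAR(CCN, :userpswd) < '5500000000000000';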
1. Create a table to store the encrypted values and set the column-level encryption password by using the following statements:
CREATE TABLE ETEMP (C1 VARCHAR(124) FOR BIT DATA);
SET ENCRYPTION PASSWORD = :hv_pass;
2. Cast, encrypt, and insert the timestamp data by using the following statement:
INSERT INTO ETEMP VALUES ENCRYPT(CHAR(CURRENT TIMESTAMP));
3. Recast, decrypt, and select the timestamp data by using the following statement:
SELECT TIMESTAMP(DECRYPT_CHAR(C1)) FROM ETEMP;
Example: Next, suppose that one employee can work on multiple projects, and that you want to insert employee and project data into the table. To set the encryption password and insert data into the tables, use the following statements:
SET ENCRYPTION PASSWORD = :hv_pass;
SELECT EMPNO INTO :hv_enc_val
  FROM FINAL TABLE (INSERT INTO EMP
    VALUES (ENCRYPT('A7513'),'Super Prog'));
INSERT INTO EMPPROJ VALUES (:hv_enc_val,'UDDI Project');
INSERT INTO EMPPROJ VALUES (:hv_enc_val,'DB2 UDB Version 10');
SELECT EMPNO INTO :hv_enc_val
  FROM FINAL TABLE (INSERT INTO EMP
    VALUES (ENCRYPT('4NF18'),'Novice Prog'));
INSERT INTO EMPPROJ VALUES (:hv_enc_val,'UDDI Project');
You can improve the performance of INSERT statements by avoiding unnecessary repetition of encryption processing. Note how the host variable hv_enc_val is assigned in the SELECT INTO statement and then used in the subsequent INSERT statements. If you need to insert a large number of rows that contain the same encrypted value, repetitive encryption processing can degrade performance; encrypting the data once, storing the encrypted value in a host variable, and inserting the host variable improves performance dramatically.

Example: Next, suppose that you want to find the programmers who are working on the UDDI Project. Consider the following pair of SELECT statements:
v Poor performance: The following query shows how not to write the query for good performance:
SELECT A.NAME, DECRYPT_CHAR(A.EMPNO)
  FROM EMP A, EMPPROJ B
  WHERE DECRYPT_CHAR(A.EMPNO) = DECRYPT_CHAR(B.EMPNO)
  AND B.PROJECT ='UDDI Project';
Although the preceding query returns the correct results, it decrypts every EMPNO value in the EMP table and every EMPNO value in the EMPPROJ table where PROJECT = 'UDDI Project' to perform the join. For large tables, this unnecessary decryption is a significant performance problem.
v Good performance: The following query produces the same result as the preceding query, but with significantly better performance. To find the programmers who are working on the UDDI Project, use the following statement:
SELECT A.NAME, DECRYPT_CHAR(A.EMPNO)
  FROM EMP A, EMPPROJ B
  WHERE A.EMPNO = B.EMPNO
  AND B.PROJECT ='UDDI Project';
Example: Next, suppose that you want to find the projects that the programmer with employee ID A7513 is working on. Consider the following pair of SELECT statements: v Poor performance: The following query requires DB2 to decrypt every EMPNO value in the EMPPROJ table to perform the join:
SELECT PROJECT FROM EMPPROJ WHERE DECRYPT_CHAR(EMPNO) = 'A7513';
v Good performance: The following query encrypts the literal value in the predicate so that DB2 can compare it to encrypted values that are stored in the EMPNO column without decrypting the whole column. To find the projects that the programmer with employee ID A7513 is working on, use the following statement:
SELECT PROJECT FROM EMPPROJ WHERE EMPNO = ENCRYPT('A7513');
Note: Data definition control support also controls COMMENT statements and LABEL statements. The statements in Table 59 are a subset of statements that are referred to as data definition language. In this chapter, data definition language refers only to this subset of statements. For information about how to impose several degrees of control over data definition in applications and objects, see Controlling data definition on page 218.

This chapter provides information on the following topics:
v Registration tables on page 216 describes the columns of the ART and ORT.
v Controlling data definition on page 218 describes various methods for controlling data definition.
v Managing the registration tables and their indexes on page 226 describes how to create and manage the ART and ORT.
Registration tables
If you use data definition control support, you must create and maintain the application registration table (ART) and the object registration table (ORT). You register plans and package collections in the ART. You register the objects that are associated with the plans and collections in the ORT. DB2 consults these two registration tables before accepting a data definition statement from a process. If the registration tables indicate that the process is not allowed to create, alter, or drop a particular object, DB2 does not allow it. Columns of the ART and Columns of the ORT on page 217 describe the columns of the two registration tables. For more information about maintaining the ART and the ORT, see Managing the registration tables and their indexes on page 226.
Table 60. Columns of the ART (continued)

Column name          Description
APPLIDENTTYPE        Indicates whether APPLIDENT names an application plan (P) or a package collection (C).
APPLICATIONDESC      Optional data. Provides a more meaningful description of each application than the eight-byte APPLIDENT column can contain.
DEFAULTAPPL          Indicates whether all data definition language should be accepted from this application.
QUALIFIEROK          Indicates whether the application can supply a missing name part for objects that are named in the ORT. Applies only if REQUIRE FULL NAMES = NO.
CREATOR (1, 2)       Optional data. Indicates the authorization ID that created the row.
CREATETIMESTAMP (1)  Optional data. Indicates when a row was created. If you use CURRENT TIMESTAMP, DB2 automatically enters the value of CURRENT TIMESTAMP when you load or insert a row.
CHANGER (1, 2)       Optional data. Indicates the authorization ID that last changed the row.
CHANGETIMESTAMP (1)  Optional data. Indicates when a row was changed. If you use CURRENT TIMESTAMP, DB2 automatically enters the value of CURRENT TIMESTAMP when you update a row.
Table 60 notes:
1. Optional columns are for administrator use. DB2 does not use these columns.
2. Because the CREATOR and CHANGER columns are CHAR(26), they are large enough for a three-part authorization ID. Separate each 8-byte part of the ID with a period in byte 9 and in byte 18. If you enter only the primary authorization ID, consider entering it right-justified in the field (that is, preceded by 18 blanks).

Table 61. Columns of the ORT (continued)

Column name          Description
APPLICATIONDESC      Optional data. Provides a more meaningful description of each application than the eight-byte APPLIDENT column can contain.
CREATOR (1, 2)       Optional data. Indicates the authorization ID that created the row.
CREATETIMESTAMP (1)  Optional data. Indicates when a row was created. If you use CURRENT TIMESTAMP, DB2 automatically enters the value of CURRENT TIMESTAMP when you load or insert a row.
CHANGER (1, 2)       Optional data. Indicates the authorization ID that last changed the row.
CHANGETIMESTAMP (1)  Optional data. Indicates when a row was changed. If you use CURRENT TIMESTAMP, DB2 automatically enters the value of CURRENT TIMESTAMP when you update a row.

Notes:
1. Optional columns are for administrator use. DB2 does not use these columns.
2. Because the CREATOR and CHANGER columns are CHAR(26), they are large enough for a three-part authorization ID. Separate each 8-byte part of the ID with a period in byte 9 and in byte 18. If you enter only the primary authorization ID, consider entering it right-justified in the field (that is, preceded by 18 blanks).
Enter data below:

1 INSTALL DD CONTROL SUPT. ===> NO        YES - activate the support
                                          NO - omit DD control support
2 CONTROL ALL APPLICATIONS ===> NO        YES or NO
3 REQUIRE FULL NAMES       ===> YES       YES or NO
4 UNREGISTERED DDL DEFAULT ===> ACCEPT    Action for unregistered DDL:
                                          ACCEPT - allow it
                                          REJECT - prohibit it
                                          APPL   - consult ART
5 ART/ORT ESCAPE CHARACTER ===>           Used in ART/ORT Searches
6 REGISTRATION OWNER       ===> DSNRGCOL  Qualifier for ART and ORT
7 REGISTRATION DATABASE    ===> DSNRGFDB  Database name
8 APPL REGISTRATION TABLE  ===> DSN_REGISTER_APPL  Table name
9 OBJT REGISTRATION TABLE  ===> DSN_REGISTER_OBJT  Table name

Note: ART = Application Registration Table
      ORT = Object Registration Table

PRESS: ENTER to continue   RETURN to exit   HELP for more information
After you specify values on the installation panel, you enter the appropriate information in the ART and ORT to enable data definition control support. This section explains what values to enter on the DSNTIPZ installation panel and what data to enter in the ART and ORT. This section first explains the basic installation options in Installing data definition control support on page 219. Then, the section explains the specific options for the following four methods of controlling data definition control support:
v Controlling data definition by application name on page 220 describes the simplest of the four methods for implementing data definition control.
v Controlling data definition by application name with exceptions on page 221 describes how to give one or more applications almost total control over data definition.
v Controlling data definition by object name on page 222 describes how to register all of the objects in a subsystem and have several applications control specific sets of objects.
v Controlling data definition by object name with exceptions on page 224 describes how to control registered and unregistered objects.

Finally, the section describes the optional task of registering sets of objects in Registering sets of objects on page 225.

Recommendation: As you read through the relevant sections in this chapter, use the template in Figure 17 on page 219 to record the values that you specify on the DSNTIPZ installation panel.
DSNTIPZ
===>

Enter data below:

1 INSTALL DD CONTROL SUPT. ===>           YES - activate the support
                                          NO - omit DD control support
2 CONTROL ALL APPLICATIONS ===>           YES or NO
3 REQUIRE FULL NAMES       ===>           YES or NO
4 UNREGISTERED DDL DEFAULT ===>           Action for unregistered DDL:
                                          ACCEPT - allow it
                                          REJECT - prohibit it
                                          APPL   - consult ART
5 ART/ORT ESCAPE CHARACTER ===>           Used in ART/ORT Searches
6 REGISTRATION OWNER       ===>           Qualifier for ART and ORT
7 REGISTRATION DATABASE    ===>           Database name
8 APPL REGISTRATION TABLE  ===>           Table name
9 OBJT REGISTRATION TABLE  ===>           Table name

Note: ART = Application Registration Table
      ORT = Object Registration Table

PRESS: ENTER to continue   RETURN to exit   HELP for more information
2. Enter the names for the registration tables in your DB2 subsystem, their owners, and the databases in which they reside for options 6, 7, 8, and 9 on the DSNTIPZ installation panel. The default names are as follows:
6 REGISTRATION OWNER       ===> DSNRGCOL
7 REGISTRATION DATABASE    ===> DSNRGFDB
8 APPL REGISTRATION TABLE  ===> DSN_REGISTER_APPL
9 OBJT REGISTRATION TABLE  ===> DSN_REGISTER_OBJT
You can accept the default names or assign names of your own. If you specify your own table names, each name can have a maximum of 17 characters. This chapter uses the default names. 3. If you want to use the percent character (%) or the underscore character (_) as a regular character in the ART or ORT, enter an escape character for option 5 on the DSNTIPZ installation panel. You can use any special character other than underscore or percent as the escape character. Example: To use the pound sign (#) as an escape character, fill in option 5 as follows:
5 ART/ORT ESCAPE CHARACTER ===> #
After you specify the pound sign as an escape character, the pound sign can be used in names in the same way that an escape character is used in an SQL LIKE predicate. For more information about escape characters and the percent and underscore characters, see DB2 SQL Reference. 4. Register plans, packages, and objects in the ART and ORT, and enter values for the three other options on the DSNTIPZ installation panel as follows:
2 CONTROL ALL APPLICATIONS ===>
3 REQUIRE FULL NAMES       ===>
4 UNREGISTERED DDL DEFAULT ===>
Choose the values to enter and the plans, packages, and objects to register based on the control method that you plan to use:
v If you want to control data definition by application name, perform the steps in one of the following sections:
  - For registered applications that have total control over all data definition language in the DB2 subsystem, see Controlling data definition by application name.
  - For registered applications that have total control with some exceptions, see Controlling data definition by application name with exceptions on page 221.
  If you also want to control data definition by registering sets of objects, perform the steps in Registering sets of objects on page 225.
v If you want to control data definition by object name, perform the steps in one of the following sections:
  - For subsystems in which all objects are registered and controlled by name, see Controlling data definition by object name on page 222. If you also want to control data definition by registering sets of objects, perform the steps in Registering sets of objects on page 225.
  - For subsystems in which some specific objects are registered and controlled, and data definition language is accepted for objects that are not registered, see Controlling data definition by object name with exceptions on page 224. If you also want to control data definition by registering sets of objects, perform the steps in Registering sets of objects on page 225.
When you specify YES, only package collections or plans that are registered in the ART are allowed to use data definition statements.

2. In the ART, register all package collections and plans that you will allow to issue DDL statements, and enter the value Y in the DEFAULTAPPL column for these package collections. You must supply values for the APPLIDENT, APPLIDENTTYPE, and DEFAULTAPPL columns of the ART. You can enter information in other columns for your own use as indicated in Table 60 on page 216. (A sketch of the registration INSERT statements follows this example.)

Example: Suppose that you want all data definition language in your subsystem to be issued only through certain applications. The applications are identified by the following application plan names, collection-IDs, and patterns:

PLANA   The name of an application plan
PACKB   The collection-ID of a package
TRULY%  A pattern name for any plan name beginning with TRULY
TR%     A pattern name for any plan name beginning with TR
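The following INSERT statements are a rough sketch of how such registrations might be entered, assuming the default owner and table names from the DSNTIPZ panel; your ART definition might include additional columns:

INSERT INTO DSNRGCOL.DSN_REGISTER_APPL
  (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL)
  VALUES ('PLANA', 'P', 'Y');
INSERT INTO DSNRGCOL.DSN_REGISTER_APPL
  (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL)
  VALUES ('PACKB', 'C', 'Y');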
An inactive table entry: If the row with TR% for APPLIDENT in Table 62 contains the value Y for DEFAULTAPPL, any plan with a name beginning with TR can execute data definition language. If DEFAULTAPPL is later changed to N to disallow that use, the changed row does not prevent plans beginning with TR from using data definition language; the row merely fails to allow that specific use. In this case, the plan TRXYZ is not allowed to use data definition language. However, the plan TRULYXYZ is allowed to use data definition language, by the row with TRULY% specified for APPLIDENT.
When you specify NO, you allow unregistered applications to use data definition statements on some objects.

2. On the DSNTIPZ installation panel, specify the following for option 4:
4 UNREGISTERED DDL DEFAULT ===> APPL
When you specify APPL, you restrict the use of data definition statements for objects that are not registered in the ORT. If an object is registered in the ORT, any applications that are not registered in the ART can use data definition language on the object. However, if an object is not registered in the ORT, only applications that are registered in the ART can use data definition language on the object.

3. In the ART, register package collections and plans that you will allow to issue data definition statements on any object. Enter the value Y in the DEFAULTAPPL column for these package collections. Applications that are registered in the ART retain almost total control over data definition. Objects that are registered in the ORT are the only exceptions. See step 2 of Controlling data definition by application name on page 220 for more information about registering package collections and plans in the ART.

4. In the ORT, register all objects that are exceptions to the subsystem data definition control that you defined in the ART. You must supply values for the QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, and
APPLIDENTTYPE columns of the ORT. You can enter information in other columns of the ORT for your own use as indicated in Table 61 on page 217.

Example: Suppose that you want almost all of the data definition language in your subsystem to be issued only through an application plan (PLANA) and a package collection (PACKB). Table 63 shows the entries that you need in your ART.
Table 63. Table DSN_REGISTER_APPL for total subsystem control with exceptions

APPLIDENT  APPLIDENTTYPE  DEFAULTAPPL
PLANA      P              Y
PACKB      C              Y
However, you also want the following specific exceptions:
v Object KIM.VIEW1 can be created, altered, or dropped by the application plan PLANC.
v Object BOB.ALIAS can be created, altered, or dropped only by the package collection PACKD.
v Object FENG.TABLE2 can be created, altered, or dropped by any plan or package collection.
v Objects with names that begin with SPIFFY.MSTR and exactly one following character can be created, altered, or dropped by any plan that matches the name pattern TRULY%. For example, the plan TRULYJKL can create, alter, or drop the object SPIFFY.MSTRA.

Table 64 shows the entries that are needed to register these exceptions in the ORT.
Table 64. Table DSN_REGISTER_OBJT for subsystem control with exceptions

QUALIFIER  NAME    TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
KIM        VIEW1   C     Y             PLANC      P
BOB        ALIAS   C     Y             PACKD      C
FENG       TABLE2  C     N
SPIFFY     MSTR_   C     Y             TRULY%     P
You can register objects in the ORT individually, or you can register sets of objects. For information about registering sets of objects, see Registering sets of objects on page 225.
When you specify NO, you allow unregistered applications to use data definition statements on some objects.

2. On the DSNTIPZ installation panel, fill in option 4 as follows:
4 UNREGISTERED DDL DEFAULT ===> REJECT
When you specify REJECT for option 4, you totally restrict the use of data definition statements for objects that are not registered in the ORT. Therefore, no application can use data definition language for any unregistered object.

3. In the ORT, register all of the objects in the subsystem, and enter Y in the APPLMATCHREQ column. You must supply values for the QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, and APPLIDENTTYPE columns of the ORT. You can enter information in other columns of the ORT for your own use as indicated in Table 61 on page 217.

4. In the ART, register any plan or package collection that can use a set of objects that you register in the ORT with an incomplete name. Enter the value Y in the QUALIFIEROK column. These plans or package collections can use data definition language on sets of objects regardless of whether a set of objects has a value of Y in the APPLMATCHREQ column.

Example: Table 65 on page 224 shows entries in the ORT for a DB2 subsystem that contains the following objects that are controlled by object name:
v Two storage groups (STOG1 and STOG2) and a database (DATB1) that are not controlled by a specific application. These objects can be created, altered, or dropped by a user with the appropriate authority by using any application, such as SPUFI or QMF.
v Two table spaces (TBSP1 and TBSP2) that are not controlled by a specific application. Their names are qualified by the name of the database in which they reside (DATB1).
v Three objects (OBJ1, OBJ2, and OBJ3) whose names are qualified by the authorization IDs of their owners. Those objects might be tables, views, indexes, synonyms, or aliases. Data definition statements for OBJ1 and OBJ2 can be issued only through the application plan named PLANX. Data definition statements for OBJ3 can be issued only through the package collection named PACKX.
v Objects that match the qualifier pattern E%D and the name OBJ4 can be created, altered, or dropped by the application plan SPUFI. For example, the objects EDWARD.OBJ4, ED.OBJ4, and EBHARD.OBJ4 can be created, altered, or dropped by the application plan SPUFI. The entry E%D in the QUALIFIER column represents all three objects.
v Objects with names that begin with TRULY.MY_, where the underscore character is actually part of the name. Assuming that you specify # as the escape character, all of the objects with this name pattern can be created, altered, or dropped only by plans with names that begin with TRULY.

Assume the following installation option:
3 REQUIRE FULL NAMES ===> YES
Entries in Table 65 on page 224 do not specify incomplete names. Hence, objects that are not represented in the table cannot be created in the subsystem, except by an ID with installation SYSADM authority.
Table 65. Table DSN_REGISTER_OBJT for total control by object

QUALIFIER  NAME   TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
           STOG1  S     N
           STOG2  S     N
           DATB1  D     N
DATB1      TBSP1  T     N
DATB1      TBSP2  T     N
KIM        OBJ1   C     Y             PLANX      P
FENG       OBJ2   C     Y             PLANX      P
QUENTIN    OBJ3   C     Y             PACKX      C
E%D        OBJ4   C     Y             SPUFI      P
TRULY      MY#_%  C     Y             TRULY%     P
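A rough sketch of how one of these rows might be inserted, again assuming the default registration table and owner names:

INSERT INTO DSNRGCOL.DSN_REGISTER_OBJT
  (QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, APPLIDENTTYPE)
  VALUES ('KIM', 'OBJ1', 'C', 'Y', 'PLANX', 'P');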
You can register objects in the ORT individually, or you can register sets of objects. For information about registering sets of objects, see Registering sets of objects on page 225.
When you specify NO, you allow unregistered applications to use data definition statements on some objects.

2. On the DSNTIPZ installation panel, fill in option 4 as follows:
4 UNREGISTERED DDL DEFAULT ===> ACCEPT
This option does not restrict the use of data definition statements for objects that are not registered in the ORT. Therefore, any application can use data definition language for any unregistered object.

3. Register all controlled objects in the ORT. Use a name and qualifier to identify a single object. Use only one part of a two-part name to identify a set of objects that share just that part of the name. For each controlled object, use APPLMATCHREQ = Y. Enter the name of the plan or package collection that controls the object in the APPLIDENT column.

4. For each set of controlled objects (identified by only a simple name in the ORT), register the controlling application in the ART. You must supply values for the APPLIDENT, APPLIDENTTYPE, and QUALIFIEROK columns of the ART.

Example: The following two tables assume that the installation option REQUIRE FULL NAMES is set to NO, as described in Registering sets of objects on page 225. Table 66 on page 225 shows entries in the ORT for the following controlled objects:
v The objects KIM.OBJ1, FENG.OBJ2, QUENTIN.OBJ3, and EDWARD.OBJ4, all of which are controlled by PLANX or PACKX, as described under Controlling data definition by object name on page 222. DB2 cannot interpret the object names as incomplete names because the applications that control them, PLANX and PACKX, are registered in Table 67 with QUALIFIEROK=N.
v Two sets of objects, *.TABA and *.TABB, which are controlled by PLANA and PACKB, respectively.
Table 66. Table DSN_REGISTER_OBJT for object control with exceptions

QUALIFIER  NAME  TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
KIM        OBJ1  C     Y             PLANX      P
FENG       OBJ2  C     Y             PLANX      P
QUENTIN    OBJ3  C     Y             PACKX      C
EDWARD     OBJ4  C     Y             PACKX      C
           TABA  C     Y             PLANA      P
           TABB  C     Y             PACKB      C
In this situation, with the combination of installation options shown previously, any application can use data definition language for objects that are not covered by entries in the ORT. For example, if HOWARD has the CREATETAB privilege, HOWARD can create the table HOWARD.TABLE10 through any application. You can register objects in the ORT individually, or you can register sets of objects. For information about registering sets of objects, see Registering sets of objects.
The default value YES requires you to use both parts of the name for each registered object. If you specify the value NO, an incomplete name in the ORT
represents a set of objects that all share the same value for one part of a two-part name. Objects that are represented by incomplete names in the ORT require an authorizing entry in the ART.

Example: If you specify NO for option 3, you can include entries with incomplete names in the ORT. Table 68 shows entries in the ORT for the following objects:
v Two sets of objects, *.TABA and *.TABB, which are controlled by PLANX and PACKY, respectively. Only PLANX can create, alter, or drop any object whose name is *.TABA. Only PACKY can create, alter, or drop any object whose name is *.TABB. PLANX and PACKY must also be registered in the ART with QUALIFIEROK set to Y, as shown in Table 69. That setting allows the applications to use sets of objects that are registered in the ORT with an incomplete name.
v Tables, views, indexes, or aliases with names like SYSADM.*.
v Table spaces with names like DBSYSADM.*; that is, table spaces in database DBSYSADM.
v Tables with names like USER1.* and tables with names like *.TABLEX.
Table 68. Table DSN_REGISTER_OBJT for objects with incomplete names

QUALIFIER  NAME    TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
           TABA    C     Y             PLANX      P
           TABB    C     Y             PACKY      C
SYSADM             C     N
DBSYSADM           T     N
USER1              C     N
           TABLEX  C     N
ART entries for objects with incomplete names in the ORT: Because APPLMATCHREQ=N, the objects SYSADM.*, DBSYSADM.*, USER1.*, and *.TABLEX can be created, altered, or dropped by any package collection or application plan. However, the collection or plan that creates, alters, or drops such an object must be registered in the ART with QUALIFIEROK=Y to allow it to use incomplete object names. Table 69 shows that PLANA and PACKB are registered in the ART to use sets of objects that are registered in the ORT with incomplete names.
Table 69. Table DSN_REGISTER_APPL for plans that use sets of objects

APPLIDENT  APPLIDENTTYPE  DEFAULTAPPL  QUALIFIEROK
PLANA      P              N            Y
PACKB      C              N            Y
CREATE UNIQUE INDEX DSNRGCOL.DSN_REGISTER_APPLI
  ON DSNRGCOL.DSN_REGISTER_APPL
  (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL DESC, QUALIFIEROK DESC)
  CLUSTER;
You can alter these CREATE statements in the following ways:
v Add columns to the ends of the tables
v Assign an auditing status
v Choose buffer pool or storage options for indexes
v Declare table check constraints to limit the types of entries that are allowed
v Indexes of the registration tables that are defined during installation
v Table spaces that contain the registration tables that are defined during installation
v The database that contains the registration tables that are defined during installation
If you want to use a table space with a different name or different attributes, you can modify job DSNTIJSG before installing DB2. Alternatively, you can drop the table space and re-create it, the two tables, and their indexes.
Adding columns
You can add columns to either registration table for your own use, by using the ALTER TABLE statement. If you add columns, the additional columns must come at the end of the table, after existing columns. Recommendation: Use a special character, such as the plus sign (+), in your column names to avoid possible conflict. If IBM adds columns to the ART or the ORT in future releases, the column names will contain only letters and numbers.
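A rough sketch of adding such a column, assuming the default ART name; the column name is hypothetical, and because it contains a plus sign it must be written as a delimited identifier:

-- "NOTES+" is a hypothetical administrator-use column
ALTER TABLE DSNRGCOL.DSN_REGISTER_APPL
  ADD "NOTES+" VARCHAR(254);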
v If the ID is the current SQL ID, the primary ID, or any secondary ID of the executing process, the ID can bypass data definition control through a dynamic ALTER or DROP statement.
Processing connections
A connection request makes a new connection to DB2; it does not reuse an application plan that is already allocated. Therefore, an essential step in processing the request is to check that the ID is authorized to use DB2 resources, as shown in Figure 18.
Step 1: Obtain primary ID
2. RACF is called through the z/OS system authorization facility (SAF) to check whether the ID that is associated with the address space is authorized to use the following resources:
v The DB2 resource class (CLASS=DSNR)
v The DB2 subsystem (SUBSYS=ssnm)
v The requested connection type

For instructions on authorizing the use of these resources, see Permitting RACF access on page 267. The SAF return code (RC) from the invocation determines the next step, as follows:
v If RC > 4, RACF determined that the RACF user ID is not valid or does not have the necessary authorization to access the resource name. DB2 rejects the request for a connection.
v If RC = 4, the RACF return code is checked. If the RACF return code is 4, the resource name is not defined to RACF, and DB2 rejects the request with reason code X'00F30013'. For instructions on defining the resource name, see Defining DB2 resources to RACF on page 265. If the RACF return code is not 4, RACF is not active. DB2 continues with the next step, but the connection request and the user are not verified.
v If RC = 0, RACF is active and has verified the RACF user ID; DB2 continues with the next step.

3. If RACF is active and has verified the RACF user ID, DB2 runs the connection exit routine. To use DB2 secondary IDs, you must replace the exit routine. See Supplying secondary IDs for connection requests on page 234. If you do not want to use secondary IDs, do nothing. The IBM-supplied default connection exit routine continues the connection processing. The process has the following effects:
v The DB2 primary authorization ID is set based on the following rules:
  - If a value for the initial primary authorization ID exists, the value becomes the DB2 primary ID.
  - If no value exists (the value is blank), the primary ID is set by default, as shown in Table 71 on page 234.
Table 71. Sources of default authorization identifiers

Source                                             Default primary authorization ID
TSO                                                TSO logon ID
BATCH                                              USER parameter on JOB statement
Started task, or batch job with no USER parameter  Default authorization ID set when DB2 was installed (UNKNOWN AUTHID on installation panel DSNTIPP)
Remote request                                     None. The user ID is required and is provided by the DRDA requester.
v The SQL ID is set equal to the primary ID.
v No secondary IDs exist.

If you want to use secondary IDs, see the description in Supplying secondary IDs for connection requests. Of course, you can also replace the exit routine with one that provides different default values for the DB2 primary ID. If you have written such a routine for an earlier release of DB2, it will probably work for this release with no change.
Installation job DSNTIJEX replaces the default connection exit routine with the sample connection exit routine; for more information, see Part 2 of DB2 Installation Guide. The sample connection exit routine has the following effects:
v The sample connection exit routine sets the DB2 primary ID in the same way that the default routine sets the DB2 primary ID, and according to the following rules:
  - If the initial primary ID is not blank, the initial ID becomes the DB2 primary ID.
  - If the initial primary ID is blank, the sample routine provides the same default value as does the default routine.
If the sample routine cannot find a nonblank primary ID, DB2 uses the default ID (UNKNOWN AUTHID) from the DSNTIPP installation panel. In this case, no secondary IDs are supplied.
v The sample connection exit routine sets the SQL ID based on the following criteria:
  - The routine sets the SQL ID to the TSO data set name prefix in the TSO user profile table if the following conditions are true:
    - The connection request is from a TSO-managed address space, including the call attachment facility, the TSO foreground, and the TSO background.
    - The TSO data set name prefix is equal to the primary ID or one of the secondary IDs.
  - In all other cases, the routine sets the SQL ID equal to the primary ID.
v The secondary authorization IDs depend on RACF options:
  - If RACF is not active, no secondary IDs exist.
  - If RACF is active but its list of groups option is not active, one secondary ID exists (the default connected group name) if the attachment facility supplied the default connected group name.
  - If RACF is active and the list of groups option is active, the routine sets the list of DB2 secondary IDs to the list of group names to which the RACF user ID is connected. Those RACF user IDs that are in REVOKE status do not become DB2 secondary IDs. The maximum number of groups is 1012. The list of group names is obtained from RACF and includes the default connected group name.

If the default connection exit routine and the sample connection exit routine do not provide the flexibility and features that your subsystem requires, you can write your own exit routine. For instructions on writing your own exit routine, see Appendix B, Writing exit routines, on page 1055.
For instructions on changing the sample sign-on exit routine, see Sample connection and sign-on routines on page 1056.
Processing sign-ons
For requests from IMS dependent regions, CICS transaction subtasks, or RRS connections, the initial primary ID is not obtained until just before allocating a plan for a transaction. A new sign-on request can run the same plan without deallocating the plan and reallocating it. Nevertheless, the new sign-on request can change the primary ID. Unlike connection processing, sign-on processing does not check the RACF user ID of the address space. The steps in processing sign-ons are shown in Figure 19.
Step 1: Obtain the primary ID
that is associated with the TCB and ACEEGRPN is not null, DB2 uses ACEEGRPN to establish secondary authorization IDs.
With AUTH SIGNON, an APF-authorized program can pass a primary authorization ID for the connection. If a primary authorization ID is passed, AUTH SIGNON also uses the value that is passed in the secondary authorization ID parameter to establish secondary authorization IDs. If the primary authorization ID is not passed, but a valid ACEE is passed, AUTH SIGNON uses the value in ACEEUSRI for the primary authorization ID if ACEEUSRL is not 0. If ACEEUSRI is used for the primary authorization ID, AUTH SIGNON uses the value in ACEEGRPN as the secondary authorization ID if ACEEGRPL is not 0.
For CONTEXT SIGNON, the primary authorization ID is retrieved from data that is associated with the current RRS context using the context_key, which is supplied as input. CONTEXT SIGNON uses the CTXSDTA and CTXRDTA functions of RRS context services. An authorized function must use CTXSDTA to store a primary authorization ID prior to invoking CONTEXT SIGNON. Optionally, CTXSDTA can be used to store the address of an ACEE in the context data that has a context_key that was supplied as input to CONTEXT SIGNON. DB2 uses CTXRDTA to retrieve the context data. If an ACEE address is passed, CONTEXT SIGNON uses the value in ACEEGRPN as the secondary authorization ID if ACEEGRPL is not 0. For more information, see Part 6 of DB2 Application Programming and SQL Guide.
2. DB2 runs the sign-on exit routine. User action: To use DB2 secondary IDs, you must replace the exit routine. If you do not want to use secondary IDs, do nothing. Sign-on processing is then continued by the IBM-supplied default sign-on exit routine, which has the following effects:
   v The initial primary authorization ID remains the primary ID.
   v The SQL ID is set equal to the primary ID.
   v No secondary IDs exist.
   You can replace the exit routine with one of your own, even if it has nothing to do with secondary IDs. If you do, remember that IMS and CICS recovery coordinators, their dependent regions, and RRSAF take the exit routine only if they have provided a user ID in the sign-on parameter list.
   If you do want to use secondary IDs, see the description that follows.
v The SQL ID is made equal to the DB2 primary ID.
v The secondary authorization IDs depend on RACF options:
  - If RACF is not active, no secondary IDs exist.
  - If RACF is active but its list of groups option is not active, one secondary ID exists; it is the name passed by CICS or by IMS.
  - If RACF is active and you have selected the option for a list of groups, the routine sets the list of DB2 secondary IDs to the list of group names to which the RACF user ID is connected, up to a limit of 1012 groups. The list of group names includes the default connected group name.
v Encrypted user ID, encrypted password, encrypted new password, and encrypted security-sensitive data
Authentication is performed based on DRDA protocols, which means that the authentication tokens are sent in DRDA security flows. If you use a requester other than DB2 UDB for z/OS, refer to that product's documentation.
Detecting authorization failures (EXTENDED SECURITY): If the DB2 server is installed with YES for the EXTENDED SECURITY field of installation panel DSNTIPR, detailed reason codes are returned to a DRDA client when a DDF connection request fails because of security errors. When using SNA protocols, the requester must have included support for extended security sense codes. One such product is DB2 Connect. If the proper requester support is present, the requester generates SQLCODE -30082 (SQLSTATE '08001') with a specific indication for the failure. Otherwise, a generic security failure code is returned.
With this option, an incoming connection request is accepted if it includes any of the following authentication tokens:
v User ID only
v All authentication methods that option V supports
If the USERNAMES column of SYSIBM.LUNAMES contains I or B, RACF is not invoked to validate incoming connection requests that contain only a user ID.

ENCRYPTPSWDS CHAR(1)
   This column applies only to DB2 UDB for z/OS partners when passwords are used as authentication tokens. It indicates whether passwords received from and sent to the corresponding LUNAME are encrypted:
   Y    Yes, passwords are encrypted. For outbound requests, the encrypted password is extracted from RACF and sent to the server. For inbound requests, the password is treated as if it is encrypted.
   N    No, passwords are not encrypted. This is the default; any character other than Y is treated as N. Specify N for CONNECT statements that contain a USER parameter.
Recommendation: When you connect to a DB2 UDB for z/OS partner that is at Version 5 or a subsequent release, use RACF PassTickets (SECURITY_OUT='R') instead of using passwords.

USERNAMES CHAR(1)
   This column indicates whether an ID accompanying a remote request, sent from or to the corresponding LUNAME, is subject to translation and come-from checking. When you specify I, O, or B, use the SYSIBM.USERNAMES table to perform the translation:
   I      An inbound ID is subject to translation.
   O      An outbound ID, sent to the corresponding LUNAME, is subject to translation.
   B      Both inbound and outbound IDs are subject to translation.
   blank  No IDs are translated.
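For example, the following SQL statement defines a row for a hypothetical partner LU named LUDALLAS that requires an authentication token (SECURITY_IN='V'), exchanges encrypted passwords, and translates inbound IDs. This is a minimal sketch only; it assumes that the remaining columns of SYSIBM.LUNAMES can be left to their installation defaults.

INSERT INTO SYSIBM.LUNAMES
       (LUNAME, SECURITY_IN, ENCRYPTPSWDS, USERNAMES)
VALUES ('LUDALLAS', 'V', 'Y', 'I');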
TYPE CHAR(1)
   Indicates whether the row applies to inbound or outbound translation. The field should contain only I or O. Any other character, including blank, causes the row to be ignored.

AUTHID VARCHAR(128)
   An authorization ID that is permitted and perhaps translated. If blank, any authorization ID is permitted with the corresponding LINKNAME; all authorization IDs are translated in the same way. Outbound translation is not performed on CONNECT statements that contain an authorization ID for the value of the USER parameter.
LINKNAME CHAR(8)
   Identifies the VTAM or TCP/IP network locations that are associated with this row. A blank value in this column indicates that this name translation rule applies to any TCP/IP or SNA partner. If you specify a nonblank value for this column, one or both of the following situations must be true:
   v A row exists in table SYSIBM.LUNAMES that has an LUNAME value that matches the LINKNAME value that appears in this column.
   v A row exists in table SYSIBM.IPNAMES that has a LINKNAME value that matches the LINKNAME value that appears in this column.

NEWAUTHID VARCHAR(128)
   The translated authorization ID. If blank, no translation occurs.
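As an illustration, the following statement creates an inbound translation row; JONES, JONESX, and LUDALLAS are hypothetical names, and this sketch assumes that the remaining columns can be left blank:

INSERT INTO SYSIBM.USERNAMES
       (TYPE, AUTHID, LINKNAME, NEWAUTHID)
VALUES ('I', 'JONES', 'LUDALLAS', 'JONESX');

With this row, the inbound ID JONES, arriving from partner LUDALLAS, is translated to JONESX.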
Verifying a partner LU
RACF and VTAM carry out this check to verify the identity of an LU that sends a request to your DB2.
Recommendation: Specify partner-LU verification, which requires the following steps:
1. Code VERIFY=REQUIRED on the VTAM APPL statement when you define your DB2 to VTAM. The APPL statement is described in detail in Part 3 of DB2 Installation Guide.
2. Establish a RACF profile for each LU from which you permit a request. For the required steps, see Enable partner-LU verification on page 267.
remote attachment request is defined by Systems Network Architecture and LU 6.2 protocols; specifically, it is an SNA Function Management Header 5.) This section tells what security checks you can impose on remote attachment requests.
Conversation-level security: This section assumes that you have defined your DB2 to VTAM with the conversation-level security set to "already verified". (To do that, you coded SECACPT=ALREADYV on the VTAM APPL statement, as described in Part 3 of DB2 Installation Guide.) That value provides more options than does conversation-level security (SECACPT=CONV), which is not recommended.
Steps, tools, and decisions: The steps that an attachment request goes through before acceptance allow much flexibility in choosing security checks. Scan Figure 20 on page 245 to see what is possible. The primary tools for controlling remote attachment requests are entries in tables SYSIBM.LUNAMES and SYSIBM.USERNAMES in the communications database. You need a row in SYSIBM.LUNAMES for each system that sends attachment requests, a dummy row that allows any system to send attachment requests (an SQL sketch of such a row follows the list below), or both. You might need rows in SYSIBM.USERNAMES to permit requests from specific IDs or specific LUNAMES, or to provide translations for permitted IDs.
When planning to control remote requests, answer the questions posed by the following topics for each remote LU that can send a request.
v Do you permit access?
v Do you manage inbound IDs through DB2 or RACF?
v Do you trust the partner LU? on page 244
v If you use passwords, are they encrypted? on page 244
v If you use Kerberos, are users authenticated? on page 244
v Do you translate inbound IDs? on page 247
v How do you associate inbound IDs with secondary IDs? on page 249
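The dummy row mentioned above might be created as follows. This is a sketch only; it assumes that a blank LUNAME matches any sending system and that the remaining columns of SYSIBM.LUNAMES can take their defaults:

INSERT INTO SYSIBM.LUNAMES (LUNAME) VALUES (' ');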
To manage incoming IDs through RACF, leave USERNAMES blank for that LU (or leave the O unchanged). Requests from that LU go through connection processing, and its IDs are not subject to translation.
Figure 20. Steps in accepting a remote attachment request from a requester that is using SNA. (The diagram is not reproduced. It shows the ID and authentication checks at the DB2 server, the RACF verifications for connections and sign-ons, the search for a translation row in SYSIBM.USERNAMES, and the connection and sign-on exit routines DSN3@ATH and DSN3@SGN. The steps are described below.)
Details of remote attachment request processing:
1. If the remote request has no authentication token, DB2 checks the security acceptance option in the SECURITY_IN column of table SYSIBM.LUNAMES. No password is sent or checked for the plan or package owner that is sent from a DB2 subsystem.
2. If the acceptance option is "verify" (SECURITY_IN = V), a security token is required to authenticate the user. DB2 rejects the request if the token is missing.
3. If the USERNAMES column of SYSIBM.LUNAMES contains I or B, the authorization ID, and the plan or package owner that is sent by a DB2
subsystem, are subject to translation under control of the SYSIBM.USERNAMES table. If the request is allowed, it eventually goes through sign-on processing. If USERNAMES does not contain I or B, the authorization ID is not translated.
4. DB2 calls RACF by the RACROUTE macro with REQUEST=VERIFY to check the ID. DB2 uses the PASSCHK=NO option if no password is specified and ENCRYPT=YES if the ENCRYPTPSWDS column of SYSIBM.LUNAMES contains Y. If the ID, password, or PassTicket cannot be verified, DB2 rejects the request. In addition, depending on your RACF environment, the following RACF checks may also be performed:
   v If the RACF APPL class is active, RACF verifies that the ID has been given access to the DB2 APPL. The APPL resource that is checked is the LU name that the requester used when the attachment request was issued. This is either the local DB2 LU name or the generic LU name.
   v If the RACF APPCPORT class is active, RACF verifies that the ID is authorized to access z/OS from the Port of Entry (POE). The POE that RACF uses in the verify call is the requesting LU name.
5. The remote request is now treated like a local connection request with a DIST environment for the DSNR resource class; for details, see Processing connections on page 232. DB2 calls RACF by the RACROUTE macro with REQUEST=AUTH to check whether the authorization ID is allowed to use DB2 resources that are defined to RACF. The RACROUTE macro call also verifies that the user is authorized to use DB2 resources from the requesting system, known as the port of entry (POE); for details, see Allowing access from remote requesters on page 272.
6. DB2 invokes the connection exit routine. The parameter list that is passed to the routine describes where a remote request originated.
7. If no password exists, RACF is not called. The ID is checked in SYSIBM.USERNAMES.
8. If a password exists, DB2 calls RACF through the RACROUTE macro with REQUEST=VERIFY to verify that the ID is known with the password. ENCRYPT=YES is used if the ENCRYPTPSWDS column of SYSIBM.LUNAMES contains Y. If DB2 cannot verify the ID or password, the request is rejected.
9. DB2 searches SYSIBM.USERNAMES for a row that indicates how to translate the ID. The need for a row that applies to a particular ID and sending location imposes a come-from check on the ID: if no such row exists, DB2 rejects the request.
10. If an appropriate row is found, DB2 translates the ID as follows:
   v If a nonblank value of NEWAUTHID exists in the row, that value becomes the primary authorization ID.
   v If NEWAUTHID is blank, the primary authorization ID remains unchanged.
11. The remote request is now treated like a local sign-on request; for details, see Processing sign-ons on page 236. DB2 invokes the sign-on exit routine. The parameter list that is passed to the routine describes where a remote request originated. For details, see Connection routines and sign-on routines on page 1055.
12. The remote request now has a primary authorization ID, possibly one or more secondary IDs, and an SQL ID. A request from a remote DB2 is also known by a plan or package owner. Privileges and authorities that are granted to those IDs at the DB2 server govern the actions that the request can take.
Table 73. Your SYSIBM.USERNAMES table (continued). (Row numbers are added for reference.)

  Row  TYPE  AUTHID  LINKNAME  NEWAUTHID
  5    I     BETTY   blank     blank
DB2 searches SYSIBM.USERNAMES to determine how to translate for each of the requests that are listed in Table 74.
Table 74. How DB2 translates inbound authorization IDs

ALBERT requests from LUDALLAS
   DB2 searches for an entry for AUTHID=ALBERT and LINKNAME=LUDALLAS. DB2 finds one in row 4, so the request is accepted. The value of NEWAUTHID in that row is blank, so ALBERT is left unchanged.

BETTY requests from LUDALLAS
   DB2 searches for an entry for AUTHID=BETTY and LINKNAME=LUDALLAS; none exists. DB2 then searches for AUTHID=BETTY and LINKNAME=blank. It finds that entry in row 5, so the request is accepted. The value of NEWAUTHID in that row is blank, so BETTY is left unchanged.

CHARLES requests from LUDALLAS
   DB2 searches for AUTHID=CHARLES and LINKNAME=LUDALLAS; no such entry exists. DB2 then searches for AUTHID=CHARLES and LINKNAME=blank. The search ends at row 3; the request is accepted. The value of NEWAUTHID in that row is CHUCK, so CHARLES is translated to CHUCK.

ALBERT requests from LUSNFRAN
   DB2 searches for AUTHID=ALBERT and LINKNAME=LUSNFRAN; no such entry exists. DB2 then searches for AUTHID=ALBERT and LINKNAME=blank; again, no entry exists. Finally, DB2 searches for AUTHID=blank and LINKNAME=LUSNFRAN, finds that entry in row 1, and the request is accepted. The value of NEWAUTHID in that row is blank, so ALBERT is left unchanged.

BETTY requests from LUSNFRAN
   DB2 finds row 2, and BETTY is translated to ELIZA.

CHARLES requests from LUSNFRAN
   DB2 finds row 3 before row 1; CHARLES is translated to CHUCK.

WILBUR requests from LUSNFRAN
   No provision is made for WILBUR, but row 1 of the SYSIBM.USERNAMES table allows any ID to make a request from LUSNFRAN and to pass without translation. The acceptance level for LUSNFRAN is "already verified", so WILBUR can pass without a password check by RACF. After accessing DB2, WILBUR can use only the privileges that are granted to WILBUR and to PUBLIC (for DRDA access) or to PUBLIC AT ALL LOCATIONS (for DB2 private-protocol access).

WILBUR requests from LUDALLAS
   Because the acceptance level for LUDALLAS is "verify", as recorded in the SYSIBM.LUNAMES table, WILBUR must be known to the local RACF. DB2 searches in succession for one of the combinations WILBUR/LUDALLAS, WILBUR/blank, or blank/LUDALLAS. None of those is in the table, so the request is rejected. The absence of a row permitting WILBUR to request from LUDALLAS imposes a come-from check: WILBUR can attach from some locations (LUSNFRAN), and some IDs (ALBERT, BETTY, and CHARLES) can attach from LUDALLAS, but WILBUR cannot attach if coming from LUDALLAS.
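For reference, the complete table could be populated with SQL such as the following sketch. Row 5 appears above; rows 1 through 4 are inferred from the searches that Table 74 describes, and single-space values stand for blanks:

INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('I', ' ', 'LUSNFRAN', ' ');           -- row 1
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('I', 'BETTY', 'LUSNFRAN', 'ELIZA');   -- row 2
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('I', 'CHARLES', ' ', 'CHUCK');        -- row 3
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('I', 'ALBERT', 'LUDALLAS', ' ');      -- row 4
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('I', 'BETTY', ' ', ' ');              -- row 5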
Encryption considerations: If incoming authorization IDs are managed through DB2 and if the ICSF is installed and properly configured, you can use the DSNLEUSR stored procedure to encrypt translated authorization IDs and store them in the NEWAUTHID column of the SYSIBM.USERNAMES table. DB2 decrypts the translated authorization IDs during connection processing. For more information about the DSNLEUSR stored procedure, see Appendix J, DB2-supplied stored procedures, on page 1261.
v DRDA password encryption support. DB2 UDB for z/OS as a server supports DRDA encrypted passwords and encrypted user IDs with encrypted passwords. See Sending encrypted passwords from workstation clients on page 263 for more information.

If you use Kerberos, are users authenticated? If your distributed environment uses Kerberos to manage users and perform user authentication, DB2 UDB for z/OS can use Kerberos security services to authenticate remote users. See Establishing Kerberos authentication through RACF on page 278.

Do you translate inbound IDs? Inbound IDs are not translated when you use TCP/IP.

How do you associate inbound IDs with secondary IDs? To associate an inbound ID with secondary IDs, modify the default connection exit routine (DSN3@ATH). TCP/IP requests do not use the sign-on exit routine.
Figure 21. Steps in accepting a TCP/IP connection request from a remote user. (The diagram is not reproduced. It shows the verification of remote connections according to the TCPALVER setting, the ID check by RACF or Kerberos, and connection processing; the steps are described in the notes that follow.)
Details of steps: These notes explain the steps that are shown in Figure 21.
1. DB2 checks to see if an authentication token (RACF encrypted password, RACF PassTicket, DRDA encrypted password, or Kerberos ticket) accompanies the remote request.
2. If no authentication token is supplied, DB2 checks the TCPALVER subsystem parameter to see if DB2 accepts IDs without authentication information. If TCPALVER=NO, authentication information must accompany all requests, and DB2 rejects the request. If TCPALVER=YES, DB2 accepts the request without authentication.
3. The identity is a RACF ID that is authenticated by RACF if a password or PassTicket is provided, or the identity is a Kerberos principal that is validated by the Kerberos Security Server if a Kerberos ticket is provided. Ensure that the ID is defined to RACF in all cases. When Kerberos tickets are used, the RACF ID is derived from the Kerberos principal identity. To use Kerberos tickets, ensure that you map Kerberos principal names to RACF IDs, as described in Establishing Kerberos authentication through RACF on page 278. In addition, depending on your RACF environment, the following RACF checks may also be performed:
v If the RACF APPL class is active, RACF verifies that the ID has access to the DB2 APPL. The APPL resource that is checked is the LU name that the requester used when the attachment request was issued. This is either the local DB2 LU name or the generic LU name.
v If the RACF APPCPORT class is active, RACF verifies that the ID is authorized to access z/OS from the port of entry (POE). The POE that RACF uses in the RACROUTE VERIFY call depends on whether all of the following conditions are true:
  - The current operating system is z/OS V1.5 or later.
  - TCP/IP Network Access Control is configured.
  - The RACF SERVAUTH class is active.
  If all these conditions are true, RACF uses the remote client's POE security zone name that is defined in the TCP/IP Network Access Control file. If one or more of these conditions is not true, RACF uses the literal string TCPIP. If this is a request to change a password, the password is changed.
  For more information about the RACF SERVAUTH class, see z/OS V1.5 Security Server RACF Security Administrator's Guide. For more information about TCP/IP Network Access Control, see z/OS Communications Server: IP Configuration Guide and Allowing access from remote requesters on page 272.
4. The remote request is now treated like a local connection request (using the DIST environment for the DSNR resource class). DB2 calls RACF to check the ID's authorization against the ssnm.DIST resource.
5. DB2 invokes the connection exit routine. The parameter list that is passed to the routine describes where the remote request originated.
6. The remote request has a primary authorization ID, possibly one or more secondary IDs, and an SQL ID. (The SQL ID cannot be translated.) The plan or package owner ID also accompanies the request. Privileges and authorities that are granted to those IDs at the DB2 server govern the actions that the request can take.
ENCRYPTPSWDS CHAR(1)
   Indicates whether passwords received from and sent to the corresponding LUNAME are encrypted. This column applies only to DB2 UDB for z/OS partners when passwords are used as authentication tokens:
   Y    Yes, passwords are encrypted. For outbound requests, the encrypted password is extracted from RACF and sent to the server. For inbound requests, the password is treated as encrypted.
   N    No, passwords are not encrypted. This is the default; any character but Y is treated as N.
Recommendation: When you connect to a DB2 UDB for z/OS partner that is at Version 5 or a subsequent release, use RACF PassTickets (SECURITY_OUT='R') instead of encrypting passwords.

USERNAMES CHAR(1)
   Indicates whether an ID accompanying a remote attachment request, which is received from or sent to the corresponding LUNAME, is subject to translation and come-from checking. When you specify I, O, or B, use the SYSIBM.USERNAMES table to perform the translation:
   I    An inbound ID is subject to translation.
   O    An outbound ID, sent to the corresponding LUNAME, is subject to translation.
   B    Both inbound and outbound IDs are subject to translation.
ID that is used for an outbound request is either the DB2 user's authorization ID or a translated ID, depending on the USERNAMES column. This option indicates that the user ID and the security-sensitive data are to be encrypted. If you do not require encryption, see option A.
E    The letter E signifies the security option of user ID, password, and security-sensitive data encryption. Outbound connection requests contain an authorization ID and a password. The password is obtained from the SYSIBM.USERNAMES table. The USERNAMES column must specify O. This option indicates that the user ID, password, and security-sensitive data are to be encrypted. If you do not require security-sensitive data encryption, see option P.
P    The letter P signifies the password security option. Outbound connection requests contain an authorization ID and a password. The password is obtained from the SYSIBM.USERNAMES table. If you specify P, the USERNAMES column must specify O. If you specify P and the server supports encryption, the user ID and the password are encrypted. If the server does not support encryption, the user ID and the password are sent to the partner in clear text. If you also need to encrypt security-sensitive data, see option E.

USERNAMES CHAR(1)
   This column indicates whether an outbound request translates the authorization ID. When you specify O, use the SYSIBM.USERNAMES table to perform the translation:
   O    An outbound ID, sent to the corresponding LUNAME, is subject to translation.
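As an illustration, and assuming (as the surrounding column descriptions suggest) that these columns belong to the SYSIBM.IPNAMES table, which governs outbound TCP/IP requests, a row that requests password security (option P) with outbound translation might look like the following sketch; TCPLINK1 and remote.example.com are hypothetical names:

INSERT INTO SYSIBM.IPNAMES
       (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR)
VALUES ('TCPLINK1', 'P', 'O', 'remote.example.com');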
TYPE CHAR(1)
   Indicates whether the row applies to inbound or outbound translation. The field should contain only I or O. Any other character, including blank, causes the row to be ignored.

AUTHID VARCHAR(128)
   An authorization ID that is permitted and perhaps translated. If blank, any authorization ID is permitted with the corresponding LINKNAME, and all authorization IDs are translated in the same way.

LINKNAME CHAR(8)
   Identifies the VTAM or TCP/IP network locations that are associated with this row. A blank value in this column indicates that this name translation rule applies to any TCP/IP or SNA partner.
If you specify a nonblank value for this column, one or both of the following situations must be true:
v A row exists in table SYSIBM.LUNAMES that has an LUNAME value that matches the LINKNAME value that appears in this column.
v A row exists in table SYSIBM.IPNAMES that has a LINKNAME value that matches the LINKNAME value that appears in this column.

NEWAUTHID VARCHAR(128)
   The translated authorization ID. If blank, no translation occurs.

PASSWORD CHAR(8)
   A password that is sent with outbound requests. This password is not provided by RACF and cannot be encrypted.
name (TPN) that will allocate the conversation. A length of zero for the column indicates the default TPN. For DRDA conversations, this is the DRDA default, which is X'07F6C4C2'. For DB2 private protocol conversations, this column is not used. For an SQL/DS server, TPN should contain the resource ID of the SQL/DS machine.

DBALIAS VARCHAR(128)
   This name is used to access a remote database server. If DBALIAS is blank, the location name is used to access the remote database server. This column does not change the name of any database objects sent to the remote site that contains the location qualifier.
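For example, the following sketch maps a location name to a link name and a database alias; SANJOSE, TCPLINK1, and SJDB are hypothetical names, and the sketch assumes that the other columns of SYSIBM.LOCATIONS can take their defaults:

INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, DBALIAS)
VALUES ('SANJOSE', 'TCPLINK1', 'SJDB');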
  Request                                                             ID that accompanies the primary ID
  An SQL query, using DB2 private-protocol or DRDA-protocol access    The plan owner
  A remote BIND, COPY, or REBIND PACKAGE command                      The package owner
For DRDA, if you use the SYSIBM.USERNAMES table that contains the plan owner ID, the plan owner ID is sent to the z/OS server as part of its accounting data. If the connection is to a remote non-DB2 for z/OS server using DRDA protocol and if the outbound translation is specified, a row for the plan owner in the USERNAMES table is optional.
Figure 22. Steps in sending a request from DB2. (The diagram is not reproduced. Step 2 of the diagram shows that if outbound translation is specified, the remote primary ID is translated by using the NEWAUTHID column of SYSIBM.USERNAMES; otherwise, the remote primary ID is the same as the local primary ID.)
Details of steps in sending a request from DB2: These notes explain the steps in Figure 22.
1. The DB2 subsystem that sends the request checks whether the primary authorization ID has the privilege to execute the plan or package. DB2 determines which value in the LINKNAME column of the SYSIBM.LOCATIONS table matches either the LUNAME column in the SYSIBM.LUNAMES table or the LINKNAME column in the SYSIBM.IPNAMES table. This check determines whether SNA or TCP/IP protocols are used to carry the DRDA request. (Statements that use DB2 private protocol, not DRDA, always use SNA.)
2. When a plan is executed, the authorization ID of the plan owner is sent with the primary authorization ID. When a package is bound, the authorization ID of the package owner is sent with the primary authorization ID. If the
USERNAMES column of the SYSIBM.LUNAMES table contains O or B, or if the USERNAMES column of the SYSIBM.IPNAMES table contains O, both IDs are subject to translation under control of the SYSIBM.USERNAMES table. Ensure that these IDs are included in SYSIBM.USERNAMES, or SQLCODE -904 is issued. DB2 translates the ID as follows:
v If a nonblank value of NEWAUTHID is in the row, that value becomes the new ID.
v If NEWAUTHID is blank, the ID is not changed.
If the SYSIBM.USERNAMES table does not contain a new authorization ID to which the primary authorization ID is translated, the request is rejected with SQLCODE -904. If the USERNAMES column does not contain O or B, the IDs are not translated.
3. SECURITY_OUT is checked for outbound security options, as follows. Figure 23 on page 260 illustrates this step.
   A    Already verified. No password is sent with the authorization ID. This option is valid only if the server accepts already verified requests.
        v For SNA, the server must have specified A in the SECURITY_IN column of SYSIBM.LUNAMES.
        v For TCP/IP, the server must have specified YES in the TCP/IP ALREADY VERIFIED field of installation panel DSNTIP5.
   R    RACF PassTicket. If the primary authorization ID was translated, that translated ID is sent with the PassTicket. See Sending RACF PassTickets on page 263 for information about setting up PassTickets.
   P    Password. The outbound request must be accompanied by a password:
        v If the requester is DB2 UDB for z/OS and uses SNA protocols, passwords can be encrypted if you specify Y in the ENCRYPTPSWDS column of SYSIBM.LUNAMES. If passwords are encrypted, the password is obtained from RACF. If passwords are not encrypted, the password is obtained from the PASSWORD column of SYSIBM.USERNAMES.
        v If the requester uses TCP/IP protocols, the password is obtained from the PASSWORD column of SYSIBM.USERNAMES. If the Integrated Cryptographic Service Facility is enabled and properly configured and the server supports encryption, the password is encrypted.
        Recommendation: Use RACF PassTickets to avoid sending unencrypted passwords over the network.
   D    User ID and security-sensitive data encryption. No password is sent with the authorization ID. If the Integrated Cryptographic Service Facility (ICSF) is enabled and properly configured and the server supports encryption, the authorization ID is encrypted before it is sent. If the ICSF is not enabled or properly configured, SQL return code -904 is returned. If the server does not support encryption, SQL return code -30082 is returned.
   E    User ID, password, and security-sensitive data encryption. If the ICSF is enabled and properly configured and the server supports encryption, the password is encrypted before it is sent. If the ICSF is not enabled or properly configured, SQL return code -904 is returned. If the server does not support encryption, SQL return code -30082 is returned.
4. Send the request. See Table 76 on page 257 to determine which IDs accompany the primary authorization ID.
Figure 23. Checking the outbound security option. (The diagram is not reproduced. For option A, no password is sent. For option P, the password is obtained from RACF when SNA passwords are encrypted, or from SYSIBM.USERNAMES and encrypted with ICSF when possible for TCP/IP. For options D and E, if ICSF is not enabled or the server does not support encryption, the request fails with -904 or -30082.)
1. Specify an O in the USERNAMES column of table SYSIBM.IPNAMES or SYSIBM.LUNAMES.
2. Use the NEWAUTHID column of SYSIBM.USERNAMES to specify the ID to which the outbound ID is translated.

Example 1: Suppose that the remote system accepts from you only the IDs XXGALE, GROUP1, and HOMER.
1. Specify that outbound translation is in effect for the remote system LUXXX by specifying in SYSIBM.LUNAMES the values that are shown in Table 77.
Table 77. SYSIBM.LUNAMES to specify that outbound translation is in effect for the remote system LUXXX

  LUNAME   USERNAMES
  LUXXX    O
If your row for LUXXX already has I for the USERNAMES column (because you translate inbound IDs that come from LUXXX), change I to B for both inbound and outbound translation. 2. Translate the ID GALE to XXGALE on all outbound requests to LUXXX by specifying in SYSIBM.USERNAMES the values that are shown in Table 78.
Table 78. Values in SYSIBM.USERNAMES to translate GALE to XXGALE on outbound requests to LUXXX

  TYPE   AUTHID   LINKNAME   NEWAUTHID   PASSWORD
  O      GALE     LUXXX      XXGALE      GALEPASS
3. Translate EVAN and FRED to GROUP1 on all outbound requests to LUXXX by specifying in SYSIBM.USERNAMES the values that are shown in Table 79.
Table 79. Values in SYSIBM.USERNAMES to translate EVAN and FRED to GROUP1 on outbound requests to LUXXX

  TYPE   AUTHID   LINKNAME   NEWAUTHID   PASSWORD
  O      EVAN     LUXXX      GROUP1      GRP1PASS
  O      FRED     LUXXX      GROUP1      GRP1PASS
4. Do not translate the ID HOMER on outbound requests to LUXXX. (HOMER is assumed to be an ID on your DB2, and on LUXXX.) Specify in SYSIBM.USERNAMES the values that are shown in Table 80.
Table 80. Values in SYSIBM.USERNAMES to not translate HOMER on outbound requests to LUXXX

  TYPE   AUTHID   LINKNAME   NEWAUTHID   PASSWORD
  O      HOMER    LUXXX      (blank)     HOMERSPW
5. Reject any requests from BASIL to LUXXX before they are sent. To do that, leave SYSIBM.USERNAMES empty. If no row indicates what to do with the ID BASIL on an outbound request to LUXXX, the request is rejected. An SQL sketch that consolidates steps 2 through 4 of this example follows.
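The rows that are shown in Tables 78, 79, and 80 could be inserted with SQL such as the following sketch; no row is inserted for BASIL, so requests from BASIL to LUXXX are rejected:

INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'GALE', 'LUXXX', 'XXGALE', 'GALEPASS');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'EVAN', 'LUXXX', 'GROUP1', 'GRP1PASS');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'FRED', 'LUXXX', 'GROUP1', 'GRP1PASS');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'HOMER', 'LUXXX', ' ', 'HOMERSPW');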
Example 2: If you send requests to another LU, such as LUYYY, you generally need another set of rows to indicate how your IDs are to be translated on outbound requests to LUYYY. However, you can use a single row to specify a translation that is to be in effect on requests to all other LUs. For example, if HOMER is to be sent untranslated everywhere, and DOROTHY is to be translated to GROUP1 everywhere, specify in SYSIBM.USERNAMES the values that are shown in Table 81.
Table 81. Values in SYSIBM.USERNAMES to not translate HOMER and to translate DOROTHY to GROUP1

  TYPE   AUTHID    LINKNAME   NEWAUTHID   PASSWORD
  O      HOMER     (blank)    (blank)     HOMERSPW
  O      DOROTHY   (blank)    GROUP1      GRP1PASS
You can also use a single row to specify that all IDs that accompany requests to a single remote system must be translated. For example, if every one of your IDs is to be translated to THEIRS on requests to LUYYY, specify in SYSIBM.USERNAMES the values that are shown in Table 82.
Table 82. Values in SYSIBM.USERNAMES to translate every ID to THEIRS

  TYPE   AUTHID    LINKNAME   NEWAUTHID   PASSWORD
  O      (blank)   LUYYY      THEIRS      THEPASS
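In SQL terms, the catch-all rows of Tables 81 and 82 look like the following sketch, where single-space values stand for blanks:

INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'HOMER', ' ', ' ', 'HOMERSPW');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'DOROTHY', ' ', 'GROUP1', 'GRP1PASS');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', ' ', 'LUYYY', 'THEIRS', 'THEPASS');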
If the ICSF is installed and properly configured, you can use the DSNLEUSR stored procedure to encrypt the translated outbound IDs that are specified in the NEWAUTHID column of SYSIBM.USERNAMES. DB2 decrypts the translated outbound IDs during connection processing. For more information about the DSNLEUSR stored procedure, see Appendix J, DB2-supplied stored procedures, on page 1261.
Sending passwords
Recommendation: For the tightest security, do not send passwords through the network. Instead, use one of the following security mechanisms:
v RACF encrypted passwords, described in Sending RACF encrypted passwords on page 263
v RACF PassTickets, described in Sending RACF PassTickets on page 263
v Kerberos tickets, described in Establishing Kerberos authentication through RACF on page 278
v DRDA encrypted passwords or DRDA encrypted user IDs with encrypted passwords, described in Sending encrypted passwords from workstation clients on page 263
If you send passwords through the network, you can put the password for an ID in the PASSWORD column of SYSIBM.USERNAMES.
Recommendation: If the ICSF is installed and properly configured, use the DSNLEUSR stored procedure to encrypt passwords in the SYSIBM.USERNAMES table. DB2 decrypts the password during connection processing. For more information about the DSNLEUSR stored procedure, see Appendix J, DB2-supplied stored procedures, on page 1261.
DB2 UDB for z/OS allows the use of RACF encrypted passwords or RACF PassTickets. However, workstations, such as Windows workstations, do not support these security mechanisms. RACF encrypted passwords are not a secure mechanism because they can be replayed. Recommendation: Do not use RACF encrypted passwords unless you are connecting to a previous release of DB2 UDB for z/OS.
The partner DB2 must also specify password encryption in its SYSIBM.LUNAMES table. Both partners must register each ID and its password with RACF. Then, for every request to LUXXX, your DB2 calls RACF to supply an encrypted password to accompany the ID. With password encryption, you do not use the PASSWORD column of SYSIBM.USERNAMES, so the security of that table becomes less critical.
2. Define profiles for the remote systems by entering the name of each remote system as it appears in the LINKNAME column of table SYSIBM.LOCATIONS. For example, the following command defines a profile for a remote system, DB2A, in the RACF PTKTDATA class:
RDEFINE PTKTDATA DB2A SSIGNON(KEYMASKED(E001193519561977))
3. Refresh the RACF PTKTDATA definition with the new profile by issuing the following command:
SETROPTS RACLIST(PTKTDATA) REFRESH
See z/OS Security Server Security Administrator's Guide for more information about RACF PassTickets.
enable the DB2 UDB for z/OS AES server support, you must install and configure z/OS Integrated Cryptographic Services Facility (ICSF). During DB2 startup, DSNXINIT invokes the MVS LOAD macro service to load various ICSF services, including the ICSF CSNESYE and CSNESYD modules that DB2 calls for processing AES encryption and decryption requests. If ICSF is not installed or if ICSF services are not available, DB2 cannot provide AES support. If a client does not explicitly request AES, DB2 UDB for z/OS uses the default DES encryption algorithm for processing remote requests. To use DES encryption and decryption, you must install and configure z/OS ICSF.
You can enable DB2 Connect to send encrypted passwords by setting database connection services (DCS) authentication to DCS_ENCRYPT in the DCS directory entry. When a client application issues an SQL CONNECT, the client negotiates this support with the database server. If it is supported, a shared private key is generated by the client and server using Diffie-Hellman public key technology, and the password is encrypted using 56-bit DES with the shared private key. The encrypted password is non-replayable, and the shared private key is generated on every connection. If the server does not support password encryption, the application receives SQLCODE -30073 (DRDA security manager level 6 is not supported).
Figure 24. Sample RACF relationships. (The diagram is not reproduced. It shows the group DB2, described as the group of all DB2 IDs and owned by and connected to DB2OWNER, together with groups and aliases such as DSNCnn0, DSNnn0, DB2USER, DB2SYS, GROUP1, and GROUP2, and IDs such as SYSADM, SYSOPR, SYSDSP, USER2, USER3, and USER4.)
Figure 24 shows some of the relationships among the names that are shown in Table 84.
Table 84. RACF relationships

  RACF ID     Use
  SYS1        Major RACF group ID
  DB2         DB2 group
  DB2OWNER    Owner of the DB2 group
  DSNC810     Group to control databases and recovery logs
Table 84. RACF relationships (continued)

  RACF ID                   Use
  DSN810                    Group to control installation data sets
  DB2USER                   Group of all DB2 users
  SYSADM                    ID with DB2 installation SYSADM authority
  SYSOPR                    ID with DB2 installation SYSOPR authority
  DB2SYS, GROUP1, GROUP2    RACF group names
  SYSDSP                    RACF user ID for DB2 started tasks
  USER1, USER2, USER3       RACF user IDs
Note: These RACF group names and user IDs do not appear in the figure; they are listed in Table 85 on page 268.
To establish RACF protection for DB2, perform the steps that are described in the following sections:
v Defining DB2 resources to RACF includes steps that tell RACF what to protect.
v Permitting RACF access on page 267 includes steps that make the protected resources available to processes.
Some steps are required and some steps are optional, depending on your circumstances. All steps presume that RACF is already installed. The steps do not need to be taken strictly in the order in which they are shown here. For a more thorough description of RACF facilities, see z/OS Security Server Security Administrator's Guide. For information about using RACF for multilevel security with row-level granularity, see Multilevel security on page 192.
BATCH for all others, including TSO, CAF, batch, all utility jobs, the DB2-established stored procedures address space, and requests that come through the call attachment facility.
To control access, you need to define a profile, as a member of class DSNR, for every combination of subsystem and environment you want to use. For example, suppose that you want to access:
v Subsystem DSN from TSO and DDF
v Subsystem DB2P from TSO, DDF, IMS, and RRSAF
v Subsystem DB2T from TSO, DDF, CICS, and RRSAF
Then define the profiles with the following names:
DSN.BATCH     DSN.DIST
DB2P.BATCH    DB2P.DIST    DB2P.MASS    DB2P.RRSAF
DB2T.BATCH    DB2T.DIST    DB2T.SASS    DB2T.RRSAF
You can do that with a single RACF command, which also names an owner for the resources:
RDEFINE DSNR (DSN.BATCH DSN.DIST DB2P.BATCH DB2P.DIST DB2P.MASS DB2P.RRSAF DB2T.BATCH DB2T.DIST DB2T.SASS DB2T.RRSAF) OWNER(DB2OWNER)
In order to access a subsystem in a particular environment, a user must be on the access list of the corresponding profile. You add users to the access list with the RACF PERMIT command. If you do not want to limit access to particular users or groups, you can give universal access to a profile with a command like this:
RDEFINE DSNR (DSN.BATCH) OWNER(DB2OWNER) UACC(READ)
Only users with the SPECIAL attribute can issue the command. If you are using stored procedures in a WLM-established address space, you might also need to enable RACF checking for the SERVER class. See Step 2: Control access to WLM (optional) on page 275.
do not need to match those that are used for the DB2 address spaces, but they must be authorized to run the call attachment facility (for the DB2-established stored procedures address space) or the Resource Recovery Services attachment facility (for WLM-established stored procedures address spaces).
Note: WLM-established stored procedures started-task IDs require an OMVS segment.
Changing the RACF started-procedures table: To change the RACF started-procedures table (ICHRIN03), modify it, reassemble it, and link-edit the resulting object code to z/OS. Figure 25 on page 269 shows the sample entries for three DB2 subsystems and optional entries for CICS and IMS. (Refer to z/OS Security Server RACF System Programmer's Guide for a description of how to code a RACF started-procedures table.) The example provides the DB2 started tasks for each of three DB2 subsystems, named DSN, DB2T, and DB2P, and for CICS and an IMS control region. The IDs and group names that are associated with the address spaces are shown in Table 85.
Table 85. DB2 address space IDs and associated RACF user IDs and group names

  Address Space    RACF User ID    RACF Group Name
  DSNMSTR          SYSDSP          DB2SYS
  DSNDBM1          SYSDSP          DB2SYS
  DSNDIST          SYSDSP          DB2SYS
  DSNSPAS          SYSDSP          DB2SYS
  DSNWLM           SYSDSP          DB2SYS
  DB2TMSTR         SYSDSPT         DB2TEST
  DB2TDBM1         SYSDSPT         DB2TEST
  DB2TDIST         SYSDSPT         DB2TEST
  DB2TSPAS         SYSDSPT         DB2TEST
  DB2PMSTR         SYSDSPD         DB2PROD
  DB2PDBM1         SYSDSPD         DB2PROD
  DB2PDIST         SYSDSPD         DB2PROD
  DB2PSPAS         SYSDSPD         DB2PROD
  CICSSYS          CICS            CICSGRP
  IMSCNTL          IMS             IMSGRP
Figure 25 on page 269 shows a sample job that reassembles and link edits the RACF started-procedures table (ICHRIN03):
*   REASSEMBLE AND LINKEDIT THE RACF STARTED-PROCEDURES TABLE
*   ICHRIN03 TO INCLUDE USERIDS AND GROUP NAMES FOR EACH DB2
*   CATALOGED PROCEDURE. OPTIONALLY, ENTRIES FOR AN IMS OR CICS
*   SYSTEM MIGHT BE INCLUDED. AN IPL WITH A CLPA (OR AN MLPA
*   SPECIFYING THE LOAD MODULE) IS REQUIRED FOR THESE CHANGES TO
*   TAKE EFFECT.
ENTCOUNT DC    AL2(((ENDTABLE-BEGTABLE)/ENTLNGTH)+32768)
*              NUMBER OF ENTRIES AND INDICATE RACF FORMAT
*
* PROVIDE FOUR ENTRIES FOR EACH DB2 SUBSYSTEM NAME.
*
BEGTABLE DS    0H
*              ENTRIES FOR SUBSYSTEM NAME "DSN"
         DC    CL8'DSNMSTR'   SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSP'    USERID
         DC    CL8'DB2SYS'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
ENTLNGTH EQU   *-BEGTABLE     CALCULATE LENGTH OF EACH ENTRY
         DC    CL8'DSNDBM1'   DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSP'    USERID
         DC    CL8'DB2SYS'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DSNDIST'   DDF PROCEDURE
         DC    CL8'SYSDSP'    USERID
         DC    CL8'DB2SYS'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DSNSPAS'   STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSP'    USERID
         DC    CL8'DB2SYS'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DSNWLM'    WLM-ESTABLISHED S.P. ADDRESS SPACE
         DC    CL8'SYSDSP'    USERID
         DC    CL8'DB2SYS'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
*              ENTRIES FOR SUBSYSTEM NAME "DB2T"
         DC    CL8'DB2TMSTR'  SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSPT'   USERID
         DC    CL8'DB2TEST'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2TDBM1'  DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSPT'   USERID
         DC    CL8'DB2TEST'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2TDIST'  DDF PROCEDURE
         DC    CL8'SYSDSPT'   USERID
         DC    CL8'DB2TEST'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2TSPAS'  STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSPT'   USERID
         DC    CL8'DB2TEST'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES

Figure 25. Sample job to reassemble the RACF started-procedures table (Part 1 of 2)
*              ENTRIES FOR SUBSYSTEM NAME "DB2P"
         DC    CL8'DB2PMSTR'  SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSPD'   USERID
         DC    CL8'DB2PROD'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2PDBM1'  DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSPD'   USERID
         DC    CL8'DB2PROD'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2PDIST'  DDF PROCEDURE
         DC    CL8'SYSDSPD'   USERID
         DC    CL8'DB2PROD'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'DB2PSPAS'  STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSPD'   USERID
         DC    CL8'DB2PROD'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
*              OPTIONAL ENTRIES FOR CICS AND IMS CONTROL REGION
         DC    CL8'CICSSYS'   CICS PROCEDURE NAME
         DC    CL8'CICS'      USERID
         DC    CL8'CICSGRP'   GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
         DC    CL8'IMSCNTL'   IMS CONTROL REGION PROCEDURE
         DC    CL8'IMS'       USERID
         DC    CL8'IMSGRP'    GROUP NAME
         DC    X'00'          NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'        RESERVED BYTES
ENDTABLE DS    0D
         END

Figure 25. Sample job to reassemble the RACF started-procedures table (Part 2 of 2)
That gives class authorization to DB2OWNER for DSNR and USER. DB2OWNER can add users to RACF and issue the RDEFINE command to define resources in class DSNR. DB2OWNER has control over and responsibility for the entire DB2 security plan in RACF. The RACF group SYS1 already exists. To add group DB2 and make DB2OWNER its owner, issue the following RACF command:
ADDGROUP DB2 SUPGROUP(SYS1) OWNER(DB2OWNER)
To connect DB2OWNER to group DB2 with the authority to create new subgroups, add users, and manipulate profiles, issue the following RACF command:
CONNECT DB2OWNER GROUP(DB2) AUTHORITY(JOIN) UACC(NONE)
To make DB2 the default group for commands issued by DB2OWNER, issue the following RACF command:
ALTUSER DB2OWNER DFLTGRP(DB2)
To create the group DB2USER and add five users to it, issue the following RACF commands:
ADDGROUP DB2USER SUPGROUP(DB2)
ADDUSER (USER1 USER2 USER3 USER4 USER5) DFLTGRP(DB2USER)
To define a user to RACF, use the RACF ADDUSER command. That invalidates the current password. You can then log on as a TSO user to change the password.
DB2 considerations when using RACF groups:
v When a user is newly connected to, or disconnected from, a RACF group, the change is not effective until the next logon. Therefore, before using a new group name as a secondary authorization ID, a TSO user must log off and log on, or a CICS or IMS user must sign on again.
v A user with the SPECIAL, JOIN, or GROUP-SPECIAL RACF attribute can define new groups with any name that RACF accepts and can connect any user to them. Because the group name can become a secondary authorization ID, you should control the use of those RACF attributes.
v Existing RACF group names can duplicate existing DB2 authorization IDs. That duplication is unlikely for the following reasons:
  - A group name cannot be the same as a user name.
  - Authorization IDs that are known to DB2 are usually known to RACF.
  However, you can create a table with an owner name that is the same as a RACF group name and use the IBM-supplied sample connection exit routine. Then any TSO user with the group name as a secondary ID has ownership privileges on the table. You can prevent that situation by designing the connection exit routine to stop unwanted group names from being passed to DB2.
Defining profiles for IMS and CICS: You want the IDs for attaching systems to use the appropriate access profile. For example, to let the IMS user ID use the access profile for IMS on system DB2P, issue the following RACF command:
PERMIT DB2P.MASS CLASS(DSNR) ID(IMS) ACCESS(READ)
To let the CICS group ID use the access profile for CICS on system DB2T, issue the following RACF command:
PERMIT DB2T.SASS CLASS(DSNR) ID(CICSGRP) ACCESS(READ)
Providing installation authorities to default IDs: When DB2 is installed, IDs are named to have special authorities: one or two IDs for SYSADM and one or two IDs for SYSOPR. Those IDs can be connected to the group DB2USER; if they are not, you need to give them access. The next command permits the default IDs for the SYSADM and SYSOPR authorities to use subsystem DSN through TSO:
PERMIT DSN.BATCH CLASS(DSNR) ID(SYSADM,SYSOPR) ACCESS(READ)
IDs also can be group names. Using secondary IDs: You can use secondary authorization IDs to define a RACF group. After you define the RACF group, you can assign privileges to it that are shared by multiple primary IDs. For example, suppose that DB2OWNER wants to create a group GROUP1 and to give the ID USER1 administrative authority over the group. USER1 should be able to connect other existing users to the group. To create the group, DB2OWNER issues this RACF command:
ADDGROUP GROUP1 OWNER(USER1) DATA('GROUP FOR DEPT. G1')
To let the group connect to the DSN system through TSO, DB2OWNER issues this RACF command:
PERMIT DSN.BATCH CLASS(DSNR) ID(GROUP1) ACCESS(READ)
USER1 can now connect other existing IDs to the group GROUP1 by using the RACF CONNECT command:
CONNECT (USER2 EPSILON1 EPSILON2) GROUP(GROUP1)
If you add or update secondary IDs for CICS transactions, you must start and stop the CICS attachment facility to ensure that all threads sign on and get the correct security information.
Allowing users to create data sets: Chapter 13, Auditing, on page 285 recommends using RACF to protect the data sets that store DB2 data. If you use that method, when you create a new group of DB2 users, you might want to connect it to a group that can create data sets. To allow USER1 to create and control data sets, DB2OWNER creates a generic profile and permits complete control to USER1 and to the four administrators. The SYSDSP parameter also gives control to DB2. See Creating generic profiles for data sets on page 281.
ADDSD 'DSNC810.DSNDBC.ST*' UACC(NONE)
Allowing access from remote requesters: The recommended way of controlling access from remote requesters is to use the DSNR RACF class with a PERMIT command to access the distributed data address space (such as DSN.DIST). The following RACF commands let the users in the group DB2USER access DDF on the DSN subsystem. These DDF requests can originate from any partner in the network. Example: To permit READ access on profile DSN.DIST in the DSNR class to DB2USER, issue the following RACF command:
PERMIT DSN.DIST CLASS(DSNR) ID(DB2USER) ACCESS(READ)
If you want to ensure that a specific user can access only when the request originates from a specific LU name, you can use WHEN(APPCPORT) on the PERMIT command. Example: To permit access to DB2 distributed processing on subsystem DSN when the request comes from USER5 at LUNAME equal to NEWYORK, issue the following RACF command:
PERMIT DSN.DIST CLASS(DSNR) ID(USER5) ACCESS(READ) + WHEN(APPCPORT(NEWYORK))
For connections that come through TCP/IP, use the RACF APPCPORT class, the RACF SERVAUTH class, or both classes, with TCP/IP Network Access Control to protect against unauthorized access to DB2.
Example: To use the RACF APPCPORT class, perform the following steps:
1. Activate the APPCPORT class by issuing the following RACF command:
SETROPTS CLASSACT(APPCPORT) REFRESH
2. Define the general resource profile and name it TCPIP. Specify NONE for universal access and APPCPORT for class. Issue the following RACF command:
RDEFINE APPCPORT (TCPIP) UACC(NONE)
3. Permit READ access on profile TCPIP in the APPCPORT class. To permit READ access to USER5, issue the following RACF command:
PERMIT TCPIP ACCESS(READ) CLASS(APPCPORT) ID(USER5)
4. Permit READ access on profile DSN.DIST in the DSNR class. To permit READ access to USER5, issue the following RACF command:
PERMIT DSN.DIST CLASS(DSNR) ID(USER5) ACCESS(READ) + WHEN(APPCPORT(TCPIP))
If the RACF APPCPORT class is active on your system, and a resource profile for the requesting LU name already exists, you must permit READ access to the APPCPORT resource profile for the user IDs that DB2 uses. You must permit READ access even when you are using the DSNR resource class. Similarly, if you are using the RACF APPL class and that class restricts access to the local DB2 LU name or generic LU name, you must permit READ access to the APPL resource for the user IDs that DB2 uses.
Requirement: To use the RACF SERVAUTH class and TCP/IP Network Access Control, you must have z/OS V1.5 (or later) installed.
Example: To use the RACF SERVAUTH class and TCP/IP Network Access Control, perform the following steps:
1. Set up and configure TCP/IP Network Access Control by using the NETACCESS statement that is in your TCP/IP profile.
For example, suppose that you need to allow z/OS system access only to IP addresses from 9.0.0.0 to 9.255.255.255. You want to define these IP addresses as a security zone, and you want to name the security zone IBM. Suppose also that you need to deny access to all IP addresses outside of the IBM security zone, and that you want to define these IP addresses as a separate security zone. You want to name this second security zone WORLD. To establish these security zones, use the following NETACCESS clause:
NETACCESS INBOUND OUTBOUND
; NETWORK/MASK   SAF
  9.0.0.0/8      IBM
  DEFAULT        WORLD
ENDNETACCESS
Now, suppose that USER5 has an IP address of 9.1.2.3. TCP/IP Network Access Control would determine that USER5 has an IP address that belongs to the IBM security zone. USER5 would be granted access to the system. Alternatively, suppose that USER6 has an IP address of 1.1.1.1. TCP/IP Network Access Control would determine that USER6 has an IP address that belongs to the WORLD security zone. USER6 would not be granted access to the system.
2. Activate the SERVAUTH class by issuing the following TSO command:
SETROPTS CLASSACT(SERVAUTH)
3. Activate RACLIST processing for the SERVAUTH class by issuing the following TSO command:
SETROPTS RACLIST(SERVAUTH)
4. Define the IBM and WORLD general resource profiles in RACF to protect the IBM and WORLD security zones by issuing the following commands:
RDEFINE SERVAUTH (EZB.NETACCESS.ZOSV1R5.TCPIP.IBM) UACC(NONE) RDEFINE SERVAUTH (EZB.NETACCESS.ZOSV1R5.TCPIP.WORLD) UACC(NONE)
5. Permit USER5 and SYSDSP read access to the IBM profile by using the following commands.
PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.IBM ACCESS(READ) CLASS(SERVAUTH) ID(USER5)
PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.IBM ACCESS(READ) CLASS(SERVAUTH) ID(SYSDSP)
6. Permit SYSDSP read access to the WORLD profile by using the following command:
PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.WORLD ACCESS(READ) CLASS(SERVAUTH) ID(SYSDSP)
7. For these permissions to take effect, refresh the RACF database by using the following command:
SETROPTS RACLIST(SERVAUTH) REFRESH
For more information about the NETACCESS statement, see z/OS V1.5 Communications Server: IP Configuration Reference.
The user ID that is associated with the WLM-established stored procedures address space must be authorized to run the Resource Recovery Services attachment facility (RRSAF). The user ID is also associated with the ssnm.RRSAF profile that you create. Control access to the DB2 subsystem through RRSAF by performing the following steps: 1. If you have not already established a profile for controlling access from the RRS attachment facility as described in Define the names of protected access profiles on page 266, define ssnm.RRSAF in the DSNR resource class with a universal access authority of NONE, as shown in the following command:
RDEFINE DSNR (DB2P.RRSAF DB2T.RRSAF) UACC(NONE)
3. Add user IDs that are associated with the stored procedures address spaces to the RACF Started Procedures Table, as shown in this example:
. . .
DC   ...   WLM-ESTABLISHED S.P. ADDRESS SPACE
DC   ...   USERID
DC   ...   GROUP NAME
DC   ...   NO PRIVILEGED ATTRIBUTE
DC   ...   RESERVED BYTES
. . .
4. Give read access to ssnm.RRSAF to the user ID that is associated with the stored procedures address space:
PERMIT DB2P.RRSAF CLASS(DSNR) ID(SYSDSP) ACCESS(READ)
applenv is the name of the application environment that is associated with the stored procedure. See Assigning procedures and functions to WLM application environments on page 1024 for more information about application environments. Assume that you want to define the following profile names:
v DB2.DB2T.TESTPROC
v DB2.DB2P.PAYROLL
v DB2.DB2P.QUERY
Use the following RACF command:
RDEFINE SERVER (DB2.DB2T.TESTPROC DB2.DB2P.PAYROLL DB2.DB2P.QUERY)
4. Permit read access to the server resource name to the user IDs that are associated with the stored procedures address space as follows.
PERMIT DB2.DB2T.TESTPROC CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
PERMIT DB2.DB2P.PAYROLL  CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
PERMIT DB2.DB2P.QUERY    CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
Control of stored procedures in a WLM environment: Programs can be grouped together and isolated in different WLM environments based on application security requirements. For example, payroll applications might be isolated in one WLM environment because they contain sensitive data, such as employee salaries.
To prevent users from creating a stored procedure in a sensitive WLM environment, DB2 invokes RACF to determine if the user is allowed to create stored procedures in the specified WLM environment. The WLM ENVIRONMENT keyword on the CREATE PROCEDURE statement identifies the WLM environment to use for running a given stored procedure. Attempts to create a stored procedure fail if the user is not properly authorized. DB2 performs a resource authorization check using the DSNR RACF class as follows:
v In a DB2 data sharing environment, DB2 uses the following RACF resource name:
db2_groupname.WLMENV.wlm_environment
v In a non-data sharing environment, DB2 checks the following RACF resource name:
db2_subsystem_id.WLMENV.wlm_environment
You can use the RACF RDEFINE command to create RACF profiles that prevent users from creating stored procedures and user-defined functions in sensitive WLM environments. For example, you can prevent all users on DB2 subsystem DB2A (non-data sharing) from creating a stored procedure or user-defined function in the WLM environment named PAYROLL; to do this, use the following command:
RDEFINE DSNR (DB2A.WLMENV.PAYROLL) UACC(NONE)
The RACF PERMIT command authorizes a user to create a stored procedure or user-defined function in a WLM environment. For example, you can authorize a DB2 user (DB2USER1) to create stored procedures on DB2 subsystem DB2A (non-data sharing) in the WLM environment named PAYROLL:
PERMIT DB2A.WLMENV.PAYROLL CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
Control of stored procedures in a DB2-established stored procedures address space: DB2 invokes RACF to determine if a user is allowed to create a stored procedure in a DB2-established stored procedures address space. The NO WLM ENVIRONMENT keyword on the CREATE PROCEDURE statement indicates that a given stored procedure will run in a DB2-established stored procedures address space. Attempts to create a procedure fail if the user is not authorized, or if no DB2-established stored procedures address space exists. The RACF PERMIT command authorizes a user to create a stored procedure in a DB2-established stored procedures address space. For example, you can authorize a DB2 user (DB2USER1) to create stored procedures on DB2 subsystem DB2A in the stored procedures address space named DB2ASPAS:
PERMIT DB2A.WLMENV.DB2ASPAS CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
For WLM-established stored procedures address spaces, enable the RACF check for the caller's ID when accessing non-DB2 resources by performing the following steps:
1. Use the ALTER PROCEDURE statement with the SECURITY USER clause.
2. Ensure that the ID of the stored procedure's caller has RACF authority to the resources.
3. For the best performance, cache the RACF profiles in the virtual look-aside facility (VLF) of z/OS by specifying the following keywords in the COFVLFxx member of library SYS1.PARMLIB:
CLASS NAME(IRRACEE) EMAJ(ACEE)
To give root authority to the DDF address space, you must specify a UID of 0.
2. Define local principals to RACF. Change RACF passwords before the principals become active Kerberos users. Define a Kerberos principal with the following commands:
AU RONTOMS KERB(KERBNAME(rontoms))
ALU RONTOMS PASSWORD(new1pw) NOEXPIRE
3. Map foreign Kerberos principals by defining KERBLINK profiles to RACF with a command similar to the following command:
RDEFINE KERBLINK /.../KERB390.ENDICOTT.IBM.COM/RWH APPLDATA('RONTOMS')
You must also define a principal name for the user ID used in the ssnmDIST started task address space. This step is required because the ssnmDIST address space must have the RACF authority to use its SAF ticket parsing service.
ALU SYSDSP PASSWORD(pw) NOEXPIRE KERB(KERBNAME(SYSDSP))
In this example, the user ID that is used for the ssnmDIST started task address space is SYSDSP. See Define RACF user IDs for DB2 started tasks on page 267 for more information, including how to determine the user ID for the ssnmDIST started task. 4. Define foreign Kerberos authentication servers to the local Kerberos authentication server by using REALM profiles. You must supply a password for the key to be generated. REALM profiles define the trust relationship between the local realm and the foreign Kerberos authentication servers. PASSWORD is a required keyword, so all REALM profiles have a KERB segment. The command is similar to the following command:
RDEFINE REALM /.../KERB390.ENDICOTT.IBM.COM/KRBTGT/KER2000.ENDICOTT.IBM.COM +
  KERB(PASSWORD(realm0pw))
The z/OS SecureWay Kerberos Security Server rejects ticket requests from users with revoked or expired passwords; therefore, plan password resets that use a method that avoids a password change at a subsequent logon. For example, use the TSO logon panel, the PASSWORD command without a specified ID operand, or the ALTUSER command with NOEXPIRE specified.
Data sharing environment: Data sharing Sysplex environments that use Kerberos security must have a Kerberos Security Server instance running on each system in the Sysplex. The instances must either be in the same realm and share the same RACF database, or have different RACF databases and be in different realms.
v For table spaces and index spaces, issue the following commands:
ADDSD  'DSNC810.DSNDBC.*' UACC(NONE)
PERMIT 'DSNC810.DSNDBC.*' ID(SYSDSP) ACCESS(ALTER)
Started tasks do not need control.
v For other general data sets, issue the following commands:
ADDSD  'DSNC810.*' UACC(NONE)
PERMIT 'DSNC810.*' ID(SYSDSP) ACCESS(ALTER)
Although not all of those commands are absolutely necessary, the sample shows how you can create generic profiles for different types of data sets. Some parameters, such as universal access, could vary among the types. In the example, installation data sets (DSN810.*) are universally available for read access.
If you use generic profiles, specify NO on installation panel DSNTIPP for ARCHIVE LOG RACF, or you might get a z/OS error when DB2 tries to create the archive log data set. If you specify YES, DB2 asks RACF to create a separate profile for each archive log that is created, which means that you cannot use generic profiles for these data sets.
To protect VSAM data sets, use the cluster name. You do not need to protect the data component names, because the cluster name is used for RACF checking.
Access by stand-alone DB2 utilities: The following DB2 utilities access objects that are outside of DB2 control:
v DSN1COPY and DSN1PRNT: table space and index space data sets
v DSN1LOGP: active logs, archive logs, and bootstrap data sets
v DSN1CHKR: DB2 directory and catalog table spaces
v Change Log Inventory (DSNJU003) and Print Log Map (DSNJU004): bootstrap data sets
The Change Log Inventory and Print Log Map utilities run as batch jobs that are protected by the USER and PASSWORD options on the JOB statement. The following commands authorize a user ID (SVCAID, for example) that is supplied on the USER option:
v For DSN1COPY:
PERMIT 'DSNC810.*' ID(SVCAID) ACCESS(CONTROL)
v For DSN1PRNT:
PERMIT 'DSNC810.*' ID(SVCAID) ACCESS(READ)
v For DSN1LOGP:
PERMIT 'DSNC810.LOGCOPY*' ID(SVCAID) ACCESS(READ)
PERMIT 'DSNC810.ARCHLOG*' ID(SVCAID) ACCESS(READ)
PERMIT 'DSNC810.BSDS*'    ID(SVCAID) ACCESS(READ)
v For DSN1CHKR:
PERMIT 'DSNC810.DSNDBDC.*' ID(SVCAID) ACCESS(READ)
The level of access depends on the intended use, not on the type of data set (VSAM KSDS, VSAM linear, or sequential). For update operations, ACCESS(CONTROL) is required; for read-only operations, ACCESS(READ) is sufficient. You can use RACF to permit programs, rather than user IDs, to access objects. When you use RACF in this manner, IDs that are not authorized to access the log data sets might be able to do so by running the DSN1LOGP utility. Permit access to database data sets through DSN1PRNT or DSN1COPY.
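The following commands are a hedged sketch of such a program control setup for DSN1LOGP; the load library name prefix.SDSNLOAD is an assumption for illustration, not a name taken from this guide:
RDEFINE PROGRAM DSN1LOGP ADDMEM('prefix.SDSNLOAD'//NOPADCHK) UACC(READ)
PERMIT 'DSNC810.LOGCOPY*' ID(SVCAID) ACCESS(READ) WHEN(PROGRAM(DSN1LOGP))
SETROPTS WHEN(PROGRAM) REFRESH
With profiles like these, the ID can read the log data sets only while running DSN1LOGP, not with arbitrary programs.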
The next two commands connect those IDs to the groups that control data sets, with the authority to create new RACF database profiles. The ID that has installation SYSOPR authority (SYSOPR) does not need that authority for the installation data sets.
CONNECT (SYSADM SYSOPR) GROUP(DSNC810) AUTHORITY(CREATE) UACC(NONE)
CONNECT (SYSADM)        GROUP(DSN810)  AUTHORITY(CREATE) UACC(NONE)
The following set of commands gives the IDs complete control over DSNC810 data sets. The system administrator IDs also have complete control over the installation libraries. Additionally, you can give the system programmer IDs the same control.
PERMIT 'DSNC810.LOGCOPY*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC810.ARCHLOG*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC810.BSDS*'    ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC810.DSNDBC.*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC810.*'        ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSN810.*'         ID(SYSADM)        ACCESS(ALTER)
Those IDs can now explicitly create data sets whose names have DSNC810 as the high-level qualifier. Any such data sets that are created by DB2 or by these RACF user IDs are protected by RACF. Other RACF user IDs are prevented by RACF from creating such data sets. If no option is supplied for PASSWORD on the ADDUSER command that adds those IDs, the first password for the new IDs is the name of the default group, DB2USER. The first time that the IDs sign on, they all use that password, but they must change the password during their first session.
v Destinations for audit records
You can choose whether to audit the activity on a table by specifying an option of the CREATE and ALTER statements.
If you did not save the START command, you can determine the trace number and stop the trace by its number. Use DISPLAY TRACE to find the number. Example: DISPLAY TRACE (AUDIT) might return a message like the following output:
TNO  TYPE   CLASS  DEST  QUAL
01   AUDIT  01     SMF   NO
02   AUDIT  04,06  GTF   YES
The message indicates that two audit traces are active. Trace 1 traces events in class 1 and sends records to the SMF data set. Trace 1 can be a trace that starts automatically whenever DB2 starts. Trace 2 traces events in classes 4 and 6 and sends records to GTF. You can stop either trace by using its identifying number (TNO). Example: To stop trace 1, use the following command:
-STOP TRACE AUDIT TNO(1)
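Conversely, a trace like trace 2 in the preceding display could be started with a command along the following lines; the options shown here are illustrative, so check DB2 Command Reference for the full syntax:
-START TRACE (AUDIT) CLASS(4,6) DEST(GTF)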
v The auditing that is described in this chapter takes place only when the audit trace is on.
v The trace does not record old data after it is changed because the log records old data.
v If an agent or transaction accesses a table more than once in a single unit of recovery, the audit trace records only the first access.
v The audit trace does not record accesses if you do not start the audit trace for the appropriate class of events.
v Except for class 8, the audit trace does not audit certain utilities. The trace audits the first access of a table with the LOAD utility, but it does not audit access by the COPY, RECOVER, and REPAIR utilities. The audit trace does not audit access by stand-alone utilities, such as DSN1CHKR and DSN1PRNT.
v The trace audits only the tables that you specifically choose to audit.
v You cannot audit access to auxiliary tables.
v You cannot audit the catalog tables because you cannot create or alter catalog tables.
This auditing coverage is consistent with the goal of providing a moderate volume of audit data with a low impact on performance. However, when you choose classes of events to audit, consider that you might ask for more data than you are willing to process.
Because this statement includes the AUDIT CHANGES option, DB2 audits the table for each access that inserts, updates, or deletes data (trace class 4).
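The statement that this paragraph refers to falls outside this excerpt. As an illustration only, using the sample table that the following examples also use, a statement with this effect might be:
ALTER TABLE DSN8810.DEPT AUDIT CHANGES;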
Example: To also audit the table for read accesses (class 5), issue the following statement:
ALTER TABLE DSN8810.DEPT AUDIT ALL;
The statement is effective regardless of whether the table was previously chosen for auditing. Example: To prevent all auditing of the table, issue the following statement:
ALTER TABLE DSN8810.DEPT AUDIT NONE;
For the CREATE TABLE statement, the default audit option is NONE. For the ALTER TABLE statement, no default option exists. If you do not use the AUDIT clause in an ALTER TABLE statement, the audit option for the table is unchanged.
When CREATE TABLE statements or ALTER TABLE statements affect the audit of a table, you can audit those statements. However, the results of those audits are in audit class 3, not in class 4 or class 5. Use audit class 3 to determine whether auditing was turned off for a table for an interval of time.
If an ALTER TABLE statement turns auditing on or off for a specific table, any plans and packages that use the table are invalidated and must be rebound. If you change the auditing status, the change does not affect plans, packages, or dynamic SQL statements that are currently running. The change is effective only for plans, packages, or dynamic SQL statements that begin running after the ALTER TABLE statement has completed.
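To check which tables currently have auditing active, you can query the catalog. The following sketch assumes the AUDITING column of SYSIBM.SYSTABLES, which contains A for AUDIT ALL, C for AUDIT CHANGES, and blank for no auditing:
SELECT CREATOR, NAME, AUDITING
  FROM SYSIBM.SYSTABLES
  WHERE AUDITING <> ' ';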
When a primary ID identifies a unique user, individual accountability is possible. However, if several users share the same primary ID, you cannot tell which user issues a particular GRANT statement or runs a particular application plan.
In those circumstances, SMF records the number of records that are lost. z/OS provides an option to stop the system rather than to lose SMF data.
Data definition control adds constraints to existing authorization checks. With it, you control how specific plans or collections of packages can use data definition statements. Read Chapter 10, Controlling access through a closed application, on page 215 for a description of this function. To determine whether data definition control is active, look at option 1 on the DSNTIPZ installation panel.
As an auditor, you might check that the table definition expresses required constraints on column values as table check constraints. For a full description of the rules for those constraints, see the CREATE TABLE information in Chapter 5 of DB2 SQL Reference. An alternative technique is to create a view with the check option, and then insert or update values only through that view. Example: Suppose that, in table T, data in column C1 must be a number between 10 and 20. Suppose also that data in column C2 is an alphanumeric code that must begin with A or B. Create view V1 with the following statement:
CREATE VIEW V1 AS
  SELECT * FROM T
  WHERE C1 BETWEEN 10 AND 20
    AND (C2 LIKE 'A%' OR C2 LIKE 'B%')
  WITH CHECK OPTION;
Because of the CHECK OPTION, view V1 allows only data that satisfies the WHERE clause. You cannot use the LOAD utility with a view, but that restriction does not apply to user-written exit routines. Several types of user-written routines are pertinent here:
Validation routines
  You can use validation routines to validate data values. Validation routines access an entire row of data, check the current plan name, and return a nonzero code to DB2 to indicate an invalid row.
Edit routines
  Edit routines have the same access as validation routines, and can also change the row that is to be inserted. Auditors typically use edit routines to encrypt data and to substitute codes for lengthy fields. However, edit routines can also validate data and return nonzero codes.
Field procedures
  Field procedures access data that is intended for a single column; they apply only to short-string columns. However, they accept input parameters, so generalized procedures are possible. A column that is defined with a field procedure can be compared only to another column that uses the same procedure.
See Appendix B, Writing exit routines, on page 1055 for information about using exit routines.
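As a hedged illustration of the CHECK OPTION behavior described above, and assuming that any other columns of T allow nulls or have defaults, the first INSERT below succeeds and the second fails with an SQL error (SQLCODE -161):
INSERT INTO V1 (C1, C2) VALUES (15, 'A100');  -- satisfies the view's WHERE clause
INSERT INTO V1 (C1, C2) VALUES (25, 'C100');  -- rejected by the CHECK OPTION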
Example: For static SQL statements, use the following query to determine which packages use UR isolation:
SELECT DISTINCT Y.COLLID, Y.NAME, Y.VERSION
  FROM SYSIBM.SYSPACKAGE X, SYSIBM.SYSPACKSTMT Y
  WHERE (X.LOCATION = Y.LOCATION
    AND X.LOCATION = ' '
    AND X.COLLID = Y.COLLID
    AND X.NAME = Y.NAME
    AND X.VERSION = Y.VERSION
    AND X.ISOLATION = 'U')
    OR Y.ISOLATION = 'U'
  ORDER BY Y.COLLID, Y.NAME, Y.VERSION;
For dynamic SQL statements, turn on performance trace class 3 to determine which plans and packages use UR isolation. For more information about locks and concurrency, see Chapter 31, Improving concurrency, on page 813.
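For example, a command along the following lines starts the performance trace that is mentioned above; it is shown as a sketch, and DB2 Command Reference has the complete option list:
-START TRACE (PERFM) CLASS(3)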
Consistency between systems: When an application program writes data to both DB2 and IMS, or to both DB2 and CICS, the subsystems prevent concurrent use of data until the program declares a point of consistency. For a detailed description of how data remains consistent between systems, see Multiple system consistency on page 459.
If the control has not been bypassed, DB2 returns no rows and thereby confirms that the contents of the view are valid. You can also use SQL statements to get information from the DB2 catalog about referential constraints that exist. For several examples, see DB2 SQL Reference.
See DB2 Utility Guide and Reference for more information about the CHECK utility.
You can incorporate these integrity reports into application programs, or you can use them separately as part of an interface. The integrity report records the incident in a history file and writes a message to the operator's console, a database administrator's TSO terminal, or a dedicated printer for certain codes. The recorded information includes:
v Date
v Time
v Authorization ID
v Terminal ID or job name
v Application
v Affected view or affected table
v Error code
v Error description
Review reports frequently by time and by authorization ID.
For utilities: When a DB2 utility reorganizes or reconstructs data in the database, it produces statistics to verify record counts and to report errors. The LOAD and REORG utilities produce data record counts and index counts to verify that no records were lost. In addition, keep a history log of any DB2 utility that updates data, particularly REPAIR. Regularly produce and review these reports, which you can obtain through SMF customized reporting or a user-developed program.
Example: To create a view of employee data for every employee that reports to a manager, the Spiffy security planners perform the following steps: 1. Add a column that contains manager IDs to DSN8810.DEPT, as shown in the following statement:
ALTER TABLE DSN8810.DEPT ADD MGRID CHAR(8) FOR SBCS DATA NOT NULL WITH DEFAULT;
2. Create a view that selects employee information about employees that work for a given manager, as shown in the following statement:
CREATE VIEW DEPTMGR AS
  SELECT * FROM DSN8810.EMP, DSN8810.DEPT
  WHERE WORKDEPT = DEPTNO
    AND MGRID = USER;
3. Ensure that every manager has the SELECT privilege on the view. See Granting the SELECT privilege to managers for information about ensuring that every manager has the appropriate SELECT privilege.
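For example, an illustrative statement that is not part of the Spiffy plan as written, the owner of the view could issue a GRANT such as the following for the manager ID MGRD11:
GRANT SELECT ON DEPTMGR TO MGRD11;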
The value of V for SECURITY_IN indicates that incoming remote connections must include verification. The value of N for ENCRYPTPSWDS indicates that passwords are not in internal RACF encrypted format. The security plan treats all remote locations alike, so it does not require encrypted passwords. The option to require encrypted passwords is available only between two DB2 subsystems that use SNA connections.
v For TCP/IP connections, the Spiffy security planners must set the TCP/IP ALREADY VERIFIED field of installation panel DSNTIP5 to NO. This setting ensures that the incoming requests that use TCP/IP are not accepted without authentication.
v The Spiffy security planners must grant all privileges and authorities that are required by the manager of Department D11 to the ID, MGRD11. The security planners must grant similar privileges to IDs that correspond to the remaining managers.
The value of O for USERNAMES indicates that translation checking is performed on outbound IDs, but not on inbound IDs. The value of R for SECURITY_OUT indicates that outbound connection requests contain a user ID and a RACF PassTicket.
v For TCP/IP connections, the Spiffy security planners must include an entry in table SYSIBM.IPNAMES for the LU name that is used by the central location. The content of the LINKNAME column (the LU name) is used to generate RACF PassTickets. The entry must specify outbound ID translation for requests to that location. Example: Table 89 shows an entry in SYSIBM.IPNAMES for LUCENTRAL.
Table 89. The SYSIBM.IPNAMES table at the remote location

LINKNAME   USERNAMES  SECURITY_OUT  IPADDR
LUCENTRAL  O          R             central.vnet.ibm.com
v The Spiffy security planners must include entries in table SYSIBM.USERNAMES to translate outbound IDs. Example: Table 90 shows two entries in SYSIBM.USERNAMES.
Table 90. The SYSIBM.USERNAMES table at the remote location

TYPE  AUTHID   LINKNAME   NEWAUTHID
O     MEL1234  LUCENTRAL  MGRD11
O     blank    LUCENTRAL  CLERK
MEL1234 is translated to MGRD11 before it is sent to the LU that is specified in the LINKNAME column. All other IDs are translated to CLERK before they are sent to that LU. Exception: For a product other than DB2 UDB for z/OS, the actions at the remote location might be different. If you use a different product, check the documentation for that product. The remote product must satisfy the requirements that are imposed by the central subsystem.
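The entries in Table 90 are ordinary rows in the communications database. A sketch of how they might be inserted follows, assuming that you have the privileges that are required to update SYSIBM.USERNAMES; the blank AUTHID is represented here by a blank string:
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('O', 'MEL1234', 'LUCENTRAL', 'MGRD11');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('O', ' ', 'LUCENTRAL', 'CLERK');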
scans it for unusual patterns of access. A large number of accesses or an unusual pattern might reveal use of a manager's logon ID by an unauthorized employee.
The CHECK OPTION ensures that every row that is inserted or updated through the view conforms to the definition of the view. A second view, the PAYMGR view, gives Spiffy payroll managers access to any record, including records for the members of the payroll operations department. Example: The owner of the employee table uses the following statement to create the PAYMGR view:
CREATE VIEW PAYMGR AS
  SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO,
         HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE
  FROM DSN8810.EMP
  WITH CHECK OPTION;
Neither PAYDEPT nor PAYMGR provides access to compensation amounts. When a row is inserted for a new employee, the compensation amounts remain null. An update process can change these values at a later time. The owner of the employee table creates, owns, and grants privileges on both views. For information about granting privileges on these views, see Granting privileges to payroll operations and payroll management on page 307.
except those for their own department. After they verify the prospective changes, the managers of payroll operations run an application program. The program reads the payroll update table and makes the corresponding changes to the employee table. Only the payroll update program has the privilege of updating job, salary, and bonus in the employee table. Spiffy Computer Company calculates commission amounts separately by using a complicated formula. The formula considers the employee's job, department, years of service with the company, and responsibilities for various projects. The formula is embedded in the commission program, which is run regularly to insert new commission amounts in the payroll update table. The plan owner must have the SELECT privilege on the employee table and other tables to run the commission program.
This statement grants the privileges without the GRANT OPTION to keep members of payroll operations from granting privileges to other users. The payroll managers require different privileges and a different RACF group ID. The Spiffy security planners add a RACF group for payroll managers and name it PAYMGRS. The security planners associate the payroll managers' primary IDs with the PAYMGRS secondary ID. Next, privileges on the PAYMGR view, the compensation application, and the payroll update application are granted to PAYMGRS. The payroll update application must have the appropriate privileges on the update table. Example: The following statement grants the SELECT, INSERT, UPDATE, and DELETE privileges on the PAYMGR view to the payroll managers group ID PAYMGRS:
GRANT SELECT, INSERT, UPDATE, DELETE ON PAYMGR TO PAYMGRS;
Example: The following statement grants the EXECUTE privilege on the compensation application:
GRANT EXECUTE ON PLAN COMPENS TO PAYMGRS;
tasks and only for relatively short periods. They also know that the privileges that are associated with the SYSADM authority give an ID control over all of the data in a subsystem. To limit the number of users with SYSADM authority, the Spiffy security plan grants the authority to DB2OWNER, the ID that is responsible for DB2 security. That does not mean that only IDs that are connected to DB2OWNER can exercise privileges that are associated with SYSADM authority. Instead, DB2OWNER can grant privileges to a group, connect other IDs to the group as needed, and later disconnect them. The Spiffy security planners prefer to have multiple IDs with SYSCTRL authority instead of multiple IDs with SYSADM authority. IDs with SYSCTRL authority can exercise most of the SYSADM privileges and can assume much of the day-to-day work. IDs with SYSCTRL authority cannot access data directly or run plans unless the privileges for those actions are explicitly granted to them. However, they can run utilities, examine the output data sets, and grant privileges that allow other IDs to access data. Therefore, IDs with SYSCTRL authority can access some sensitive data, but they cannot easily access the data. As part of the Spiffy security plan, DB2OWNER grants SYSCTRL authority to selected IDs. The Spiffy security planners also use RACF group IDs or secondary IDs to relieve the need to have SYSADM authority continuously available. SYSADM grants the necessary privileges to a RACF group or secondary ID. IDs that belong to the group can then bind plans and packages on their own behalf.
Part 4. Operation and recovery . . . 311

Chapter 15. Basic operation . . . 317

Chapter 16. Scheduling administrative tasks . . . 335
  Interacting with the administrative task scheduler
    Adding a task
      Scheduling capabilities of the administrative task scheduler
      Defining task schedules
      Choosing an administrative task scheduler in a data sharing environment
      ADMIN_TASK_ADD
      UNIX cron format
    Listing scheduled tasks
    Listing the last execution status of scheduled tasks
    Removing a scheduled task
      ADMIN_TASK_REMOVE
    Manually starting the administrative task scheduler
    Manually stopping the administrative task scheduler
    Synchronization between administrative task schedulers in a data sharing environment
    Troubleshooting the administrative task scheduler
      Enabling tracing for administrative task scheduler problem determination
      Recovering the administrative task scheduler task list
      Problem executing a task
      Problem in user-defined table functions
      Problem in stored procedures
  Architecture of the administrative task scheduler
    The lifecycle of the administrative task scheduler
    Scheduler task lists
    Architecture of the administrative task scheduler in a data sharing environment
  Security guidelines for the administrative task scheduler
    User roles in the administrative task scheduler
    Protection of the interface of the administrative task scheduler
    Protection of the resources of the administrative task scheduler
    Secure execution of tasks in the administrative task scheduler
  Execution of scheduled tasks in the administrative task scheduler
    Multi-threading in the administrative task scheduler
    Scheduled execution of a stored procedure
    How the administrative task scheduler works with Unicode
    Scheduled execution of a JCL job
    Execution of scheduled tasks in a data sharing environment

Chapter 17. Monitoring and controlling DB2 and its connections . . . 367
  Controlling DB2 databases and buffer pools
    Starting databases
      Starting an object with a specific status
      Starting a table space or index space that has restrictions
    Monitoring databases
      Obtaining information about application programs
      Obtaining information about pages in error
    Stopping databases
    Altering buffer pools
    Monitoring buffer pools
  Controlling user-defined functions
    Starting user-defined functions
    Monitoring user-defined functions
    Stopping user-defined functions
  Controlling DB2 utilities
    Starting online utilities
    Monitoring online utilities
    Stand-alone utilities
  Controlling the IRLM
    Starting the IRLM
    Modifying the IRLM
    Monitoring the IRLM connection
    Stopping the IRLM
  Monitoring threads
  Controlling TSO connections
    Connecting to DB2 from TSO
    Monitoring TSO and CAF connections
    Disconnecting from DB2 while under TSO
  Controlling CICS connections
    Connecting from CICS
      Restarting CICS
      Displaying indoubt units of recovery
      Recovering indoubt units of recovery manually
      Displaying postponed units of recovery
    Controlling CICS connections
      Defining CICS threads
      Monitoring the threads
      Disconnecting applications
    Disconnecting from CICS
      Orderly termination
      Forced termination
  Controlling IMS connections
    Connecting to the IMS control region
      Thread attachment
      Thread termination
      Displaying indoubt units of recovery
      Recovering indoubt units of recovery
      Displaying postponed units of recovery
      Duplicate correlation IDs
      Resolving residual recovery entries
    Controlling IMS dependent region connections
      Connecting from dependent regions
      Monitoring the activity on connections
      Disconnecting from dependent regions
    Disconnecting from IMS
  Controlling RRS connections
    Connecting to RRS using RRSAF
    Restarting DB2 and RRS
      Displaying indoubt units of recovery
      Recovering indoubt units of recovery manually
      Displaying postponed units of recovery
    Monitoring RRSAF connections
      Displaying RRSAF connections
      Disconnecting RRSAF applications from DB2
  Controlling connections to remote systems
    Starting the DDF
    Suspending and resuming DDF server activity
    Monitoring connections to other systems
      The command DISPLAY DDF
      The command DISPLAY LOCATION
      The command DISPLAY THREAD
      Canceling dynamic SQL from a client application
      The command CANCEL THREAD
      Using VTAM commands to cancel threads
    Monitoring and controlling stored procedures
      Displaying information about stored procedures and their environment
      Refreshing the environment for stored procedures or user-defined functions
      Obtaining diagnostic information about stored procedures
    Using NetView to monitor errors
    Stopping the DDF
  Controlling traces
    Controlling the DB2 trace
    Diagnostic traces for attachment facilities
    Diagnostic trace for the IRLM
  Controlling the resource limit facility (governor)
  Changing subsystem parameter values

Chapter 18. Managing the log and the bootstrap data set . . . 427
  How database changes are made
    Units of recovery
    Rolling back work
  Establishing the logging environment
    Creation of log records
    Retrieval of log records
    Writing the active log
    Writing the archive log (offloading)
      Triggering offload
      The offloading process
      Archive log data sets
  Controlling the log
    Archiving the log
    Dynamically changing the checkpoint frequency
    Monitoring the system checkpoint
    Setting limits for archive log tape units
    Displaying log information
  Resetting the log RBA
    Log RBA range
    Resetting the log RBA value in a data sharing environment
    Resetting the log RBA value in a non-data sharing environment
  Managing the bootstrap data set (BSDS)
    BSDS copies with archive log data sets
    Changing the BSDS log inventory
  Discarding archive log records
    Deleting archive logs automatically
    Locating archive log data sets

Chapter 19. Restarting DB2 after termination . . . 447
  Termination
    Normal termination
    Abends
  Normal restart and recovery
    Phase 1: Log initialization
    Phase 2: Current status rebuild
    Phase 3: Forward log recovery
    Phase 4: Backward log recovery
    Restarting automatically
  Deferring restart processing
  Restarting with conditions
    Resolving postponed units of recovery
      Errors encountered during RECOVER POSTPONED processing
      Output from RECOVER POSTPONED processing
    Choosing recovery operations for conditional restart
    Conditional restart records

Chapter 20. Maintaining consistency across multiple systems . . . 459
  Multiple system consistency
    The two-phase commit process
    Illustration of two-phase commit
    Maintaining consistency after termination or failure
    Termination for multiple systems
    Normal restart and recovery for multiple systems
      Phase 1: Log initialization
      Phase 2: Current status rebuild
      Phase 3: Forward log recovery
      Phase 4: Backward log recovery
    Restarting multiple systems with conditions
  Resolving indoubt units of recovery
    Resolution of IMS indoubt units of recovery
    Resolution of CICS indoubt units of recovery
    Resolution of WebSphere Application Server indoubt units of recovery
    Resolution of remote DBMS indoubt units of recovery
      Making heuristic decisions
      Methods for determining the coordinator's commit or abort decision
      Displaying information about indoubt threads
      Recovering indoubt threads
      Resetting the status of an indoubt thread
    Resolution of RRS indoubt units of recovery
  Consistency across more than two systems
    Commit coordinator and multiple participants
    Illustration of multi-site update

Chapter 21. Backing up and recovering databases . . . 475
  Planning for backup and recovery
    Considerations for recovering distributed data
    Extended recovery facility (XRF) toleration
    Considerations for recovering indexes
    Preparing for recovery
    Events that occur during recovery
      Complete recovery cycles
      A recovery cycle example
      How DFSMShsm affects your recovery environment
    Maximizing data availability during backup and recovery
    How to find recovery information
      Where recovery information resides
      Reporting recovery information
    Preparing to recover to a prior point of consistency
      Step 1: Resetting exception status
      Step 2: Copying the data
      Step 3: Establishing a point of consistency
    Preparing to recover the entire DB2 subsystem to a prior point-in-time
    Preparing for disaster recovery
      System-wide points of consistency
      Essential disaster recovery elements
    Ensuring more effective recovery from inconsistency
      Actions to take
      Actions to avoid
    Running the RECOVER utility in parallel
    Using fast log apply during RECOVER
    Reading the log without RECOVER
  Copying page sets and data sets
    Backing up with DFSMS
    Backing up with RVA storage control or Enterprise Storage Server
  Recovering page sets and data sets
    Recovering the work file database
    Problems with the user-defined work file data set
    Problems with DB2-managed work file data sets
    Recovering error ranges for a work file table space
  Recovering the catalog and directory
  Recovering data to a prior point of consistency
    Considerations for recovering to a prior point of consistency
      Recovering table spaces
      Recovering tables that contain identity columns
      Recovering indexes
      Recovering catalog and directory tables
    Using RECOVER to restore data to a previous point-in-time
      Using the TOCOPY, TOLASTCOPY, and TOLASTFULLCOPY options
      Using the TOLOGPOINT option
      Planning for point-in-time recovery
      Ensuring consistency
    Restoring data by using DSN1COPY
    Backing up and restoring data with non-DB2 dump and restore
  Recovery of dropped objects
    Avoiding accidentally dropping objects
    Procedures for recovery
      Recovery of an accidentally dropped table
      Recovery of an accidentally dropped table space
  Discarding SYSCOPY and SYSLGRNX records
  System-level point-in-time recovery
    System-level backup and restore
    Recovery to a given point-in-time
    Recovery to the point-in-time of a backup
    Remote site recovery from a disaster at a local site

Chapter 22. Recovery scenarios . . . 521
  IRLM failure recovery
  z/OS or power failure recovery
  Disk failure recovery
  Application error recovery
  IMS-related failure recovery
    IMS control region (CTL) failure recovery
    Resolution of indoubt units of recovery
      Problem 1
      Problem 2
    IMS application failure recovery
      Problem 1
      Problem 2
  CICS-related failure recovery
    CICS application failure recovery
    Recovery when CICS is not operational
    Recovery when CICS cannot connect to DB2
    Manually recovering CICS indoubt units of recovery
    CICS attachment facility failure recovery
  Subsystem termination recovery
  Resource failure recovery
    Active log failure recovery
      Problem 1 - Out of space in active logs
      Problem 2 - Write I/O error on active log data set
      Problem 3 - Dual logging is lost
      Problem 4 - I/O errors while reading the active log
    Archive log failure recovery
      Problem 1 - Allocation problems
      Problem 2 - Write I/O errors during archive log offload
      Problem 3 - Read I/O errors on archive data set during recover
      Problem 4 - Insufficient disk space for offload processing
    Temporary resource failure recovery
    BSDS failure recovery
      Problem 1 - An I/O error occurs
      Problem 2 - An error occurs while opening
      Problem 3 - Unequal timestamps exist
    Recovering the BSDS from a backup copy
  DB2 database failures
  Recovering a DB2 subsystem to a prior point in time
  Recovery from down-level page sets
  Procedure for recovering invalid LOBs
  Table space input/output error recovery
  DB2 catalog or directory input/output errors
  Integrated catalog facility failure recovery
    Recovery when the VSAM volume data set (VVDS) is destroyed
    Out of disk space or extent limit recovery
  Violation of referential constraint recovery
  Distributed data facility failure recovery
    Conversation failure recovery
    Communications database failure recovery
      Problem 1
      Problem 2
    Database access thread failure recovery
    VTAM failure recovery
    TCP/IP failure recovery
    Remote logical unit failure recovery
    Indefinite wait condition recovery
    Security failure recovery for database access threads
  Remote site recovery from a disaster at the local site
    Restoring data from image copies and archive logs
    Using a tracker site for disaster recovery
      Characteristics of a tracker site
      Setting up a tracker site
      Using RESTORE SYSTEM LOGONLY to establish a recovery cycle
      Using the RECOVER utility to establish a recovery cycle
      Media failures during LOGONLY recovery
      Maintaining the tracker site
      Making the tracker site the takeover site
    Using data mirroring
      The rolling disaster
      Consistency groups
      Recovering in a data mirroring environment
      Recovering with Extended Remote Copy
  Resolving indoubt threads
    Description of the recovery environment
      Configuration
      Applications
      Threads
    Communication failure recovery
    Making a heuristic decision
    Recovery from an IMS outage that results in an IMS cold start
    Recovery from a DB2 outage at a requester that results in a DB2 cold start
    Recovery from a DB2 outage at a server that results in a DB2 cold start
    Correcting a heuristic decision

Chapter 23. Recovery from BSDS or log failure during restart . . . 597
  Log initialization or current status rebuild failure recovery
    Description of failure during log initialization
    Description of failure during current status rebuild
    Restart by truncating the log
      Step 1: Find the log RBA after the inaccessible part of the log
      Step 2: Identify lost work and inconsistent data
      Step 3: Determine what status information has been lost
      Step 4: Truncate the log at the point of error
      Step 5: Start DB2
      Step 6: Resolve data inconsistency problems
  Failure during forward log recovery
    Understanding forward log recovery failure
    Starting DB2 by limiting restart processing
      Step 1: Find the log RBA after the inaccessible part of the log
      Step 2: Identify incomplete units of recovery and inconsistent page sets
      Step 3: Restrict restart processing to the part of the log after the damage
      Step 4: Start DB2
      Step 5: Resolve inconsistent data problems
  Failure during backward log recovery
    Understanding backward log recovery failure
    Bypassing backout before restarting
  Failure during a log RBA read request
  Unresolvable BSDS or log data set problem during restart
    Preparing for recovery or restart
    Performing fall back to a prior shutdown point
  Failure resulting from total or excessive loss of log data
    Total loss of the log
    Excessive loss of data in the active log
  Resolving inconsistencies resulting from a conditional restart
    Inconsistencies in a distributed environment
    Procedures for resolving inconsistencies
    Method 1. Recover to a prior point of consistency
    Method 2. Re-create the table space
    Method 3. Use the REPAIR utility on the data
Entering commands
You can control most of the operational environment by using DB2 commands. You might need to use other types of commands, including:
v IMS commands that control IMS connections
v CICS commands that control CICS connections
v IMS and CICS commands that allow you to start and stop connections to DB2 and display activity on the connections
v z/OS commands that allow you to start, stop, and change the internal resource lock manager (IRLM)
Using these commands is described in Chapter 17, Monitoring and controlling DB2 and its connections, on page 367. For a full description of the commands available, see Chapter 2 of DB2 Command Reference.
DISPLAY LOG
  Displays the current checkpoint frequency (CHKFREQ) value, information about the current active log data sets, and the status of the offload task.
DISPLAY PROCEDURE
  Displays statistics about stored procedures accessed by DB2 applications.
DISPLAY RLIMIT
  Displays the status of the resource limit facility (governor).
DISPLAY THREAD
  Displays information about DB2, distributed subsystem connections, and parallel tasks.
DISPLAY TRACE
  Displays the status of DB2 traces.
DISPLAY UTILITY
  Displays the status of a utility.
MODIFY TRACE
  Changes the trace events (IFCIDs) being traced for a specified active trace.
RECOVER BSDS
  Reestablishes dual bootstrap data sets.
RECOVER INDOUBT
  Recovers threads left indoubt after DB2 is restarted.
RECOVER POSTPONED
  Completes backout processing for units of recovery (URs) whose backout was postponed during an earlier restart, or cancels backout processing of the postponed URs if the CANCEL option is used.
RESET INDOUBT
  Purges DB2 information about indoubt threads.
SET ARCHIVE
  Controls or sets the limits for the allocation and the deallocation time of the tape units for archive log processing.
SET LOG
  Modifies the checkpoint frequency (CHKFREQ) value dynamically without changing the value in the subsystem parameter load module.
SET SYSPARM
  Loads the subsystem parameter module specified in the command.
START DATABASE
  Starts a list of databases or table spaces and index spaces.
START DB2
  Initializes the DB2 subsystem.
START DDF
  Starts the distributed data facility.
START FUNCTION SPECIFIC
  Activates an external function that is stopped.
START PROCEDURE
  Starts a stored procedure that is stopped.
START RLIMIT
  Starts the resource limit facility (governor).
START TRACE
  Starts DB2 traces.
STOP DATABASE
  Stops a list of databases or table spaces and index spaces.
STOP DB2
  Stops the DB2 subsystem.
STOP DDF
  Stops or suspends the distributed data facility.
STOP FUNCTION SPECIFIC
  Prevents DB2 from accepting SQL statements with invocations of the specified functions.
STOP PROCEDURE
  Prevents DB2 from accepting SQL CALL statements for a stored procedure.
STOP RLIMIT
  Stops the resource limit facility (governor).
STOP TRACE
  Stops traces.
TERM UTILITY
  Terminates execution of a utility.
Recommendation: Use the same character for the CRC and the command prefix for a single DB2 subsystem. The command prefix must be a single character; otherwise, you cannot make these identifiers match. The examples in this book assume that both the command prefix and the CRC are the hyphen (-). However, if you can attach to more than one DB2 subsystem, you must issue your commands using the appropriate CRC. In the following example, the CRC is a question mark character. You enter:
/SSR ?DISPLAY THREAD
From a CICS terminal: You can enter all DB2 commands except START DB2 from a CICS terminal authorized to enter the DSNC transaction code. For example, you enter:
DSNC -DISPLAY THREAD
CICS can attach to only one DB2 subsystem at a time; therefore, CICS does not use the DB2 command prefix. Instead, each command entered through the CICS attachment facility must be preceded by a hyphen (-), as in the previous example. The CICS attachment facility routes the commands to the connected DB2 subsystem and obtains the command responses.
From a TSO terminal: You can enter all DB2 commands except START DB2 from a DSN session. Example: The system displays:
READY
You enter:
DSN SYSTEM (subsystem-name)
You enter:
-DISPLAY THREAD
A TSO session can attach to only one DB2 subsystem at a time; therefore, TSO does not use the DB2 command prefix. Instead, each command that is entered through the TSO attachment facility must be preceded by a hyphen (-), as the preceding example demonstrates. The TSO attachment facility routes the command to DB2 and obtains the command response.
All DB2 commands except START DB2 can also be entered from a DB2I panel using option 7, DB2 Commands. For more information about using DB2I, see Using DB2I (DB2 Interactive) on page 327.
From an APF-authorized program: As with IMS, DB2 commands (including START DB2) can be passed from an APF-authorized program to multiple DB2 subsystems by the MGCRE (SVC 34) z/OS service. Thus, the value of the command prefix identifies the particular subsystem to which the command is directed. The subsystem command prefix is specified, as in IMS, when DB2 is installed (in the SYS1.PARMLIB member IEFSSNxx). DB2 supports the z/OS WTO Command And Response Token (CART) to route individual DB2 command response messages back to the invoking application program. Use of the CART token is necessary if multiple DB2 commands are issued from a single application program. For example, to issue DISPLAY THREAD to the default DB2 subsystem from an APF-authorized program run as a batch job, code:
MODESUPV DS    0H
         MODESET MODE=SUP,KEY=ZERO
SVC34    SR    0,0
         MGCRE CMDPARM
         EJECT
CMDPARM  DS    0F
CMDFLG1  DC    X'00'
CMDLENG  DC    AL1(CMDEND-CMDPARM)
CMDFLG2  DC    X'0000'
CMDDATA  DC    C'-DISPLAY THREAD'
CMDEND   DS    0C
From an IFI application program: An application program can issue DB2 commands using the instrumentation facility interface (IFI). The IFI application program protocols are available through the IMS, CICS, TSO, and call attachment facility (CAF) attaches, and the Resource Recovery Services attachment facility. For an example in which the DB2 START TRACE command for monitor class 1 is issued, see COMMAND: Syntax and usage with IFI on page 1160.
If a DB2 command is entered from an IMS or CICS terminal, the response messages can be directed to different terminals. If the response includes more than one message, the following cases are possible:
v If the messages are issued in a set, the entire set of messages is sent to the IMS or CICS terminal that entered the command. For example, DISPLAY THREAD issues a set of messages.
v If the messages are issued one after another, and not in a set, only the first message is sent to the terminal that entered the command. Later messages are routed to one or more z/OS consoles via the WTO function. For example, START DATABASE issues several messages one after another. You can choose alternate consoles to receive the subsequent messages by assigning them the routing codes placed in the DSNZPxxx module when DB2 is installed. If you want to have all of the messages available to the person who sent the command, route the output to a console near the IMS or CICS master terminal.
For APF-authorized programs that run in batch jobs, command responses are returned to the master console and to the system log if hard copy logging is available. Hard copy logging is controlled by the z/OS system command VARY. See z/OS MVS System Commands for more information.
APF-authorized programs that issue commands through MGCRE (SVC 34) have SYSOPR authority unless DB2 can determine the program's RACF user ID; in that case, DB2 uses that user ID for authorization. To avoid errors, the user should obtain SYSOPR authority on those DB2 subsystems. The authority to start or stop any particular database must be specifically granted to an ID with SYSOPR authority. Likewise, an ID with SYSOPR authority must be granted specific authority to issue the RECOVER BSDS and ARCHIVE LOG commands. The SQL GRANT statement can be used to grant SYSOPR authority to other user IDs, such as the /SIGN user ID or the LTERM of the IMS master terminal. For information about other DB2 authorization levels, see Establishing RACF protection for DB2 on page 264. DB2 Command Reference also has authorization-level information for specific commands.
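For example, such a grant might look like the following sketch, where MTOUSER is a hypothetical stand-in for the /SIGN user ID or the IMS master terminal LTERM at your site:

GRANT SYSOPR TO MTOUSER;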
Starting DB2
When installed, DB2 is defined as a formal z/OS subsystem. Afterward, the following message appears during any IPL of z/OS:
DSN3100I - DSN3UR00 - SUBSYSTEM ssnm READY FOR -START COMMAND
where ssnm is the DB2 subsystem name. At that point, you can start DB2 from a z/OS console that has been authorized to issue system control commands (z/OS command group SYS) by entering the command START DB2. The command must be entered from the authorized console, not submitted through JES or TSO. It is not possible to start DB2 by a JES batch job or a z/OS START command; the attempt is likely to start an address space for DB2 that then abends, probably with reason code X'00E8000F'. You can also start DB2 from an APF-authorized program by passing a START DB2 command to the MGCRE (SVC 34) z/OS service.
Messages at start
The system responds with some or all of the following messages depending on which parameters you chose:
$HASP373 xxxxMSTR STARTED
DSNZ002I - SUBSYS ssnm SYSTEM PARAMETERS LOAD MODULE NAME IS dsnzparm-name
DSNY001I - SUBSYSTEM STARTING
DSNJ127I - SYSTEM TIMESTAMP FOR BSDS=87.267 14:24:30.6
DSNJ001I - csect CURRENT COPY n ACTIVE LOG DATA SET IS DSNAME=..., STARTRBA=...,ENDRBA=...
DSNJ099I - LOG RECORDING TO COMMENCE WITH STARTRBA = xxxxxxxxxxxx
$HASP373 xxxxDBM1 STARTED
DSNR001I - RESTART INITIATED
DSNR003I - RESTART...PRIOR CHECKPOINT RBA=xxxxxxxxxxxx
DSNR004I - RESTART...UR STATUS COUNTS...
           IN COMMIT=nnnn, INDOUBT=nnnn, INFLIGHT=nnnn, IN ABORT=nnnn, POSTPONED ABORT=nnnn
DSNR005I - RESTART...COUNTS AFTER FORWARD RECOVERY
           IN COMMIT=nnnn, INDOUBT=nnnn
DSNR006I - RESTART...COUNTS AFTER BACKWARD RECOVERY
           INFLIGHT=nnnn, IN ABORT=nnnn, POSTPONED ABORT=nnnn
DSNR002I - RESTART COMPLETED
DSN9002I - DSNYASCP 'START DB2' NORMAL COMPLETION
DSNV434I - DSNVRP NO POSTPONED ABORT THREADS FOUND
DSN9022I - DSNVRP 'RECOVER POSTPONED' NORMAL COMPLETION
If any of the nnnn values in message DSNR004I are not zero, message DSNR007I is issued to provide the restart status table. The START DB2 command starts the system services address space, the database services address space, and, depending upon specifications in the load module for subsystem parameters (DSNZPARM by default), the distributed data facility address space and the DB2-established stored procedures address space. Optionally, another address space, the internal resource lock manager (IRLM), can be started automatically.
Options at start
Starting invokes the load module for subsystem parameters. This load module contains information specified when DB2 was installed. For example, the module contains the name of the IRLM to connect to. In addition, it indicates whether the distributed data facility (DDF) is available and, if it is, whether it should be automatically started when DB2 is started. For information about using a command to start DDF, see Starting the DDF on page 405. You can specify PARM (module-name) on the START DB2 command to provide a parameter module other than the one specified at installation. A conditional restart operation is available, but the START DB2 command has no parameter that indicates normal or conditional restart. For information about conditional restart, see Restarting with conditions on page 455.
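For example, assuming the default parameter module name DSNZPARM mentioned earlier, the command might look like this:

-START DB2 PARM(DSNZPARM)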
space from the console. After DB2 stops, check the start procedures of all three DB2 address spaces for correct JCL syntax. (In a data sharing environment, see Data Sharing: Planning and Administration for more information.) To accomplish the check, compare the expanded JCL in the SYSOUT output against the correct JCL provided in z/OS MVS JCL User's Guide or z/OS MVS JCL Reference. Then, take the member name of the erroneous JCL procedure, also provided in the SYSOUT, to the system programmer who maintains your procedure libraries. After finding out which procedure library contains the JCL in question, locate the procedure and correct it.
Stopping DB2
Before stopping, all DB2-related write to operator with reply (WTOR) messages must receive replies. Then one of the following commands terminates the subsystem:
-STOP DB2 MODE(QUIESCE)
-STOP DB2 MODE(FORCE)
For the effects of the QUIESCE and FORCE options, see Normal termination on page 447. In a data sharing environment, see Data Sharing: Planning and Administration. The following messages are returned:
DSNY002I - SUBSYSTEM STOPPING
DSN9022I - DSNYASCP '-STOP DB2' NORMAL COMPLETION
DSN3104I - DSN3EC00 - TERMINATION COMPLETE
Before DB2 can be restarted, the following message must also be returned to the z/OS console that is authorized to enter the START DB2 command:
DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
If the STOP DB2 command is not issued from a z/OS console, messages DSNY002I and DSN9022I are not sent to the IMS or CICS master terminal operator. They are routed only to the z/OS console that issued the START DB2 command.
Submitting work
An application program running under TSO, IMS, or CICS can make use of DB2 resources by executing embedded SQL statements. How to run application programs from those environments is explained under:
Running TSO application programs on page 327
Running IMS application programs on page 328
Running CICS application programs on page 329
Running batch application programs on page 330
Running application programs using CAF on page 330
Running application programs using RRSAF on page 331

In each case, there are some conditions that the application program must meet to embed SQL statements and to authorize the use of DB2 resources and data. All application programming defaults, including the subsystem name that the programming attachments discussed here use, are in the DSNHDECP load module. Make sure your JCL specifies the proper set of program libraries.
The following example runs application program DSN8BC3. The program is in library prefix.RUNLIB.LOAD, the name assigned to the load module library.
DSN SYSTEM (subsystem-name)
RUN PROGRAM (DSN8BC3) PLAN(DSN8BH81) LIB ('prefix.RUNLIB.LOAD')
END
A TSO application program that you run in a DSN session must be link-edited with the TSO language interface program (DSNELI). The program cannot include IMS DL/I calls because that requires the IMS language interface module (DFSLI000).
The terminal monitor program (TMP) attaches the DB2-supplied DSN command processor, which in turn attaches the application program. The DSN command starts a DSN session, which provides a variety of subcommands and other functions. The DSN subcommands are:

ABEND
  Causes the DSN session to terminate with a DB2 X'04E' abend completion code and a DB2 abend reason code of X'00C50101'.
BIND PACKAGE
  Generates an application package.
BIND PLAN
  Generates an application plan.
DCLGEN
  Produces SQL and host language declarations.
END
  Ends the DB2 connection and returns to TSO.
FREE PACKAGE
  Deletes a specific version of a package.
FREE PLAN
  Deletes an application plan.
REBIND PACKAGE
  Regenerates an existing package.
REBIND PLAN
  Regenerates an existing plan.
RUN
  Executes a user application program.
SPUFI
  Invokes a DB2I facility for executing SQL statements that are not embedded in an application program.

You can also issue the following DB2 and TSO commands from a DSN session:
v Any TSO command except TIME, TEST, FREE, and RUN.
v Any DB2 command except START DB2. For a list of those commands, see DB2 operator commands on page 318.

DB2 uses the following sources to find an authorization for access by the application program. DB2 checks the first source listed; if it is unavailable, it checks the second source, and so on.
1. RACF USER parameter supplied at logon
2. TSO logon user ID
3. Site-chosen default authorization ID
4. IBM-supplied default authorization ID

Either the RACF USER parameter or the TSO user ID can be modified by a locally defined authorization exit routine.
Application programs that contain SQL statements run in message processing program (MPP) regions, batch message processing (BMP) regions, Fast Path regions, or IMS batch regions. The program must be link-edited with the IMS language interface module (DFSLI000). In addition to accessing DL/I and Fast Path resources, it can write to and read from other database management systems by using the distributed data facility.

DB2 checks whether the authorization ID provided by IMS is valid. For message-driven regions, IMS uses the SIGNON-ID or LTERM as the authorization ID. For non-message-driven regions and batch regions, IMS uses the ASXBUSER field (if RACF or another security package is active). The ASXBUSER field is defined by z/OS as 7 characters. If the ASXBUSER field contains binary zeros or blanks (RACF or another security package is not active), IMS uses the PSB name instead. See Chapter 11, Controlling access to a DB2 subsystem, on page 231 for more information about DB2 authorization IDs.

An IMS terminal operator probably notices few differences between application programs that access DB2 data and programs that access DL/I data, because IMS sends no messages relating to DB2 to a terminal operator. However, your program can signal DB2 error conditions with a message of your choice. For example, at the program's first SQL statement, it receives an SQL error code if the resources to run the program are not available or if the operator is not authorized to use the resources. The program can interpret the code and issue an appropriate message to the operator.

Running IMS batch work: You can run batch DL/I jobs to access DB2 resources; DB2-DL/I batch support uses the IMS attachment facility. See Part 5 of DB2 Application Programming and SQL Guide for more information about application programs and DL/I batch. See IMS Application Programming: Design Guide for more information about recovery and DL/I batch.
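A minimal sketch of the kind of batch job that the following notes describe appears below; the job name and accounting information are illustrative only, and the program, plan, and library names are those of the earlier TSO example:

//RUNTSO   JOB (ACCT),'BATCH TMP',USER=SYSOPR
//TMP      EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM (ssid)
 RUN PROGRAM (DSN8BC3) PLAN(DSN8BH81) LIB ('prefix.RUNLIB.LOAD')
 END
/*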
In the example:
v IKJEFT01 identifies an entry point for TSO TMP invocation. Alternate entry points defined by TSO are also available to provide additional return code and ABEND termination processing options. These options permit the user to select the actions to be taken by the TMP upon completion of command or program execution. Because invocation of the TSO TMP using the IKJEFT01 entry point might not be suitable for all user environments, refer to the TSO publications to determine which TMP entry point provides the termination processing options best suited to your batch execution environment.
v USER=SYSOPR identifies the user ID (SYSOPR in this case) for authorization checks.
v DYNAMNBR=20 indicates the maximum number of data sets (20 in this case) that can be dynamically allocated concurrently.
v z/OS checkpoint and restart facilities do not support the execution of SQL statements in batch programs invoked by RUN. If batch programs stop because of errors, DB2 backs out any changes made since the last commit point. For information about backup and recovery, see Chapter 21, Backing up and recovering databases, on page 475. For an explanation of backing out changes to data when a batch program run in the TSO background abends, see Part 5 of DB2 Application Programming and SQL Guide.
v (ssid) is the subsystem name or group attachment name.
IMS batch applications can also access DB2 databases through CAF, though this method does not coordinate the commitment of work between the IMS and DB2 systems. Using the DB2 DL/I batch support for IMS batch applications is highly recommended.

To use CAF, you must first make available a load module known as the call attachment language interface, DSNALI. When the language interface is available, your program can use CAF to connect to DB2 in two ways:
v Implicitly, by including SQL statements or IFI calls in your program just as you would in any program
v Explicitly, by writing CALL DSNALI statements

For an explanation of CAF's capabilities and how to use it, see Part 6 of DB2 Application Programming and SQL Guide.
Receiving messages
DB2 message identifiers have the form DSNcxxxt, where:
DSN
  Is the unique DB2 message prefix.
c
  Is a 1-character code identifying the DB2 subcomponent that issued the message. For example:
  2  CICS attachment facility that is shipped with CICS
  M  IMS attachment facility
  U  Utilities
xxx
  Is the message number.
t
  Is the message type, with these values and meanings:
  A  Immediate action
  D  Immediate decision
  E  Eventual action
  I  Information only
See DB2 Messages for an expanded description of message types. A command prefix, identifying the DB2 subsystem, precedes the message identifier, except in messages from the CICS and IMS attachment facilities. (The CICS attachment facility issues messages in the form DSN2xxxt, and the IMS attachment facility issues messages in the form DSNMxxxt.) CICS and IMS attachment facility messages identify the z/OS subsystem that generated the message. The IMS attachment facility issues messages that are identified as SSNMxxxx and as DFSxxxx. The DFSxxxx messages are produced by IMS, under which the IMS attachment facility operates.
Table 91. Operational control summary (continued)

Type of operation                                     z/OS console  TSO terminal  IMS master terminal  Authorized CICS terminal
Receive IMS attachment facility unsolicited output    No (note 3)   No            Yes                  No
Issue CICS commands                                   Yes (note 4)  No            No                   Yes
Receive CICS attachment facility unsolicited output   No (note 3)   No            No                   Yes (note 5)

Notes:
1. Except START DB2. Commands issued from IMS must have the prefix /SSR. Commands issued from CICS must have the prefix DSNC.
2. Using outstanding WTOR.
3. Attachment facility unsolicited output does not include DB2 unsolicited output; for the latter, see Receiving unsolicited DB2 messages on page 332.
4. Use the z/OS command MODIFY jobname, CICS command. The z/OS console must already be defined as a CICS terminal.
5. Specify the output destination for the unsolicited output of the CICS attachment facility in the RCT.
Adding a task
You can use the stored procedure ADMIN_TASK_ADD to define new scheduled tasks. The parameters that you use when you call the stored procedure define the schedule and the work for each task. The request and the parameters are transmitted to the administrative task scheduler that is associated with the DB2 subsystem where the stored procedure is called. The parameters are checked and, if they are valid, the task is added to the scheduler task list with a unique task name. The task name and the return code are returned to the stored procedure for output. At the same time, the scheduler analyzes the task to schedule its next execution.
Only one of these definitions can be specified for any single task. The other parameters must be null.
Table 92. Relationship of null and non-null values for scheduling parameters

Parameter specified                         Required null parameters
interval                                    point-in-time, trigger-task-name, trigger-task-cond, trigger-task-code
point-in-time                               interval, trigger-task-name, trigger-task-cond, trigger-task-code
trigger-task-name alone                     interval, point-in-time, trigger-task-cond, trigger-task-code
trigger-task-name with trigger-task-cond
  and trigger-task-code                     interval, point-in-time
If interval, point-in-time, trigger-task-name, trigger-task-cond, and trigger-task-code are all null, max-invocations must be set to 1.

You can restrict scheduled executions either by defining a window of time during which execution is permitted or by specifying how many times a task can execute. Three parameters control restrictions:
v begin-timestamp: earliest permitted execution time
v end-timestamp: latest permitted execution time
v max-invocations: maximum number of executions

The begin-timestamp and end-timestamp parameters are timestamps that define a window of time during which tasks can start. Before and after this window, the task does not start even if the schedule parameters are met. If begin-timestamp is null, the window begins at the time when the task is added, and executions can start immediately. If end-timestamp is null, the window extends infinitely into the future, so that repetitive or triggered executions are not limited by time. Timestamps must either be null values or future times, and end-timestamp cannot be earlier than begin-timestamp.

For repetitive or triggered tasks, the number of executions can be limited with the max-invocations parameter. In this case, the task executes no more than the number of times indicated by the parameter, even if the schedule and the window of time would allow further executions. Executions that are skipped because they overlap with previous executions that are still running are not counted toward max-invocations. The max-invocations parameter defines a limit but no requirement; if the task is executed fewer times than indicated during its execution window, the maximum number of executions is never reached.
The following ADMIN_TASK_ADD parameters provide control over when scheduled tasks execute:
v interval
v point-in-time
v trigger-task-name
v trigger-task-cond
v trigger-task-code
v begin-timestamp
v end-timestamp
v max-invocations

To define a new scheduled task, connect to the DB2 subsystem with sufficient authorization to call the stored procedure ADMIN_TASK_ADD. The following task definitions show some common scheduling options.
To define a task that executes one time only:
Set max-invocations to 1. Optionally, provide a value for the begin-timestamp parameter to control when execution happens. Leave the other parameters null.
For example, if max-invocations is set to 1 and begin-timestamp is set to 2008-05-27-06.30.0, the task executes at 6:30 AM on May 27, 2008.
With this definition, the task executes one time. If begin-timestamp has been provided, execution happens as soon as permitted.

To define a regular repetitive execution:
Set interval to the number of minutes that you want to pass between the start of one execution and the start of the next execution. Optionally, provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution. Leave the other parameters null.
For example, if interval is set to 5 and begin-timestamp is set to 2008-05-27-06.30.0, the task executes at 6:30 AM on May 27, 2008, then again at 6:35, 6:40, and so forth.
With this definition, the task executes every interval minutes, so long as the previous execution has finished. If the previous execution is still in progress, the new execution is postponed interval minutes. Execution continues to be postponed until the running task completes.
To define an execution at one or more fixed points in time:
Set point-in-time to a valid UNIX cron format string. The string specifies a set of times. Optionally, provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution. Leave the other parameters null.
For example, if point-in-time is set to 0 22 * * 1,5, the task executes at 10:00 PM each Monday and Friday.
With this definition, the task executes at each time specified, so long as the previous execution has finished. If the previous execution is still in progress, the new execution is skipped. Subsequent executions continue to be skipped until the running task completes.
To define an execution that is triggered when another task completes:
Set trigger-task-name to the name of the triggering task. Optionally, set trigger-task-cond and trigger-task-code to limit execution based on the result of the triggering task. The trigger-task-cond and trigger-task-code parameters must either both be null or both be non-null. Optionally, provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution. Leave the other parameters null.
For example, assume that a scheduled INSERT job has a task name of test_task. If trigger-task-name is test_task, trigger-task-cond is EQ, and trigger-task-code is 0, this task executes when the INSERT job completes with a return code of 0.
With this definition, the task executes each time the triggering task completes, so long as the previous execution has finished. If the previous execution is still in progress, the new execution is skipped. Subsequent executions continue to be skipped until the running task completes.
To define an execution that is triggered when DB2 starts:
Set trigger-task-name to DB2START. Optionally, provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution. Leave the other parameters null.
For example, if trigger-task-name is DB2START, begin-timestamp is 2008-01-01-00.00.0, and end-timestamp is 2009-01-01-00.00.0, the task executes each time that DB2 starts during 2008.
With this definition, the task executes at each DB2 start, so long as the previous execution has finished. If the previous execution is still in progress, the new execution is skipped. Subsequent executions continue to be skipped until the running task completes.
To define an execution that is triggered when DB2 stops:
Set trigger-task-name to DB2STOP. Optionally, provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution. Leave the other parameters null.
With this definition, the task executes at each DB2 stop, so long as the previous execution has finished. If the previous execution is still in progress, the new execution is skipped. Subsequent executions continue to be skipped until the running task completes.
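As a concrete sketch, the following CALL (issued from an application that supplies parameter markers for the output arguments; the schema, procedure, and task names are hypothetical) defines a task that runs MYSCHEMA.MYPROC every 5 minutes, starting at 6:30 AM on May 27, 2008:

CALL SYSPROC.ADMIN_TASK_ADD(
  NULL, NULL,                          -- user-ID, password: use the scheduler default user
  '2008-05-27-06.30.00.000000', NULL,  -- begin-timestamp, end-timestamp
  NULL, 5,                             -- max-invocations (no limit), interval of 5 minutes
  NULL, NULL, NULL, NULL, NULL,        -- point-in-time, trigger-task-name/-cond/-code, DB2-SSID
  'MYSCHEMA', 'MYPROC',                -- procedure-schema, procedure-name (hypothetical)
  'SELECT 1 FROM SYSIBM.SYSDUMMY1',    -- procedure-input
  NULL, NULL, NULL,                    -- JCL-library, JCL-member, job-wait: not a JCL job
  'mytask', 'Sample repetitive task',  -- task-name (input-output), description
  ?, ?)                                -- return-code, message (output parameters)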
ADMIN_TASK_ADD
The SYSPROC.ADMIN_TASK_ADD stored procedure adds a task to the task list of the administrative task scheduler.
Environment
ADMIN_TASK_ADD runs in a WLM-established stored procedure address space and uses the Resource Recovery Services attachment facility to connect to DB2.
Authorization

Anyone who can execute this DB2 stored procedure is allowed to add a task.

Syntax

The following CALL statement shows the parameters, in order, for invoking this stored procedure; NULL can be passed for any parameter that the option descriptions permit to be null:

CALL SYSPROC.ADMIN_TASK_ADD
  ( user-ID, password, begin-timestamp, end-timestamp, max-invocations,
    interval, point-in-time, trigger-task-name, trigger-task-cond,
    trigger-task-code, DB2-SSID, procedure-schema, procedure-name,
    procedure-input, JCL-library, JCL-member, job-wait, task-name,
    description, return-code, message )
Option descriptions
user-ID
  Specifies the user ID under which the task execution is performed. If this parameter is set to NULL, task execution is performed with the default authorization ID that is associated with the administrative task scheduler instead. This is an input parameter of type VARCHAR(128).

password
  Specifies the password associated with the input parameter user-ID. The value of password is passed to the stored procedure as part of the payload and is not encrypted. It is not stored in the dynamic cache when parameter markers are used.
  Recommendation: Have the application that invokes this stored procedure pass an encrypted single-use password called a passticket.
  This is an input parameter of type VARCHAR(24). This parameter must be NULL when user-ID is NULL, and non-null when user-ID is non-null.

begin-timestamp
  Specifies when a task can first begin execution. When task execution begins depends on how this and other parameters are set:
  Non-null value for begin-timestamp
    At begin-timestamp
      The task execution begins at begin-timestamp if point-in-time and trigger-task-name are NULL.
    Next point in time defined at or after begin-timestamp
      The task execution begins at the next point in time that is defined at or after begin-timestamp if point-in-time is non-null.
    When trigger-task-name completes at or after begin-timestamp
      The task execution begins the next time that trigger-task-name completes at or after begin-timestamp.
  Null value for begin-timestamp
    Immediately
      The task execution begins immediately if point-in-time and trigger-task-name are NULL.
    Next point in time defined
      The task execution begins at the next point in time that is defined if point-in-time is non-null.
    When trigger-task-name completes
      The task execution begins the next time that trigger-task-name completes.
  The value of this parameter cannot be in the past, and it cannot be later than end-timestamp. This is an input parameter of type TIMESTAMP.

end-timestamp
  Specifies when a task can last begin execution. If this parameter is set to NULL, the task can continue to execute as scheduled indefinitely. The value of this parameter cannot be in the past, and it cannot be earlier than begin-timestamp. This is an input parameter of type TIMESTAMP.

max-invocations
  Specifies the maximum number of executions allowed for a task. This value applies to all schedules: triggered by events, recurring by time interval, and recurring by points in time. If this parameter is set to NULL, there is no limit to the number of times this task can execute. For tasks that execute only one time, max-invocations must be set to 1 and interval, point-in-time, and trigger-task-name must be NULL.
  If both end-timestamp and max-invocations are specified, the first limit reached takes precedence. That is, if end-timestamp is reached, the task is not executed again, even if the number of task executions so far has not reached max-invocations; if max-invocations executions have occurred, the task is not executed again, even if end-timestamp is not reached.
  This is an input parameter of type INTEGER.

interval
  Defines a duration in minutes between two executions of a repetitive regular task. The first execution occurs at begin-timestamp. If this parameter is set to NULL, the task is not regularly executed. If this parameter contains a non-null value, the parameters point-in-time and trigger-task-name must be set to NULL. This is an input parameter of type INTEGER.
point-in-time
  Defines one or more points in time when a task is executed. If this parameter is set to NULL, the task is not scheduled at fixed points in time. If this parameter contains a non-null value, the parameters interval and trigger-task-name must be set to NULL.
  The point-in-time string uses the UNIX cron format. The format contains the following pieces of information, separated by blanks: given minute or minutes, given hour or hours, given day or days of the month, given month or months of the year, and given day or days of the week. For each part, you can specify one or several values, ranges, and so forth.
  This is an input parameter of type VARCHAR(400).

trigger-task-name
  Specifies the name of the task which, when its execution is complete, triggers the execution of this task. The task names DB2START and DB2STOP are reserved for DB2 start and stop events, respectively. Those events are handled by the scheduler that is associated with the DB2 subsystem that is starting or stopping. If this parameter is set to NULL, the execution of this task is not triggered by another task. If this parameter contains a non-null value, the parameters interval and point-in-time must be set to NULL.
  This is an input parameter of type VARCHAR(128).

trigger-task-cond
  Specifies the type of comparison to be made to the return code after the execution of task trigger-task-name. Possible values are:
  GT  Greater than
  GE  Greater than or equal to
  EQ  Equal to
  LT  Less than
  LE  Less than or equal to
  NE  Not equal to
  If this parameter is set to NULL, the task execution is triggered without considering the return code of task trigger-task-name. This parameter must be set to NULL if trigger-task-name is set to NULL or is either DB2START or DB2STOP. This is an input parameter of type CHAR(2).

trigger-task-code
  Specifies the return code from executing trigger-task-name. If the execution of this task is triggered by a stored procedure, trigger-task-code contains the SQLCODE that must be returned by the triggering stored procedure in order for this task to execute. If the execution of this task is triggered by a JCL job, trigger-task-code contains the MAXRC that must be returned by the triggering job in order for this task to execute.
  To find out the MAXRC or SQLCODE of a task after execution, invoke the user-defined function DSNADM.ADMIN_TASK_STATUS, which returns this information in the MAXRC and SQLCODE columns.
  The following restrictions apply to the value of trigger-task-code:
  v If trigger-task-cond is null, then trigger-task-code must also be null.
  v If trigger-task-cond is non-null, then trigger-task-code must also be non-null.
  If trigger-task-cond and trigger-task-code are not null, they are used to test the return code from executing trigger-task-name to determine whether to execute this task. For example, if trigger-task-cond is set to GE and trigger-task-code is set to 8, this task executes if and only if the previous execution of trigger-task-name returned a MAXRC (for a JCL job) or an SQLCODE (for a stored procedure) greater than or equal to 8. This is an input parameter of type INTEGER.

DB2-SSID
  Specifies the DB2 subsystem ID whose associated scheduler should execute the task. This parameter is useful in a data sharing environment where, for example, different DB2 members have different configurations and executing the task relies on a certain environment. However, specifying a value in DB2-SSID prevents the schedulers of other members from executing the task, so the task can be executed only while the scheduler of DB2-SSID is running.
  For a task that is triggered by a DB2 start or DB2 stop event in trigger-task-name, specifying a value in DB2-SSID lets the task be executed only when the named subsystem is starting or stopping. If no value is given, each member that starts or stops triggers a local execution of the task, provided that the executions are serialized.
  If this parameter is set to NULL, any scheduler can execute the task. This is an input parameter of type VARCHAR(4).

procedure-schema
  Specifies the schema of the DB2 stored procedure that this task executes. If this parameter is set to NULL, DB2 uses a default schema. This parameter must be set to NULL if procedure-name is set to NULL. This is an input parameter of type VARCHAR(128).

procedure-name
  Specifies the name of the DB2 stored procedure that this task executes. If this parameter is set to NULL, no stored procedure is called; in this case, a JCL job must be specified. This is an input parameter of type VARCHAR(128).

procedure-input
  Specifies the input parameters of the DB2 stored procedure that this task executes. This parameter must contain a DB2 SELECT statement that returns one row of data. The returned values are passed as parameters to the stored procedure. If this parameter is set to NULL, no parameters are passed to the stored procedure. This parameter must be set to NULL when procedure-name is set to NULL. This is an input parameter of type VARCHAR(4096).

JCL-library
  Specifies the name of the data set where the JCL job to be executed is saved.
  If this parameter is set to NULL, no JCL job is executed; in this case, a stored procedure must be specified. This is an input parameter of type VARCHAR(44).

JCL-member
  Specifies the name of the library member where the JCL job to be executed is saved. If this parameter is set to NULL, the data set specified in JCL-library must be sequential and contain the JCL job to be executed. This parameter must be set to NULL if JCL-library is set to NULL. This is an input parameter of type VARCHAR(8).

job-wait
  Specifies whether the job is executed synchronously. This parameter can be set to NULL only if JCL-library is set to NULL. Otherwise, it must be one of the following values:
  NO     Asynchronous execution
  YES    Synchronous execution
  PURGE  Synchronous execution, after which the job status in z/OS is purged
  This is an input parameter of type VARCHAR(8).

task-name
  Specifies a unique name assigned to the task. A unique task name is returned when the task is created with a NULL task-name value. This name is of the format TASK_ID_xxxx, where xxxx is 0001 for the first task named, 0002 for the second task, and so forth. The following task names are reserved and cannot be given as the value of task-name:
  v Names starting with TASK_ID_
  v DB2START
  v DB2STOP
  This is an input-output parameter of type VARCHAR(128).

description
  Specifies a description assigned to the task. This is an input parameter of type VARCHAR(128).

return-code
  Provides the return code from the stored procedure. Possible values are:
  0   The call completed successfully.
  12  The call did not complete successfully. The message output parameter contains messages describing the error.
  This is an output parameter of type INTEGER.

message
  Contains messages describing the error encountered by the stored procedure. The first messages in this area, if any, are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following Java sample shows how to invoke ADMIN_TASK_ADD:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.sql.Timestamp;
import java.sql.Types;

Connection con = DriverManager.getConnection(
    "jdbc:db2://myserver:myport/mydatabase", "myuser", "mypassword");
CallableStatement callStmt = con.prepareCall(
    "CALL SYSPROC.ADMIN_TASK_ADD(" +
    "?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
// provide the authid
callStmt.setString(1, "myexecuser");
// provide the password
callStmt.setString(2, "myexecpwd");
// set the start time to now
callStmt.setNull(3, Types.TIMESTAMP);
// no end time
callStmt.setNull(4, Types.TIMESTAMP);
// set the max invocation
callStmt.setInt(5, 1);
// this is a non-recurrent task
callStmt.setNull(6, Types.INTEGER);
callStmt.setNull(7, Types.VARCHAR);
callStmt.setNull(8, Types.VARCHAR);
callStmt.setNull(9, Types.CHAR);
callStmt.setNull(10, Types.INTEGER);
callStmt.setNull(11, Types.VARCHAR);
// provide the stored procedure schema
callStmt.setString(12, "MYSCHEMA");
// provide the name of the stored procedure to be executed
callStmt.setString(13, "MYPROC");
// provide the stored procedure input parameter
callStmt.setString(14, "SELECT 1 FROM SYSIBM.SYSDUMMY1");
// this is not a JCL job
callStmt.setNull(15, Types.VARCHAR);
callStmt.setNull(16, Types.VARCHAR);
callStmt.setNull(17, Types.VARCHAR);
// add a new task with task name mytask
callStmt.setString(18, "mytask");
callStmt.registerOutParameter(18, Types.VARCHAR);
// provide the task description
callStmt.setString(19, "MY DESCRIPTION");
// register output parameters for error management
callStmt.registerOutParameter(20, Types.INTEGER);
callStmt.registerOutParameter(21, Types.VARCHAR);
// execute the statement
callStmt.execute();
// manage the return code
if (callStmt.getInt(20) == 0) {
    System.out.print("\nSuccessfully added task " + callStmt.getString(18));
} else {
    System.out.print("\nError code and message are: "
        + callStmt.getInt(20) + "/" + callStmt.getString(21));
}
Output
The output of this stored procedure is the task name, task-name, and the following output parameters, which are described in Option descriptions on page 340:
v return-code
v message
day of month
  1-31
month
  v 1-12, where 1 is January
  v Upper-, lower-, or mixed-case three-character strings, based on the English name of the month: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, or dec.
day of week
  v 0-7, where 0 or 7 is Sunday
  v Upper-, lower-, or mixed-case three-character strings, based on the English name of the day: mon, tue, wed, thu, fri, sat, or sun.
Unrestricted range
A field can contain an asterisk (*), which represents all possible values in the field. The day of a command's execution can be specified by two fields: day of month and day of week. If both fields are restricted by the use of a value other than the asterisk, the command runs when either field matches the current time.

Example: The value 30 4 1,15 * 5 causes a command to run at 4:30 AM on the 1st and 15th of each month, plus every Friday.
Step values
Step values can be used in conjunction with ranges. The syntax range/step defines the range and an execution interval. If you specify first-last/step, execution takes place at first, then at all successive values that are distant from first by step, until last. Example: To specify command execution every other hour, use 0-23/2. This expression is equivalent to the value 0,2,4,6,8,10,12,14,16,18,20,22. If you specify */step, execution takes place at every interval of step through the unrestricted range. Example: As an alternative to 0-23/2 for execution every other hour, use */2.
The table function ADMIN_TASK_STATUS contacts the administrative task scheduler to update the DB2 task list in table SYSIBM.ADMIN_TASKS, if necessary, and then reads the tasks from this task list directly.

To determine the last execution status of a scheduled task:
1. Execute the table function ADMIN_TASK_STATUS to generate the status table. See the information about ADMIN_TASK_LIST in DB2 SQL Reference.
2. Select the rows in the table that correspond to the task name.

Tip: You can relate the task execution status to the task definition by joining the output tables from ADMIN_TASK_LIST and ADMIN_TASK_STATUS on the TASK_NAME column.

The table created by ADMIN_TASK_STATUS indicates the last execution of scheduled tasks. Each row is indexed by the task name and contains the last execution status of the corresponding task.
v If task execution has never been attempted, because the execution criteria have not been met, the STATUS column contains a null value.
v If the scheduler was not able to start executing the task, the STATUS column contains NOTRUN. The START_TIMESTAMP and END_TIMESTAMP columns are the same, and the MSG column indicates why the task execution could not be started. All JCL job execution status columns are NULL, but the DB2 execution status columns contain values if the reason for the failure is related to DB2 (for example, a DB2 connection could not be established).
v If the scheduler started executing the task but the task has not yet completed, the STATUS column contains RUNNING. All other execution status columns contain null values.
v If the task execution has completed, the STATUS column contains COMPLETED. The START_TIMESTAMP and END_TIMESTAMP columns contain the actual start and end times. The MSG column might contain informational or error messages. The DB2 and JCL columns are filled with values when they apply.
v If the scheduler was stopped during the execution of a task, the status remains RUNNING until the scheduler is restarted. When the scheduler starts again, the status is changed to UNKNOWN, because the scheduler cannot determine whether the task completed.
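For example, a sketch of such a join follows; the columns selected are those described in this section, and the task name mytask is hypothetical:

SELECT T.TASK_NAME, T.MAX_INVOCATIONS, S.STATUS,
       S.START_TIMESTAMP, S.END_TIMESTAMP, S.MSG
  FROM TABLE (DSNADM.ADMIN_TASK_LIST()) T
  LEFT OUTER JOIN TABLE (DSNADM.ADMIN_TASK_STATUS()) S
    ON T.TASK_NAME = S.TASK_NAME
 WHERE T.TASK_NAME = 'mytask'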
To remove a scheduled task: 1. Optional: Issue the following SQL statement to identify tasks that will never execute again:
SELECT T.TASK_NAME
  FROM TABLE (DSNADM.ADMIN_TASK_LIST()) T,
       TABLE (DSNADM.ADMIN_TASK_STATUS()) S
 WHERE T.TASK_NAME = S.TASK_NAME
   AND (S.NUM_INVOCATIONS = T.MAX_INVOCATIONS
        OR T.END_TIMESTAMP < CURRENT TIMESTAMP)
   AND STATUS <> 'RUNNING'
2. Confirm the name of the task that you want to remove.
3. Call the stored procedure ADMIN_TASK_REMOVE, providing the task name as a parameter.

The scheduled task is removed from the task list, and its last execution status is deleted. Listing the scheduled tasks and execution statuses no longer returns a row for this task. The task name is freed for future reuse.
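From an application, such a call might look like the following sketch, where mytask is a hypothetical task name and the last two arguments are parameter markers for the output parameters:

CALL SYSPROC.ADMIN_TASK_REMOVE('mytask', ?, ?)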
ADMIN_TASK_REMOVE
The SYSPROC.ADMIN_TASK_REMOVE stored procedure removes a task from the task list of the administrative scheduler. If the task is currently running, it continues to execute until completion, and the task is not removed from the administrative scheduler task list. If other tasks depend on the execution of the task to be removed, this task is not removed from the administrative scheduler task list.
Environment
See the recommended environment in the installation job DSNTIJRA.
Authorization
Users with SYSOPR, SYSCTRL, or SYSADM authority can remove any task. Other users who have EXECUTE authority on this stored procedure can remove tasks that they added. Attempting to remove a task that was added by a different user returns an error in the output.
Syntax
The SQL CALL statement for invoking this stored procedure has the following form:

CALL SYSPROC.ADMIN_TASK_REMOVE ( task-name, return-code, message )
Option descriptions
task-name Specifies the task name of the task to be removed from the administrative scheduler task list. This is an input parameter of type VARCHAR(128) and cannot be null.
return-code
  Provides the return code from the stored procedure. Possible values are:
  0   The call completed successfully.
  12  The call did not complete successfully. The message output parameter contains messages describing the error.
  This is an output parameter of type INTEGER.

message
  Contains messages describing the error encountered by the stored procedure. The first messages in this area, if any, are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following Java sample shows how to invoke ADMIN_TASK_REMOVE:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.sql.Timestamp;
import java.sql.Types;

Connection con = DriverManager.getConnection(
    "jdbc:db2://myserver:myport/mydatabase", "myuser", "mypassword");
CallableStatement callStmt =
    con.prepareCall("CALL SYSPROC.ADMIN_TASK_REMOVE(?, ?, ?)");
// provide the id of the task to be removed
callStmt.setString(1, "mytask");
// register output parameters for error management
callStmt.registerOutParameter(2, Types.INTEGER);
callStmt.registerOutParameter(3, Types.VARCHAR);
// execute the statement
callStmt.execute();
// manage the return code
if (callStmt.getInt(2) == 0) {
    System.out.print("\nSuccessfully removed task " + callStmt.getString(1));
} else {
    System.out.print("\nError code and message are: "
        + callStmt.getInt(2) + "/" + callStmt.getString(3));
}
Output
The output of this stored procedure includes the following output parameters, which are described in Option descriptions on page 349:
v return-code
v message
v To start a scheduler named admtproc from the operator's console with tracing enabled, issue the MVS system command:
start admtproc,trace=on
v To start a scheduler named admtproc from the operator's console with tracing disabled, issue the MVS system command:
start admtproc,trace=off
When the administrative scheduler starts, message DSNA671I displays on the console.
After a stop request, the scheduler stops accepting requests and does not start new task executions. It waits until the execution of all currently running tasks completes, and then terminates. Message DSNA670I displays on the console.
v Alternate method: If the MODIFY command does not shut down the scheduler, issue the MVS system command:
stop admtproc
Any task that was invoked by the scheduler and is currently executing is interrupted. Message DSNA670I displays on the console. Interrupted tasks keep their status as RUNNING and are not rescheduled until the scheduler is started again. At startup, the status of the interrupted tasks is set to UNKNOWN and message DSNA690I is written into the status. Look for UNKNOWN in the results of the ADMIN_TASK_STATUS user-defined function. If UNKNOWN is present in the STATUS column of the output table, then you should check to see if the task has completed. If an interrupted task has not completed, you should terminate the work.
common task list. The task list is shared among all schedulers that are associated with the data sharing group members. The next time a scheduler accesses the list, it detects the new task. All schedulers of the data sharing group access this task list once per minute to check for new tasks. The scheduler that adds a task does not have to check the list and can execute the task immediately; any other scheduler can execute a task only after finding it in the updated task list.

Any scheduler can remove a task without waiting. To remove a task, the stored procedure ADMIN_TASK_REMOVE is called by a DB2 member. The scheduler associated with this DB2 member removes the task from the common task list. The next time a scheduler checks the list, within one minute after the task has been removed, it detects that the task has been deleted.

No scheduler can execute a task without first locking it in the task list. This locking prevents deleted tasks from being executed: the task is no longer present in the list, so it cannot be locked; because it cannot be locked, it cannot be executed. The locking also prevents double executions: a task that one scheduler has in progress is already locked, so no other scheduler can lock the task.
Substitute the name of your scheduler for admtproc. v To stop a trace for a scheduler named admtproc, issue the following MVS system command:
modify admtproc,appl=trace=off
v To configure the system so that tracing starts automatically when the scheduler starts, modify the TRACE parameter in the JCL procedure that starts the scheduler. This procedure has the name that was assigned when the scheduler was installed; it was copied into one of your PROCLIB libraries during installation. Specify TRACE=ON. To disable tracing, change the parameter to TRACE=OFF.
One copy of the task list is a shared VSAM data set, by default DSNC910.TASKLIST, where DSNC910 is the DB2 catalog prefix. The other copy is stored in the table ADMIN_TASKS in the SYSIBM schema. Include these redundant copies as part of your backup and recovery plan.

Tip: If DB2 is offline, message DSNA679I displays on the console. As soon as DB2 starts, the administrative task scheduler performs an autonomic recovery of the ADMIN_TASKS table using the contents of the VSAM task list. When the recovery is complete, message DSNA695I displays on the console to indicate that both task lists are again available. (By default, message DSNA679I displays on the console once per minute while DB2 is offline. You can change the frequency of this message by modifying the ERRFREQ parameter, either as part of the started task or with a console command.)

Use the following procedures to recover the task list if it is lost or damaged:
v To recover if the ADMIN_TASKS task list is corrupted:
  1. Create a new and operable version of the table.
  2. Grant SELECT, UPDATE, INSERT, and DELETE privileges on the table to the user of the administrative scheduler started task.
  As soon as the ADMIN_TASKS table is accessible again, the scheduler performs an autonomic recovery of the table using the content of the VSAM task list.
v To recover if the VSAM file is corrupted, create an empty version of the VSAM task list. As soon as the VSAM task list is accessible again, the scheduler performs an autonomic recovery using the content of the ADMIN_TASKS task list.
v If both task lists (the VSAM data set and the ADMIN_TASKS table) are corrupted and inaccessible, the scheduler is no longer operable. Messages DSNA681I and DSNA683I display on the console, and the scheduler terminates. To recover from this situation:
  1. Create an empty version of the VSAM task list.
  2. Recover the table space DSNADMDB.DSNADMTS, where the ADMIN_TASKS table is located.
  3. Restart the administrative scheduler.
  As soon as both task lists are accessible again, the scheduler performs an autonomic recovery of the VSAM task list using the content of the recovered ADMIN_TASKS table.
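For instance, step 2 of the ADMIN_TASKS recovery might be issued as the following sketch, where ADMTUID stands for the user ID of your scheduler started task:

GRANT SELECT, INSERT, UPDATE, DELETE
  ON TABLE SYSIBM.ADMIN_TASKS
  TO ADMTUID;   -- ADMTUID is a hypothetical started-task user ID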
Symptoms
A task was scheduled successfully, but the action did not complete or did not complete correctly.
Important: The task status is overwritten as soon as the next execution of the task starts.
Symptoms
An SQL code is returned. When SQLCODE is -443, the error message cannot be read directly, because only a few characters are available.
Any other value
  Any other SQLCODE indicates that the error is not in the function or in the scheduler. Troubleshoot the task itself.
Symptoms
An SQL code is returned.
Errors can originate with the stored procedure itself or with the scheduler, in which case the error information is transmitted back to the stored procedure for output. Most error messages are clearly in one category or the other. For example, DSNA650I csect-name CANNOT CONNECT TO ADMIN SCHEDULER proc-name indicates an error from the stored procedure, whereas DSNA652I csect-name THE USER user-name IS NOT ALLOWED TO ACCESS TASK task-name belongs to the administrative task scheduler, which checks the parameters and authorization information that are passed to it.

Understanding the source of the error is usually enough to correct the cause of the problem. Most problems result from incorrect usage of the stored procedure or from an invalid configuration. Correct the underlying problem and resubmit the call to the stored procedure to add or remove the task.
Figure 27. Architecture of the administrative task scheduler. (The figure shows the scheduler started task, DB2AADMT, whose address space name is the same as the started task name, running alongside the DB2 started task, DB2AMSTR, for SSID DB2A; the two communicate through SQL, and the scheduler keeps its task lists consistent.)

Related reference:
ADMIN_TASK_ADD on page 339
ADMIN_TASK_REMOVE on page 349
(Figure: scheduler startup logic. At start, if a scheduler with the name specified in ADMTPROC is already running, the start attempt stops; otherwise the scheduler starts.)
Figure 29. Administrative task schedulers in a data sharing group. (The figure shows two members, DB2AMSTR and DB2BMSTR, each with an associated scheduler started task, DB2AADMT and DB2BADMT. Each started task specifies its DB2 association (DB2SSID = DB2A or DB2B), its security default (DFLTUID), and its external task list (ADMTDD1 = prefix.TASKLIST). The schedulers keep the shared DB2 task list, SYSIBM.ADMIN_TASKS, consistent through the coupling facility.)

Tasks are not localized to a scheduler. They can be added, removed, or executed in any of the schedulers in the data sharing group with the same result. However, you can force a task to execute on a given scheduler by specifying the associated DB2 subsystem ID in the DB2SSID parameter when you schedule the task. Tasks that have no affinity to a given DB2 subsystem are executed among all schedulers; their distribution cannot be predicted.
(Figure: task execution security. For a task scheduled with USERID = NULL and PASSWORD = NULL, the execution thread runs under the default user, DFLTUID. For a task scheduled with USERID = XXX, the scheduler checks the credentials of granted users, obtains a PassTicket from RACF, and logs in as XXX before executing the JCL job through the JES reader or calling the stored procedure through DB2AMSTR.)
The users or groups of users who have access to the SQL interface of the scheduler are allowed to add, remove, or list scheduled tasks. To specify who is authorized to add, remove, or list a scheduled task, use the GRANT statement in DB2. All interface users are granted EXECUTE access on the scheduler stored procedures and user-defined table functions; they are also granted READ access on the DB2 table SYSIBM.ADMIN_TASKS. Each scheduled task in the scheduler is associated with an execution user who executes the task. When an execution user is not explicitly given, the default execution user DFLTUID that is defined in the scheduler started task is used. The scheduler execution threads switch to the security context of this user before executing the task.
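For example, grants along the following lines could authorize a hypothetical ID ADMUSER to use the interface; adapt the list to the stored procedures and functions installed at your site:

GRANT EXECUTE ON PROCEDURE SYSPROC.ADMIN_TASK_ADD TO ADMUSER;
GRANT EXECUTE ON PROCEDURE SYSPROC.ADMIN_TASK_REMOVE TO ADMUSER;
GRANT EXECUTE ON FUNCTION DSNADM.ADMIN_TASK_LIST TO ADMUSER;
GRANT EXECUTE ON FUNCTION DSNADM.ADMIN_TASK_STATUS TO ADMUSER;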
A similar security concept is implemented for the SYSIBM.ADMIN_TASKS table, which stores a redundant copy of the scheduled tasks. Only the user of the scheduler started task has SELECT, INSERT, DELETE, or UPDATE authority on this resource. Users with EXECUTE rights on the user-defined functions ADMIN_TASK_LIST and ADMIN_TASK_STATUS have only SELECT authority on the table SYSIBM.ADMIN_TASKS.
(Figure: scheduler interface and execution threads. MAXTHD is a parameter of the started task that controls the maximum number of execution threads; its default value is 99. Through the SQL interface, applications call the Add Task and Remove Task stored procedures and issue SQL SELECT against the user-defined functions ADMIN_TASK_LIST() and ADMIN_TASK_STATUS(). Execution threads run JCL jobs through the JES reader and call stored procedures through DB2AMSTR.)
The minimum permitted value for MAXTHD is 1, but the value should not be lower than the maximum number of tasks that you expect to execute simultaneously. If more tasks are to be executed simultaneously than sub-threads are available, some tasks do not start executing immediately. The scheduler tries to find an available sub-thread within one minute of when the task is scheduled for execution; as a result, multiple short tasks might be serialized in the same sub-thread, provided that their total execution time does not exceed this minute. (The parameters of the scheduler started task are not positional; place them in a single string, separated by blanks.) If a task execution still cannot be started one minute after it should have been, the execution is skipped and the last execution status of the task is set to the NOTRUN state. The following message displays on the operator's console:
DSNA678I csect-name THE NUMBER OF TASKS TO BE CONCURRENTLY EXECUTED BY THE ADMIN SCHEDULER proc-name EXCEEDS max-threads
If this happens, increase the MAXTHD parameter value and restart the scheduler.
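Because the started task parameters are passed in a single blank-separated string, raising MAXTHD might look like the following fragment of the scheduler procedure; the procedure name, program name, and values shown here are illustrative only:

//ADMTPROC PROC
//* Scheduler parameters are not positional; they are passed
//* as one blank-separated string. Only MAXTHD is raised here.
//ADMTSTRT EXEC PGM=SCHEDPGM,
//         PARM='DB2SSID=DB2A MAXTHD=120 TRACE=OFF'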
One execution sub-thread of the scheduler is used to execute a task that describes a stored procedure call. The execution sub-thread first connects to the DB2 member indicated in the DB2SSID parameter of the scheduler started task. If the connection cannot be established, the execution is skipped, and the last execution status is set to the NOTRUN state. After establishing a connection, the parameter values of the stored procedure are retrieved from DB2 by using the select statement that is defined in the task parameter procedure-input. If an error occurs retrieving those parameter values, the last execution status of the task is set to the error that is returned by DB2, and the stored procedure is not called. Otherwise, the stored procedure is called with the retrieved parameter values. The stored procedure name is concatenated from the task parameters procedure-schema and procedure-name. The SQL CALL command is synchronous, and the execution thread is blocked until the stored procedure finishes execution. Then, the last execution status is set to the values that are returned by DB2. Finally, a COMMIT statement is issued, and the connection to DB2 is closed. The execution status of DB2 stored procedures always contains null values in the JOB_ID, MAXRC, COMPLETION_TYPE, SYSTEM_ABENDCD and USER_ABENDCD fields. In the case of a DB2 error, the fields SQLCODE, SQLSTATE, SQLERRMC, and SQLERRP will contain the values that DB2 returned to the stored procedure.
v If job_wait is set to NO, the sub-thread does not wait until the job completes execution and returns immediately after the job submission. The task execution status is set to the submission status; the result of the job execution itself is not available.
v If job_wait is set to YES, the sub-thread simulates a synchronous execution of the JCL job. It waits until the job execution completes, gets the job status from the JES reader, and fills in the last execution status of the task.
v If job_wait is set to PURGE, the sub-thread purges the job output from the JES reader after execution. Execution is otherwise the same as for job_wait=YES.
JCL job execution status always contains null values in the SQLCODE, SQLSTATE, SQLERRMC, and SQLERRP fields. If the job can be submitted successfully to the JES reader, the JOB_ID field contains the ID of the job in the JES reader. If the job is executed asynchronously, the MAXRC, COMPLETION_TYPE, SYSTEM_ABENDCD, and USER_ABENDCD fields are also null values, because the sub-thread does not wait for job completion before writing the status. If the job was executed synchronously, those fields contain the values returned by the JES reader.
DISPLAY DATABASE Displays status, user, and locking information for a database. For its use, see Monitoring databases on page 369. STOP DATABASE Makes a database, or individual partitions, unavailable after existing users have quiesced. DB2 also closes and deallocates the data sets. For its use, see Stopping databases on page 375. The START and STOP DATABASE commands can be used with the SPACENAM and PART options to control table spaces, index spaces, or partitions. For example, the following command starts two partitions of table space DSN8S81E in the database DSN8D81A:
-START DATABASE (DSN8D81A) SPACENAM (DSN8S81E) PART (1,2)
Starting databases
The command START DATABASE (*) starts all databases for which you have the STARTDB privilege. The privilege can be explicitly granted, or can belong implicitly to a level of authority (DBMAINT and above, as shown in Figure 9 on page 139). The command starts the database, but not necessarily all the objects it contains. Any table spaces or index spaces in a restricted mode remain in a restricted mode and are not started. START DATABASE (*) does not start the DB2 directory (DSNDB01), the DB2 catalog (DSNDB06), or the DB2 work file database (called DSNDB07, except in a data sharing environment). These databases have to be started explicitly using the SPACENAM option. Also, START DATABASE (*) does not start table spaces or index spaces that have been explicitly stopped by the STOP DATABASE command. The PART keyword of the command START DATABASE can be used to start individual partitions of a table space. It can also be used to start individual partitions of a partitioning index or logical partitions of a nonpartitioning index. The started or stopped state of other partitions is unchanged.
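For example, the following commands explicitly start the DB2 directory and catalog; the SPACENAM(*) option starts all of the table spaces and index spaces that they contain:

-START DATABASE (DSNDB01) SPACENAM(*)
-START DATABASE (DSNDB06) SPACENAM(*)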
Databases, table spaces, and index spaces are started with RW status when they are created. You can make any of them unavailable by using the command STOP DATABASE. DB2 can also make them unavailable when it detects an error. If an object was explicitly stopped, you can make it available again by using the command START DATABASE. For example, the following command starts all table spaces and index spaces in database DSN8D81A for read-only access:
-START DATABASE (DSN8D81A) SPACENAM(*) ACCESS(RO)
The command releases most restrictions for the named objects. These objects must be explicitly named in a list following the SPACENAM option. DB2 cannot process the START DATABASE ACCESS(FORCE) request if postponed abort or indoubt URs exist. The RESTP (restart-pending) status and the AREST (advisory restart-pending) status remain in effect until either automatic backout processing completes or you perform one of the following actions:
v Issue the RECOVER POSTPONED command to complete backout activity.
v Issue the RECOVER POSTPONED CANCEL command to cancel all of the postponed abort units of recovery.
v Conditionally restart or cold start DB2.
If a utility from a previous release of DB2 placed an object in one of the following restrictive states, DB2 cannot apply the START DATABASE ACCESS(FORCE) command to that object:
v UTRO (utility restrictive state, read-only access allowed)
v UTRW (utility restrictive state, read and write access allowed)
v UTUT (utility restrictive state, utility exclusive control)
To reset these restrictive states, you must start the release of DB2 that originally ran the utility and terminate the utility from that release. For more information about resolving postponed units of recovery, see Resolving postponed units of recovery on page 456.
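For example, a command of the following form names the table space explicitly, as ACCESS(FORCE) requires:

-START DATABASE (DSN8D81A) SPACENAM (DSN8S81E) ACCESS(FORCE)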
Monitoring databases
You can use the command DISPLAY DATABASE to obtain information about the status of databases and the table spaces and index spaces within each database. If applicable, the output also includes information about physical I/O errors for those objects. Use DISPLAY DATABASE as follows:
-DISPLAY DATABASE (dbname)
DSNT360I - ****************************************************
DSNT361I - * DISPLAY DATABASE SUMMARY
           *   report_type_list
DSNT360I - ****************************************************
DSNT362I -   DATABASE = dbname STATUS = xx
             DBD LENGTH = yyyy
11:44:32 DSNT397I
NAME     TYPE PART  STATUS            PHYERRLO PHYERRHI CATALOG  PIECE
-------- ---- ----- ----------------- -------- -------- -------- -----
D1       TS         RW,UTRO
D2       TS         RW
D3       TS         STOP
D4       IX         RO
D5       IX         STOP
D6       IX         UT
LOB1     LS         RW
******* DISPLAY OF DATABASE dbname ENDED **********************
11:45:15 DSN9022I - DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
In the preceding messages:
v Report_type_list indicates which options were included when the DISPLAY DATABASE command was issued. See Chapter 2 of DB2 Command Reference for detailed descriptions of options.
v dbname is an 8-byte character string indicating the database name. The pattern-matching character, *, is allowed at the beginning, middle, and end of dbname.
v STATUS is a combination of one or more status codes delimited by a comma. The maximum length of the string is 17 characters. If the status exceeds 17 characters, those characters are wrapped onto the next status line. Anything that exceeds 17 characters on the second status line is truncated. See Chapter 2 of DB2 Command Reference for a list of status codes and their descriptions.
You can use the pattern-matching character, *, in the commands DISPLAY DATABASE, START DATABASE, and STOP DATABASE. The pattern-matching character can be used in the beginning, middle, and end of the database and table space names.
Using ONLY: The keyword ONLY can be added to the command DISPLAY DATABASE. When ONLY is specified with the DATABASE keyword but not the SPACENAM keyword, all other keywords except RESTRICT, LIMIT, and AFTER are ignored. Use DISPLAY DATABASE ONLY as follows:
-DISPLAY DATABASE(*S*DB*) ONLY
v DATABASE (*S*DB*) displays databases that begin with any letter, have the letter S followed by any letters, then the letters DB followed by any letters.
v ONLY restricts the display to database names that fit the criteria.
See Chapter 2 of DB2 Command Reference for detailed descriptions of these and other options on the DISPLAY DATABASE command.
Using RESTRICT: You can use the RESTRICT option of the DISPLAY DATABASE command to limit the display to objects that are currently set to a restrictive status. You can additionally specify one or more keywords to limit the display further to include only objects that are set to a particular restrictive status. For information about resetting a restrictive status, see Appendix C of DB2 Utility Guide and Reference.
Using ADVISORY: You can use the ADVISORY option on the DISPLAY DATABASE command to limit the display to table spaces or indexes that require some corrective action. Use the DISPLAY DATABASE ADVISORY command without the RESTRICT option to determine when:
v An index space is in the informational copy pending (ICOPY) advisory status
v A base table space is in the auxiliary warning (AUXW) advisory status
For information about resetting an advisory status, see Appendix C of DB2 Utility Guide and Reference.
Using OVERVIEW: To display all objects within a database, you can use the OVERVIEW option of the DISPLAY DATABASE command. This option shows each object in the database on one line, does not break down an object by partition, and does not show exception states. The OVERVIEW option displays only object types and the number of data set partitions in each object. OVERVIEW is mutually exclusive with all keywords other than SPACENAM, LIMIT, and AFTER. Use DISPLAY DATABASE OVERVIEW as follows:
-DISPLAY DATABASE(DB486A) SPACENAM(*) OVERVIEW
The display indicates that there are five objects in database DB486A, two table spaces and three indexes. Table space TS486A has four parts, and table space TS486C is nonpartitioned. Index IX486A is a nonpartitioning index for table space TS486A, and index IX486B is a partitioned index with four parts for table space TS486A. Index IX486C is a nonpartitioned index for table space TS486C.
Which programs are holding locks on the objects? To determine which application programs are currently holding locks on the database or space, issue a command like the following, which names table space TSPART in database DB01:
-DISPLAY DATABASE(DB01) SPACENAM(TSPART) LOCKS
For an explanation of the field LOCKINFO, see message DSNT396I in Part 2 of DB2 Messages.
Use the LOCKS ONLY keywords on the DISPLAY DATABASE command to display only spaces that have locks. The LOCKS keyword can be substituted with USE, CLAIMERS, LPL, or WEPR to display only databases that fit the criteria. Use DISPLAY DATABASE as follows:
-DISPLAY DATABASE (DSNDB06) SPACENAM(*) LOCKS ONLY
See Chapter 2 of DB2 Command Reference for detailed descriptions of these and other options of the DISPLAY DATABASE command.
DSNT360I = ***********************************************************
DSNT361I = * DISPLAY DATABASE SUMMARY
           *   GLOBAL LPL
DSNT360I = ***********************************************************
DSNT362I =   DATABASE = DBFW8401 STATUS = RW,LPL
             DBD LENGTH = 8066
DSNT397I =
NAME     TYPE PART  STATUS            LPL PAGES
-------- ---- ----- ----------------- ------------------
TPFW8401 TS   0001  RW,LPL            000000-000004
ICFW8401 IX   L0001 RW,LPL            000000,000003
IXFW8402 IX         RW,LPL            000000,000003-000005
                                 ---- 000007,000008-00000B
                                 ---- 000080-000090
******* DISPLAY OF DATABASE DBFW8401 ENDED **********************
DSN9022I = DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
The display indicates that the pages listed in the LPL PAGES column are unavailable for access. For the syntax and description of DISPLAY DATABASE, see Chapter 2 of DB2 Command Reference.
Removing pages from the LPL: The DB2 subsystem always attempts automated recovery of LPL pages when the pages are added to the LPL. Manual recovery can also be performed. When an object has pages on the LPL, there are several ways to manually remove those pages and make them available for access while DB2 is running:
v Start the object with access (RW) or (RO). The command is valid even if the table space is already started. When you issue the command START DATABASE, you see message DSNI006I, indicating that LPL recovery has begun. Message DSNI022I is issued periodically to give you the progress of the recovery. When recovery is complete, you see DSNI021I. When you issue the command START DATABASE for a LOB table space that is defined as LOG NO, and DB2 detects that log records required for LPL recovery are missing due to the LOG NO attribute, the LOB table space is placed in AUXW status and the LOB is invalidated.
v Run the RECOVER or REBUILD INDEX utility on the object. The only exception is a logical partition of a nonpartitioned index that has both LPL and RECP status. If you want to recover the logical partition by using REBUILD INDEX with the PART keyword, you must first use the command START DATABASE to clear the LPL pages.
v Run the LOAD utility with the REPLACE option on the object.
v Issue an SQL DROP statement for the object.
Only the following utilities can be run on an object with pages in the LPL:
  LOAD with the REPLACE option
  MERGECOPY
  REBUILD INDEX
  RECOVER, except:
    RECOVER...PAGE
    RECOVER...ERROR RANGE
  REPAIR with the SET statement
  REPORT
Displaying a write error page range: Use DISPLAY DATABASE to display the range of error pages. For example:
-DISPLAY DATABASE (DBPARTS) SPACENAM (TSPART01) WEPR
In the previous messages:
v PHYERRLO and PHYERRHI identify the range of pages that were being read when the I/O errors occurred. PHYERRLO is an 8-digit hexadecimal number representing the lowest page found in error, while PHYERRHI represents the highest page found in error.
v PIECE, a 3-digit integer, is a unique identifier for the data set supporting the page set that contains physical I/O errors.
For additional information about this list, see the description of message DSNT392I in Part 2 of DB2 Messages.
Stopping databases
Databases, table spaces, and index spaces can be made unavailable with the STOP DATABASE command. You can also use STOP DATABASE with the PART option to stop the following types of partitions:
v Physical partitions within a table space
v Physical partitions within an index space
v Logical partitions within a nonpartitioning index associated with a partitioned table space
This prevents access to individual partitions within a table or index space while allowing access to the others. When you specify the PART option with STOP DATABASE on physically partitioned spaces, the data sets supporting the given physical partitions are closed, and the remaining partitions are not affected. However, STOP DATABASE with the PART option does not close data sets associated with logically partitioned spaces. To close these data sets, you must execute STOP DATABASE without the PART option.
The AT(COMMIT) option of STOP DATABASE stops objects quickly. The AT(COMMIT) option interrupts threads that are bound with RELEASE(DEALLOCATE) and is useful when thread reuse is high. If you specify AT(COMMIT), DB2 takes over access to an object when all jobs release their claims on it and when all utilities release their drain locks on it. If you do not specify AT(COMMIT), the objects are not stopped until all existing applications have deallocated. New transactions continue to be scheduled, but they receive SQLCODE -904 SQLSTATE '57011' (resource unavailable) on the first SQL statement that references the object or when the plan is prepared for execution.
STOP DATABASE waits for a lock on an object that it is attempting to stop. If the
wait time limit for locks (15 timeouts) is exceeded, the STOP DATABASE command terminates abnormally and leaves the object in stop pending (STOPP) status.
Database DSNDB01 and table spaces DSNDB01.DBD01 and DSNDB01.SYSLGRNX must be started before you stop user-defined databases or the work file database. A DSNI003I message tells you that the command was unable to stop an object. You must resolve the problem indicated by this message and run the job again. If an object is in STOPP status, you must first issue the START DATABASE command to remove the STOPP status and then issue the STOP DATABASE command.
DB2 subsystem databases (catalog, directory, work file) can also be stopped. After the directory is stopped, installation SYSADM authority is required to restart it.
The following examples illustrate ways to use the command:
-STOP DATABASE (*)
   Stops all databases for which you have STOPDB authorization, except the DB2 directory (DSNDB01), the DB2 catalog (DSNDB06), and the DB2 work file database (called DSNDB07, except in a data sharing environment), all of which must be stopped explicitly.
-STOP DATABASE (dbname)
   Stops a database and closes all of the data sets of the table spaces and index spaces in the database.
-STOP DATABASE (dbname, ...)
   Stops the named databases and closes all of the table spaces and index spaces in the databases. If DSNDB01 is named in the database list, it should be last on the list because stopping the other databases requires that DSNDB01 be available.
-STOP DATABASE (dbname) SPACENAM (*)
   Stops and closes all of the data sets of the table spaces and index spaces in the database. The status of the named database does not change.
-STOP DATABASE (dbname) SPACENAM (space-name, ...)
   Stops and closes the data sets of the named table space or index space. The status of the named database does not change.
-STOP DATABASE (dbname) SPACENAM (space-name, ...) PART (integer)
   Stops and closes the specified partition of the named table space or index space. The status of the named database does not change. If the named index space is nonpartitioned, DB2 cannot close the specified logical partition.
The data sets containing a table space are closed and deallocated by the preceding commands.
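For example, if table space DSN8S81E is left in STOPP status, commands of the following form first remove the STOPP status and then stop the object cleanly:

-START DATABASE (DSN8D81A) SPACENAM (DSN8S81E)
-STOP DATABASE (DSN8D81A) SPACENAM (DSN8S81E)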
See Chapter 2 of DB2 Command Reference for descriptions of the options you can use with this command and the information you find in the summary and detail reports.
DSNX9DIS DISPLAY FUNCTION SPECIFIC REPORT COMPLETE DSN9022I - DSNX9COM '-DISPLAY FUNC' NORMAL COMPLETION
DSNX974I STOP FUNCTION SPECIFIC SUCCESSFUL FOR PAYROLL.USERFN1 DSNX974I STOP FUNCTION SPECIFIC SUCCESSFUL FOR PAYROLL.USERFN3
To change the status of an object, use the ACCESS option of the START DATABASE command to start the object with a new status. For example:
-START DATABASE (DSN8D81A) ACCESS(RO)
For more information about concurrency and compatibility of individual online utilities, see Part 2 of DB2 Utility Guide and Reference. For a general discussion about controlling concurrency for utilities, see Part 5 (Volume 2) of DB2 Administration Guide.
Stand-alone utilities
The following stand-alone utilities can be run only by means of JCL:
  DSN1CHKR
  DSN1COPY
  DSN1COMP
  DSN1PRNT
  DSN1SDMP
  DSN1LOGP
  DSNJLOGF
  DSNJU003 (change log inventory)
  DSNJU004 (print log map)
Most of the stand-alone utilities can be used while DB2 is running. However, for consistency of output, the table spaces and index spaces must be stopped first because these utilities do not have access to the DB2 buffer pools. In some cases, DB2 must be running or stopped before you invoke the utility. See Part 3 of DB2 Utility Guide and Reference for detailed environmental information about these utilities.
Stand-alone utility job streams require that you code specific data set names in the JCL. To determine the fifth qualifier in the data set name, query the DB2 catalog tables SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART to determine the IPREFIX value that corresponds to the required data set (a sample query follows this discussion).
The change log inventory utility (DSNJU003) enables you to change the contents of the bootstrap data set (BSDS). This utility cannot be run while DB2 is running because inconsistencies could result. Use STOP DB2 MODE(QUIESCE) to stop the DB2 subsystem, run the utility, and then restart DB2 with the START DB2 command.
The print log map utility (DSNJU004) enables you to print the contents of the bootstrap data set. The utility can be run when DB2 is active or inactive; however, when it is run with DB2 active, the user's JCL and the DB2 started task must both specify DISP=SHR for the BSDS data sets.
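For example, a query of the following general form retrieves the IPREFIX value for one partition of a table space (the database, table space, and partition values are illustrative):

SELECT IPREFIX
  FROM SYSIBM.SYSTABLEPART
  WHERE DBNAME = 'DSN8D81A'
    AND TSNAME = 'DSN8S81E'
    AND PARTITION = 1;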
MODIFY irlmproc,ABEND,NODUMP
   Abends the IRLM but does not generate a dump.
MODIFY irlmproc,DIAG
   Initiates diagnostic dumps for IRLM subsystems in a data sharing group when there is a delay.
MODIFY irlmproc,SET
   Dynamically sets the maximum amount of PVT storage or the number of trace buffers used for this IRLM.
MODIFY irlmproc,STATUS
   Displays the status for the subsystems on this IRLM.
START irlmproc
   Starts the IRLM.
STOP irlmproc
   Stops the IRLM normally.
TRACE CT,OFF,COMP=irlmnm
   Stops IRLM tracing.
TRACE CT,ON,COMP=irlmnm
   Starts IRLM tracing for all subtypes (DBM,SLM,XIT,XCF).
TRACE CT,ON,COMP=irlmnm,SUB=(subname)
   Starts IRLM tracing for a single subtype.
Consider starting the IRLM manually if you are having problems starting DB2 for either of these reasons:
v An IDENTIFY or CONNECT to a data sharing group fails.
v DB2 experiences a failure that involves the IRLM.
When you start the IRLM manually, you can generate a dump to collect diagnostic information because IRLM does not stop automatically.
MODIFY irlmproc,SET,DEADLOCK=nnnn
   Sets the time for the local deadlock detection cycle.
MODIFY irlmproc,SET,LTE=nnnn
   Sets the number of LOCK HASH entries that this IRLM can use on the next connect to the XCF LOCK structure. Use only for data sharing.
MODIFY irlmproc,SET,TIMEOUT=nnnn,subsystem-name
   Sets the timeout value for the specified DB2 subsystem. Display the subsystem-name by using MODIFY irlmproc,STATUS.
MODIFY irlmproc,SET,TRACE=nnn
   Sets the maximum number of trace buffers used for this IRLM.
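For example, assuming an IRLM procedure named DB2AIRLM and a DB2 subsystem named DB2A (both names are illustrative), the following command sets the timeout value to 60 for that subsystem:

MODIFY DB2AIRLM,SET,TIMEOUT=60,DB2A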
If that happens, issue the STOP irlmproc command again when the subsystems are finished with the IRLM. Alternatively, if you must stop the IRLM immediately, enter the following command to force the stop:
MODIFY irlmproc,ABEND,NODUMP
DXR165I KRLM TERMINATED VIA IRLM MODIFY COMMAND. DXR121I KRLM END-OF-TASK CLEANUP SUCCESSFUL - HI-CSA 335K - HI-ACCT-CSA 0K
DB2 abends. An IMS subsystem that uses the IRLM does not abend and can be reconnected.
IRLM uses the z/OS Automatic Restart Manager (ARM) services. However, it de-registers from ARM for normal shutdowns. IRLM registers with ARM during initialization and provides ARM with an event exit. The event exit must be in the link list. It is part of the IRLM DXRRL183 load module. The event exit ensures that the IRLM name is defined to z/OS when ARM restarts IRLM on a target z/OS system that is different from the failing z/OS system. The IRLM element name that is used for the ARM registration depends on the IRLM mode. For local-mode IRLM, the element name is a concatenation of the IRLM subsystem name and the IRLM ID. For global-mode IRLM, the element name is a concatenation of the IRLM data sharing group name, the IRLM subsystem name, and the IRLM ID.
IRLM de-registers from ARM when one of the following events occurs:
v PURGE irlmproc is issued.
v MODIFY irlmproc,ABEND,NODUMP is issued.
v DB2 automatically stops IRLM.
The command MODIFY irlmproc,ABEND,NODUMP specifies that IRLM de-register from ARM before terminating, which prevents ARM from restarting IRLM. However, it does not prevent ARM from restarting DB2, and, if you set the automatic restart manager to restart IRLM, DB2 automatically starts IRLM.
Monitoring threads
The DB2 command DISPLAY THREAD displays current information about the status of threads, including information about:
v Threads that are processing locally
v Threads that are processing distributed requests
v Stored procedures or user-defined functions if the thread is executing one of those
v Parallel tasks
Threads can be active or pooled:
v An active allied thread is a thread that is connected to DB2 from TSO, BATCH, IMS, CICS, CAF or RRSAF.
v An active database access thread is a thread connected through a network with another system and performing work on behalf of that system.
v A pooled database access thread is an idle thread that is waiting for a new unit of work from a connection to another system to begin. Pooled threads hold no database locks.
The output of the command DISPLAY THREAD can also indicate that a system quiesce is in effect as a result of the ARCHIVE LOG command. For more information, see Archiving the log on page 435.
The command DISPLAY THREAD allows you to select which type of information you wish to include in the display using one or more of the following standards:
v Active, indoubt, postponed abort, or pooled threads
v Allied threads associated with the address spaces whose connection-names are specified
v Allied threads
v Distributed threads
v Distributed threads associated with a specific remote location
v Detailed information about connections with remote locations
v A specific logical unit of work ID (LUWID)
The information returned by the DISPLAY THREAD command reflects a dynamic status. By the time the information is displayed, the status might have changed. Moreover, the information is consistent only within one address space and is not necessarily consistent across all address spaces. To use the TYPE, LOCATION, DETAIL, and LUWID keywords, you must have SYSOPR authority or higher. For detailed information, see Chapter 2 of DB2 Command Reference.
Example: The DISPLAY THREAD command displays information about active and pooled threads in the following format:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS:
DSNV402I - ACTIVE THREADS:
NAME      ST A   REQ  ID      AUTHID  PLAN  ASID
conn-name s  * req-ct corr-id auth-id pname asid
conn-name s  * req-ct corr-id auth-id pname asid
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - module_name '-DISPLAY THREAD' NORMAL COMPLETION
Example: The DISPLAY THREAD command displays information about indoubt threads in the following format:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV406I - INDOUBT THREADS
COORDINATOR       STATUS  RESET   URID          AUTHID
coordinator-name  status  yes/no  urid          authid
DISPLAY INDOUBT REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
Example: The DISPLAY THREAD command displays information about postponed aborted threads in the following format:
DSNV401I ! DISPLAY THREAD REPORT FOLLOWS
DSNV431I ! POSTPONED ABORT THREADS
COORDINATOR       STATUS   RESET  URID          AUTHID
coordinator-name  ABORT-P         urid          authid
DISPLAY POSTPONED ABORT REPORT COMPLETE
DSN9022I ! DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
More information about how to interpret this output can be found in the sections describing the individual connections and in the description of message DSNV408I in Part 2 of DB2 Messages.
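For example, a command of the following form limits the display to indoubt threads (see Chapter 2 of DB2 Command Reference for the complete TYPE syntax):

-DISPLAY THREAD (*) TYPE (INDOUBT)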
The parameters are optional, and have the following meanings:
subsystemid
   Is the subsystem ID of the DB2 subsystem to be connected.
n1 Is the number of times to attempt the connection if DB2 is not running (one attempt every 30 seconds).
n2 Is the DSN tracing system control that can be used if a problem is suspected.
For example, this invokes a DSN session, requesting 5 retries at 30-second intervals:
DSN SYSTEM (DB2) RETRY (5)
DB2I invokes a DSN session when you select any of these operations:
v SQL statements using SPUFI
v DCLGEN
v BIND/REBIND/FREE
v RUN
v DB2 commands
v Program preparation and execution
In carrying out those operations, the DB2I panels invoke CLISTs, which start the DSN session and invoke appropriate subcommands.
Table 94. Differences in display thread information for TSO and batch (continued)
Connection Name   AUTHID   Corr-ID(1)   Plan(1)
Notes:
1. After the application has connected to DB2 but before a plan has been allocated, this field is blank.
The name of the connection can have one of the following values:
Name      Connection to
TSO       Program running in TSO foreground
BATCH     Program running in TSO background
DB2CALL   Program using the call attachment facility and running in the same address space as a program using the TSO attachment facility
The correlation ID, corr-id, is either the foreground authorization ID or the background job name. For a complete description of the DISPLAY THREAD status information displayed, see the description of message DSNV404I in Part 2 of DB2 Messages. The following command displays information about TSO and CAF threads, including those processing requests to or from remote locations:
-DISPLAY THREAD(BATCH,TSO,DB2CALL)
DSNV401I = DISPLAY THREAD REPORT FOLLOWS
DSNV402I = ACTIVE THREADS
NAME     ST A    REQ ID            AUTHID  PLAN     ASID TOKEN
BATCH    T  *   2997 TEP2          SYSADM  DSNTEP41 0019 18818   1
BATCH    RA *   1246 BINETEP2      SYSADM  DSNTEP44 0022 20556   2
  V445-DB2NET.LUND1.AB0C8FB44C4D=20556 ACCESSING DATA FOR SAN_JOSE
TSO      T        12 SYSADM        SYSADM  DSNESPRR 0028  5570   3
DB2CALL  T  *  18472 CAFCOB2       SYSADM  CAFCOB2  001A 24979   4
BATCH    T  *      1 PUPPY         SYSADM  DSNTEP51 0025 20499   5
         PT *    641 PUPPY         SYSADM  DSNTEP51 002D 20500   6
         PT *    592 PUPPY         SYSADM  DSNTEP51 002D 20501   7
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I = DSNVDT '-DIS THREAD' NORMAL COMPLETION

Key:
1  This is a TSO batch application.
2  This is a TSO batch application running at a remote location and accessing tables at this location.
3  This is a TSO online application.
4  This is a call attachment facility application.
5  This is an originating thread for a TSO batch application.
6  This is a parallel thread for the originating TSO batch application thread.
7  This is a parallel thread for the originating TSO batch application thread.
Detailed information for assisting the console operator in identifying threads involved in distributed processing can be found in Monitoring threads on page 383.
You enter:
DSN SYSTEM (DSN)
DSN displays:
DSN
You enter:
RUN PROGRAM (MYPROG)
DSN displays:
DSN
You enter:
END
TSO displays:
READY
DSNC STRT
   Starts the CICS attachment facility.
CICS command responses are sent to the terminal from which the corresponding command was entered, unless the DSNC DISPLAY command specifies an alternate destination. For details on specifying alternate destinations for output, see the DSNC DISPLAY command in the DB2 Command Reference. For detailed information about controlling CICS connections, see Defining the CICS DB2 connection in the CICS DB2 Guide.
ssid specifies a DB2 subsystem ID to override that specified in the CICS INITPARM macro. You can also start the attachment facility automatically at CICS initialization using a program list table (PLT). For details, see Part 2 of DB2 Installation Guide.
Restarting CICS
One function of the CICS attachment facility is to keep data in synchronization between the two systems. If DB2 completes phase 1 but does not start phase 2 of the commit process, the units of recovery being committed are termed indoubt. An indoubt unit of recovery might occur if DB2 terminates abnormally after completing phase 1 of the commit process. CICS might commit or roll back work without DB2's knowledge. DB2 cannot resolve those indoubt units of recovery (that is, commit or roll back the changes made to DB2 resources) until the connection to CICS is restarted. This means that CICS should always be auto-started (START=AUTO in the DFHSIT table) to get all necessary information for indoubt thread resolution available from its log. Avoid cold starting. The START option can be specified in the DFHSIT table, as described in CICS Transaction Server for z/OS Resource Definition Guide. If there are CICS requests active in DB2 when a DB2 connection terminates, the corresponding CICS tasks might remain suspended even after CICS is reconnected to DB2. You should purge those tasks from CICS using a CICS-supplied transaction such as:
CEMT SET TASK(nn) FORCE
See CICS Transaction Server for z/OS CICS Supplied Transactions for more information about transactions that CICS supplies. If any unit of work is indoubt when the failure occurs, the CICS attachment facility automatically attempts to resolve the unit of work when CICS is reconnected to DB2. Under some circumstances, however, CICS cannot resolve indoubt units of recovery. You must manually recover these indoubt units of recovery (see Recovering indoubt units of recovery manually on page 389 for more information).
For an explanation of the displayed list, see the description of message DSNV408I in Part 2 of DB2 Messages.
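The RECOVER INDOUBT command takes this general form (a sketch; see Chapter 2 of DB2 Command Reference for the complete syntax):

-RECOVER INDOUBT (connection-name) ACTION (COMMIT|ABORT) ID (correlation-id)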
The default value for connection-name is the connection name from which you entered the command. correlation-id is the correlation ID of the thread to be recovered; you can determine it by issuing the command DISPLAY THREAD. Your choice for the ACTION parameter determines whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 463. One of the following messages is issued after you use the RECOVER command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED DSNV415I - THREAD correlation-id ABORT SCHEDULED
For more information about manually resolving indoubt units of recovery, see Manually recovering CICS indoubt units of recovery on page 531. For information about the two-phase commit process, as well as indoubt units of recovery, see Multiple system consistency on page 459.
For an explanation of the displayed list, see the description of message DSNV408I in Part 2 of DB2 Messages.
These commands display the threads that the resource or transaction is using. The following information is provided for each created thread:
v Authorization ID for the plan associated with the transaction (8 characters)
v PLAN/TRAN name (8 characters)
v A or I (1 character). If A is displayed, the thread is within a unit of work. If I is displayed, the thread is waiting for a unit of work, and the authorization ID is blank.
The following CICS attachment facility command is used to monitor CICS:
DSNC DISPLAY STATISTICS destination
Disconnecting applications
There is no way to disconnect a particular CICS transaction from DB2 without abending the transaction. Two ways to disconnect an application are described here:
v The DB2 command CANCEL THREAD can be used to cancel a particular thread. CANCEL THREAD requires that you know the token for any thread you want to cancel. Enter the following command to cancel the thread identified by the token indicated in the display output.
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be terminated in DB2.
v The command DSNC DISCONNECT terminates the threads allocated to a plan ID, but it does not prevent new threads from being created. This command frees DB2 resources shared by the CICS transactions and allows exclusive access to them for special-purpose processes such as utilities or data definition statements. The thread is not canceled until the application releases it for reuse, either at SYNCPOINT or end-of-task.
For complete information about the use of CICS attachment commands with DB2, see CICS DB2 Guide.
Orderly termination
It is recommended that you use orderly termination whenever possible. An orderly termination of the connection allows each CICS transaction to terminate before thread subtasks are detached. This means that there should be no indoubt units of recovery at reconnection time. An orderly termination occurs when you:
v Enter the DSNC STOP QUIESCE command. CICS and DB2 remain active.
v Enter the CICS command CEMT PERFORM SHUTDOWN, and the CICS attachment facility is also named to shut down during program list table (PLT) processing. DB2 remains active. For information about the CEMT PERFORM SHUTDOWN command, see CICS for MVS/ESA CICS-Supplied Transactions.
v Enter the DB2 command CANCEL THREAD. The thread is abended.
The following example stops the DB2 subsystem (QUIESCE), allows the currently identified tasks to continue normal execution, and does not allow new tasks to identify themselves to DB2:
-STOP DB2 MODE (QUIESCE)
This message appears when the stop process starts and frees the entering terminal (option QUIESCE):
DSNC012I THE ATTACHMENT FACILITY STOP QUIESCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
Forced termination
Although it is not recommended, there might be times when it is necessary to force the connection to end. A forced termination of the connection can abend CICS transactions connected to DB2. Therefore, indoubt units of recovery can exist at reconnect. A forced termination occurs in the following situations:
v You enter the DSNC STOP FORCE command. This command waits 15 seconds before detaching the thread subtasks and, in some cases, can achieve an orderly termination. DB2 and CICS remain active.
v You enter the CICS command CEMT PERFORM SHUTDOWN IMMEDIATE. For information about this command, see CICS for MVS/ESA CICS-Supplied Transactions. DB2 remains active.
v You enter the DB2 command STOP DB2 MODE (FORCE). CICS remains active.
v A DB2 abend occurs. CICS remains active.
v A CICS abend occurs. DB2 remains active.
v STOP is issued to the DB2 or CICS attachment facility, and the CICS transaction overflows to the pool. The transaction issues an intermediate commit. The thread is terminated at commit time, and further DB2 access is not allowed.
This message appears when the stop process starts and frees the entering terminal (option FORCE):
DSNC022I THE ATTACHMENT FACILITY STOP FORCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
The message is issued regardless of whether DB2 is active and does not imply that the connection is established. The order of starting IMS and DB2 is not vital. If IMS is started first, then when DB2 comes up, it posts the control region modify task, and IMS again tries to reconnect. If DB2 is stopped by the STOP DB2 command, the /STOP SUBSYS command, or a DB2 abend, then IMS cannot reconnect automatically. You must make the connection by using the /START command.
The following messages can be produced when IMS attempts to connect a DB2 subsystem: v If DB2 is active, these messages are sent: To the z/OS console:
DFS3613I ESS TCB INITIALIZATION COMPLETE
imsid is the IMS connection name. RC=00 means that a notify request has been queued. When DB2 starts, IMS is also notified. No message goes to the z/OS console.
Thread attachment
Execution of the program's first SQL statement causes the IMS attachment facility to create a thread and allocate a plan, whose name is associated with the IMS application program module name. DB2 sets up control blocks for the thread and loads the plan.
Using the DB2 command DISPLAY THREAD: The DB2 command DISPLAY THREAD can be used to display IMS attachment facility threads. DISPLAY THREAD output for DB2 connections to IMS differs depending on whether DB2 is connected to a DL/I batch program, a control region, a message-driven program, or a non-message-driven program. Table 95 summarizes these differences.
Table 95. Differences in DISPLAY THREAD information for IMS connections
Connection          Name               AUTHID(2)             ID(1,2)    Plan(1,2)
DL/I batch          DDITV02 statement  JOBUSER=              Job name   DDITV02 statement
Control region      IMSID              N/A                   N/A        N/A
Message driven      IMSID              Signon ID or ltermid  PST+PSB    RTT or program
Non-message driven  IMSID              AXBUSER or PSBNAME    PST+PSB    RTT or program
Notes:
1. After the application has connected to DB2 but before sign-on processing has completed, this field is blank.
2. After sign-on processing has completed but before a plan has been allocated, this field is blank.
The following command displays information about IMS threads, including those accessing data at remote locations:
-DISPLAY THREAD(imsid)
DSNV401I -STR DISPLAY THREAD REPORT FOLLOWS
DSNV402I -STR ACTIVE THREADS
NAME  ST A   REQ ID          AUTHID  PLAN    ASID TOKEN
SYS3  T  *     3 0002BMP255  ADMF001 PROGHR1 0019    99   1
SYS3  T  *     4 0001BMP255  ADMF001 PROGHR2 0018    97   2
SYS3  N        5             SYSADM          0065     0
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION

Key:
1  This is a message-driven BMP.
2  This thread has completed sign-on processing, but a DB2 plan has not been allocated.
Thread termination
When an application terminates, IMS invokes an exit routine to disconnect the application from DB2. There is no way to terminate a thread without abending the IMS application with which it is associated. Two ways of terminating an IMS application are described here:
v Termination of the application. The IMS commands /STOP REGION reg# ABDUMP or /STOP REGION reg# CANCEL can be used to terminate an application running in an online environment. For an application running in the DL/I batch environment, the z/OS command CANCEL can be used. See IMS Command Reference for more information about terminating IMS applications.
v Use of the DB2 command CANCEL THREAD. CANCEL THREAD can be used to cancel a particular thread or set of threads. CANCEL THREAD requires that you know the token for any thread you want to cancel. Enter the following command to cancel the thread identified by a token in the display output:
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be terminated in DB2.
DSNV401I -STR DISPLAY THREAD REPORT FOLLOWS
DSNV406I -STR POSTPONED ABORT THREADS - 920
COORDINATOR  STATUS   RESET  URID
SYS3         P-ABORT         00017854FF6B
  V449-HAS NID= SYS3.400000000 AND ID= 0001BMP255
BATCH        P-ABORT         00017854A8A0
  V449-HAS NID= DSN:0001.0 AND ID= RUNP10
BATCH        P-ABORT         00017854AA2E
  V449-HAS NID= DSN:0002.0 AND ID= RUNP90
BATCH        P-ABORT         0001785CD711
  V449-HAS NID= DSN:0004.0 AND ID= RUNP12
DISPLAY POSTPONED ABORT REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION
For an explanation of the list displayed, see the description of message DSNV408I in Part 2 of DB2 Messages. End of General-use Programming Interface
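To resolve such a unit of recovery manually, use a RECOVER INDOUBT command of this general form (a sketch; see Chapter 2 of DB2 Command Reference for the complete syntax):

-RECOVER INDOUBT (imsid) ACTION (COMMIT|ABORT) ID (pst#.psbname)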
Here imsid is the connection name, and pst#.psbname is the correlation ID listed by the command DISPLAY THREAD. Your choice of the ACTION parameter determines whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 463. One of the following messages is issued after you issue the RECOVER command:
DSNV414I - THREAD pst#.psbname COMMIT SCHEDULED DSNV415I - THREAD pst#.psbname ABORT SCHEDULED
  V449-HAS NID= DSN:0001.0 AND ID= RUNP10
BATCH        P-ABORT         00017854AA2E  ADMF001
  V449-HAS NID= DSN:0002.0 AND ID= RUNP90
BATCH        P-ABORT         0001785CD711  ADMF001
  V449-HAS NID= DSN:0004.0 AND ID= RUNP12
DISPLAY POSTPONED ABORT REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION
For an explanation of the displayed list, see the description of messages in Part 2 of DB2 Messages. End of General-use Programming Interface
v If DB2 is not operational, IMS has RREs that cannot be resolved until DB2 is operational. Those are not a problem.
v If DB2 is operational and connected to IMS, and if IMS rolled back the work that DB2 has committed, the IMS attachment facility issues message DSNM005I. If the data in the two systems must be consistent, this is a problem situation. Its resolution is discussed in Resolution of indoubt units of recovery on page 526.
v If DB2 is operational and connected to IMS, RREs can still exist, even though no messages have informed you of this problem. The only way to recognize this problem is to issue the IMS /DISPLAY OASN SUBSYS command after the DB2 connection to IMS has been established.
To display the RRE information, issue the command:
/DISPLAY OASN SUBSYS subsystem-name
where nnnn is the originating application sequence number listed in the display. That is the schedule number of the program instance, telling its place in the sequence of invocations of that program since the last cold start of IMS. IMS cannot have two indoubt units of recovery with the same schedule number. Those commands reset the status of IMS; they do not result in any communication with DB2.
If DB2 is not active, or if resources are not available when the first SQL statement is issued from an application program, the action taken depends on the error option specified on the SSM user entry. The options are:
Option  Action
R       The appropriate return code is sent to the application, and the SQL code is returned.
Q       The application is abended. This is a PSTOP transaction type; the input transaction is re-queued for processing and new transactions are queued.
A       The application is abended. This is a STOP transaction type; the input transaction is discarded and new transactions are not queued.
The region error option can be overridden at the program level via the resource translation table (RTT). See Part 2 of DB2 Installation Guide for further details.
From IMS:
/SSR -DISPLAY THREAD (imsid)
For an explanation of the DISPLAY THREAD status information displayed, see the description of message DSNV404I in Part 2 of DB2 Messages. More detailed information regarding use of this command and the reports it produces is available in The command DISPLAY THREAD on page 409. IMS provides a display command to monitor the connection to DB2. In addition to showing which program is active on each dependent region connection, the display also shows the LTERM user name and gives the control region connection status. The command is:
/DISPLAY SUBSYS subsystem-name
The connection between IMS and DB2 is shown as one of the following states:
CONNECTED
NOT CONNECTED
CONNECT IN PROGRESS
STOPPED
STOP IN PROGRESS
INVALID SUBSYSTEM NAME=name
SUBSYSTEM name NOT DEFINED BUT RECOVERY OUTSTANDING
The thread status from each dependent region is shown as one of the following states:
CONN
CONN, ACTIVE (includes LTERM of user)
The following four examples show the output that might be generated when an IMS /DISPLAY SUBSYS command is issued. Figure 34 shows the output that is returned for a DSN subsystem that is not connected. The IMS attachment facility issues message DSNM003I in this example.
R 45,/DIS SUBSYS NEW
IEE600I REPLY TO 45 IS;/DIS SUBSYS END
DFS000I DSNM003I IMS/TM V1 SYS3 FAILED TO CONNECT TO SUBSYSTEM DSN RC=00  SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS  SYS3
DFS000I DSN : NON CONN  SYS3
DFS000I *83228/154957*  SYS3
*46 DFS996I *IMS READY*  SYS3

Figure 34. Example of output from the IMS /DISPLAY SUBSYS command
Figure 35 shows the output that is returned for a DSN subsystem that is connected. The IMS attachment facility issues message DSNM001I in this example.
R 46,/DIS SUBSYS ALL
IEE600I REPLY TO 46 IS;/DIS SUBSYS ALL
DFS551I MESSAGE REGION MPP1 STARTED ID=0001 TIME=1551 CLASS=001,002,003,004
DFS000I DSNM001I IMS/TM=V1 SYS3 CONNECTED TO SUBSYSTEM DSN  SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS  SYS3
DFS000I DSN : CONN  SYS3
DFS000I *83228/155900*  SYS3
*47 DFS996I *IMS READY*  SYS3

Figure 35. Example of output from the IMS /DISPLAY SUBSYS command
Figure 36 shows the output that is returned for a DSN subsystem that is in a stopped status. The IMS attachment facility issues message DSNM002I in this example.
R 47,/STO SUBSYS ALL
IEE600I REPLY TO 47 IS;/STO SUBSYS ALL
DFS058I 15:59:37 STOP COMMAND IN PROGRESS  SYS3
*48 DFS996I *IMS READY*  SYS3
R 48,/DIS SUBSYS ALL
IEE600I REPLY TO 48 IS;/DIS SUBSYS ALL
DFS000I DSNM002I IMS/TM V1 SYS3 DISCONNECTED FROM SUBSYSTEM DSN RC = E.  SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS  SYS3
DFS000I DSN : STOPPED  SYS3
DFS000I *83228/155945*  SYS3
*49 DFS996I *IMS READY*  SYS3
Figure 36. Example of output from the IMS /DISPLAY SUBSYS command
Figure 37 on page 400 shows the output that is returned for a DSN subsystem that is connected; the region ID (1) is included. You can use the values from the REGID and the PROGRAM fields to correlate the output of the command to the LTERM that is involved.
R 59,/DIS SUBSYS ALL
IEE600I REPLY TO 59 IS;/DIS SUBSYS ALL
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS
DFS000I DSN :
DFS000I           1
DFS000I *83228/160938*  SYS3
*60 DFS996I *IMS READY*  SYS3
Figure 37. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected, with the region ID (1) included
That command sends the following message to the terminal that entered it, usually the master terminal operator (MTO):
DFS058I STOP COMMAND IN PROGRESS
The /START SUBSYS subsystem-name command is required to reestablish the connection. In implicit or explicit disconnect, this message is sent to the IMS master terminal:
DSNM002I IMS/TM imsid DISCONNECTED FROM SUBSYSTEM subsystem-name - RC=z
That message uses the following reason codes (RC):
Code  Meaning
A     IMS/TM is terminating normally (for instance, /CHE FREEZE|DUMPQ|PURGE). Connected threads complete.
B     IMS is abending. Connected threads are rolled back. DB2 data is backed out now; DL/I data is backed out on IMS restart.
C     DB2 is terminating normally after a STOP DB2 MODE (QUIESCE) command. Connected threads complete.
D     DB2 is terminating normally after a STOP DB2 MODE (FORCE) command, or DB2 is abending. Connected threads are rolled back. DL/I data is backed out now. DB2 data is backed out now if DB2 terminated normally; otherwise, at restart.
E     IMS is ending the connection because of a /STOP SUBSYS subsystem-name command. Connected threads complete.
If an application attempts to access DB2 after the connection ended and before a thread is established, the attempt is handled according to the region error option specification (R, Q, or A).
For RRSAF connections, a network ID is the z/OS RRS Unit of Recovery ID (URID) that uniquely identifies a unit of work. A z/OS RRS URID is a 32-character number. For an explanation of the output, see the description of message DSNV408I in Part 2 of DB2 Messages.
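To commit the associated unit of recovery, use a command of this form:

-RECOVER INDOUBT (RRSAF) ACTION (COMMIT) ID (correlation-id)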
or
-RECOVER INDOUBT (RRSAF) ACTION (ABORT) ID (correlation-id)
correlation-id is the correlation ID of the thread to be recovered. You can determine the correlation ID by issuing the DISPLAY THREAD command. The ACTION parameter indicates whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 463. If you recover a thread that is part of a global transaction, all threads in the global transaction are recovered. The following messages can occur when you issue the RECOVER INDOUBT command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED DSNV415I - THREAD correlation-id ABORT SCHEDULED
where nid is the 32-character field displayed in the DSNV449I message. For information about the two-phase commit process, as well as indoubt units of recovery, see Multiple system consistency on page 459.
For RRSAF connections, a network ID is the z/OS RRS Unit of Recovery ID (URID) that uniquely identifies a unit of work. A z/OS RRS URID is a 32-character number. For an explanation of the output, see the description of message DSNV408I in Part 2 of DB2 Messages.
DSNV401I = DISPLAY THREAD REPORT FOLLOWS
DSNV402I = ACTIVE THREADS
NAME   ST A   REQ ID            AUTHID  PLAN     ASID TOKEN
RRSAF  T      4 RRSTEST2-111    ADMF001 ?RRSAF   0024    13   1
RRSAF  T      6 RRSCDBTEST01    USRT001 TESTDBD  0024    63   2
RRSAF  DI     3 RRSTEST2-100    USRT002 ?RRSAF   001B    99   3
RRSAF  TR     9 GT01XP05        SYSADM  TESTP05  001B   235   4
  V444-DB2NET.LUND0.AA8007132465=16 ACCESSING DATA AT
  V446-SAN_JOSE:LUND1
DISPLAY ACTIVE REPORT COMPLETE

Key:
1  This is an application that used CREATE THREAD to allocate the special plan used by RRSAF (plan name = ?RRSAF).
2  This is an application that connected to DB2 and allocated a plan with the name TESTDBD.
3  This is an application that is currently not connected to a TCB (shown by status DI).
4  This is an active connection that is running plan TESTP05. The thread is accessing data at a remote site.
When you issue CANCEL THREAD, DB2 schedules the thread for termination.
Related information: The following topics in this book contain information about distributed connections: Resolving indoubt units of recovery on page 463 Database access thread failure recovery on page 560 Chapter 36, Tuning and monitoring in a distributed environment, on page 1007
When DDF is started and is responsible for indoubt thread resolution with remote partners, message DSNL432I, message DSNL433I, or both are generated. These messages summarize DDF's responsibility for indoubt thread resolution with remote partners. See Chapter 20, Maintaining consistency across multiple systems, on page 459 for information about resolving indoubt threads. Using the START DDF command requires authority of SYSOPR or higher. The following messages are associated with this command:
DSNL003I - DDF IS STARTING
DSNL004I - DDF START COMPLETE
           LOCATION   locname
           LU         netname.luname
           GENERICLU  netname.gluname
           DOMAIN     domain
           TCPPORT    tcpport
           RESPORT    resport
If the distributed data facility has not been properly installed, the START DDF command fails, and message DSN9032I, - REQUESTED FUNCTION IS NOT AVAILABLE is issued. If the distributed data facility has already been started, the START DDF command fails, and message DSNL001I, - DDF IS ALREADY STARTED is issued. Use the DISPLAY DDF command to display the status of DDF. When you install DB2, you can request that the distributed data facility start automatically when DB2 starts. For information about starting the distributed data facility automatically, see Part 2 of DB2 Installation Guide.
server threads, issue the START DDF command. For more detailed information about the STOP DDF MODE(SUSPEND) command, see Chapter 2 of DB2 Command Reference.
DB2 returns output similar to this sample when DDF is not started:
DSNL080I - DSNLTDDF DISPLAY DDF REPORT FOLLOWS
DSNL081I 1 STATUS=STOPDQ
DSNL082I 2 LOCATION  3 LUNAME          4 GENERICLU
DSNL083I   SVL650A     -NONE.SYEC650A    -NONE
DSNL084I 5 IPADDR    6 TCPPORT  7 RESPORT
DSNL085I   -NONE       447        5002
DSNL086I 8 SQL    DOMAIN=-NONE
DSNL086I 9 RESYNC DOMAIN=-NONE
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Key  Description
1    The status of the distributed data facility (DDF)
2    The location name of DDF defined in the BSDS
3    The fully qualified LU name for DDF (that is, the network ID and LUNAME)
4    The fully qualified generic LU name for DDF
406
Administration Guide
5    The IP address of DDF
6    The SQL listener TCP/IP port number
7    The two-phase commit resynchronization (resync) listener TCP/IP port number
8    The domain that accepts inbound SQL requests from remote partners
9    The domain that accepts inbound two-phase commit resynchronization requests
Key  Description
10   The DDF thread value:
     A   Indicates that DDF is configured with DDF THREADS ACTIVE
     I   Indicates that DDF is configured with DDF THREADS INACTIVE
11   The maximum number of inbound connections for database access threads
12   The maximum number of concurrent active DBATs that can execute SQL
13   The current number of active database access threads
14   The total number of queued database access threads. This count is cumulative and resets only when DB2 restarts.
15   The current number of inactive DBATs (type 1 inactive threads)
16   The current number of connection requests that are queued and waiting
17   The current number of pooled database access threads
18   The current number of inactive connections (type 2 inactive threads)
For more DISPLAY DDF message information, see Part 2 of DB2 Messages. The DISPLAY DDF DETAIL command is especially useful because it reflects the presence of new inbound connections that are not reflected by other commands. For example, if DDF is in INACTIVE MODE, as denoted by a DT value of I in the DSNL090I message, and DDF is stopped or suspended, or the maximum number of active database access threads has been reached, then new inbound connections are not yet reflected in the DISPLAY THREAD report. However, the presence of these new connections is reflected in the DISPLAY DDF DETAIL report, although specific details regarding the origin of the connections, such as the client system IP address or LU name, are not available until the connections are actually associated with a database access thread.
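For example, the following command requests the detailed report:

-DISPLAY DDF DETAIL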
You can use an asterisk (*) in place of the end characters of a location name. For example, use DISPLAY LOCATION(SAN*) to display information about all active connections between your DB2 and a remote location that begins with SAN. This includes the number of conversations and the role for each non-system conversation, requester or server.
When DB2 connects with a remote location, information about that location, including LOCATION, PRDID, and LINKNAME (LUNAME or IP address), persists in the report even if no active connections exist.
The DISPLAY LOCATION command displays the following types of information for each DBMS that has active threads, except for the local subsystem:
v The location name (or RDB_NAME) of the other connected system. If the RDBNAME is not known, the LOCATION column contains one of the following identifiers:
  A VTAM LU name in this format: <luname>.
  A dotted decimal IP address in this format: nnn.nnn.nnn.nnn.
v The PRDID, which identifies the database product at the location in the form nnnvvrrm:
  nnn - identifies the database product
  vv - product version
  rr - product release
  m - product modification level
v The corresponding LUNAME or IP address of the system.
v The number of threads at the local system that are requesting data from the remote system.
v The number of threads at the local system that are acting as a server to the remote system.
v The total number of conversations in use between the local system and the remote system.
For USIBMSTODB23, in the preceding sample output, the locations are connected and system conversations have been allocated, but currently there are no active threads between the two sites.
DB2 does not receive a location name from non-DB2 requesting DBMSs that are connected to DB2. In this case, it displays instead the LUNAME of the requesting DBMS, enclosed in less-than (<) and greater-than (>) symbols. For example, suppose there are two threads at location USIBMSTODB21. One is a distributed access thread from a non-DB2 DBMS, and the other is an allied thread going from USIBMSTODB21 to the non-DB2 DBMS. The DISPLAY LOCATION command issued at USIBMSTODB21 displays the following output:
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS
LOCATION     PRDID     LINKNAME  REQUESTERS  SERVERS  CONVS
NONDB2DBMS             LUND1     1           0        1
<LULA>       DSN04010  LULA      0           1        1
DISPLAY LOCATION REPORT COMPLETE
The following output shows the result of a DISPLAY LOCATION(*) command when DB2 is connected to the following DRDA partners:
v DB2A is connected to this DB2, using TCP/IP for DRDA connections and SNA for DB2 private protocol connections.
v DB2SERV is connected to this DB2 using only SNA.
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS
LOCATION     PRDID     LINKNAME      REQUESTERS  SERVERS  CONVS
DB2A         DSN05010  LUDB2A        3           4        9
DB2A         DSN05010  124.38.54.16  2           1        3
DB2SERV      DSN04010  LULA          1           1        3
DISPLAY LOCATION REPORT COMPLETE
The DISPLAY LOCATION command displays information for each remote location that currently is, or once was, in contact with DB2. If a location is displayed with zero conversations, one of the following conditions exists:
v Sessions currently exist with the partner location, but there are currently no active conversations allocated to any of the sessions.
v Sessions no longer exist with the partner, because contact with the partner has been lost.

If you use the DETAIL parameter, each line is followed by information about conversations owned by DB2 system threads, including those used for resynchronization of indoubt units of work.
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST(1) A(2)  REQ  ID     AUTHID   PLAN     ASID TOKEN
SERVER   RA    *     2923 DB2BP  ADMF001  DISTSERV 0036 20(3)
 V437-WORKSTATION=ARRAKIS, USERID=ADMF001,
      APPLICATION NAME=DB2BP
 V436-PGM=NULLID.SQLC27A4, SEC=201, STMNT=210
 V445-09707265.01BE.889C28200037=20(3) ACCESSING DATA FOR 9.112.12.101
 V447-LOCATION         SESSID       A ST  TIME
 V448-9.112.12.101(4)  446:1300(5)  W S2  9802812045091
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION

Key  Description
(1)  The ST (status) column contains characters that indicate the connection status of the local site. TR indicates that an allied, distributed thread has been established. RA indicates that a distributed thread has been established and is in receive mode. RD indicates that a distributed thread is performing a remote access on behalf of another location (R) and is performing an operation that involves DCE services (D). Currently, DB2 supports the optional use of DCE services to authenticate remote users.
(2)  The A (active) column contains an asterisk, indicating that the thread is active within DB2. It is blank when the thread is inactive within DB2 (active or waiting within the application).
(3)  This LUWID is unique across all connected systems. This thread has a token of 20 (it appears in two places in the display output).
(4)  This is the location of the data that the local application is accessing. If the RDBNAME is not known, the location column contains either a VTAM LUNAME or a dotted decimal IP address.
(5)  If the connection uses TCP/IP, the SESSID column contains local:remote, where local specifies the DB2 TCP/IP port number and remote specifies the partner's TCP/IP port number.
For more information about this sample output and connection status codes, see messages DSNV404I, DSNV444I, and DSNV446I in Part 2 of DB2 Messages.

Displaying information for non-DB2 locations: Because DB2 does not receive a location name from non-DB2 locations, you must enter the LUNAME or IP address of the location for which you want to display information. The LUNAME is enclosed by the less-than (<) and greater-than (>) symbols. The IP address is in the dotted decimal format. For example, to display information about a non-DB2 DBMS with the LUNAME of LUSFOS2, enter the following command:
-DISPLAY THREAD (*) LOCATION (<LUSFOS2>)
DB2 uses the <LUNAME> notation or dotted decimal format in messages that display information about non-DB2 requesters.

Displaying conversation-level information about threads: Use the DETAIL keyword with the LOCATION keyword to display information about conversation activity when distribution information is displayed for active threads. This keyword has no effect on the display of indoubt threads. See Chapter 2 of DB2 Command Reference for more information about the DETAIL keyword. For example, issue:
-DISPLAY THREAD(*) LOCATION(*) DETAIL
DB2 returns output like the following, which shows a local-site application that is waiting for a conversation to be allocated in DB2, and a DB2 server that is accessed by a DRDA client using TCP/IP:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME   ST A  REQ ID      AUTHID  PLAN     ASID TOKEN
TSO    TR *  3   SYSADM  SYSADM  DSNESPRR 002E 2
 V436-PGM=DSNESPRR.DSNESM68, SEC=1, STMNT=116
 V444-DB2NET.LUND0.A238216C2FAE=2 ACCESSING DATA AT
 V446-USIBMSTODB22:LUND1
 V447--LOCATION       SESSID            A(1) ST     TIME
 V448--USIBMSTODB22   0000000000000000  V    A1(2)  9015816504776
TSO    RA *  11  SYSADM  SYSADM  DSNESPRR 001A 15
 V445-STLDRIV.SSLU.A23555366A29=15 ACCESSING DATA FOR 123.34.101.98
 V447--LOCATION       SESSID            A    ST     TIME
 V448--123.34.101.98  446:3171(3)            S2     9015611253108
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION

Key  Description
(1)  The information on this line is part of message DSNV447I. The conversation A (active) column for the server is useful in determining when a DB2 thread is hung and whether processing is waiting in VTAM or in DB2. A value of W indicates that the thread is suspended in DB2 and is waiting for notification from VTAM that the event has completed. A value of V indicates that control of the conversation is in VTAM.
(2)  The information on this line is part of message DSNV448I. The A in the conversation ST (status) column for a serving site indicates that a conversation is being allocated in DB2. The 1 indicates that the thread uses DB2 private protocol access; a 2 would indicate DRDA access. An R in the status column would indicate that the conversation is receiving or waiting to receive a request or reply. An S in this column for a server indicates that the application is sending or preparing to send a request or reply.
(3)  The information on this line is part of message DSNV448I. The SESSID column has changed as follows: If the connection uses VTAM, the SESSID column contains a VTAM session identifier. If the connection uses TCP/IP, the SESSID column contains local:remote, where local specifies the DB2 TCP/IP port number and remote specifies the partner's TCP/IP port number.
For more DISPLAY THREAD message information, see messages DSNV447I and DSNV448I in Part 2 of DB2 Messages.

Monitoring all DBMSs in a transaction: The DETAIL keyword of the command DISPLAY THREAD allows you to monitor all of the requesting and serving DBMSs involved in a transaction. For example, you could monitor an application running at USIBMSTODB21 requesting information from USIBMSTODB22, which must establish conversations with secondary servers USIBMSTODB23 and USIBMSTODB24 to provide the requested information. Figure 39 on page 412 depicts such an example. In this example, ADA refers to DRDA access and SDA refers to DB2 private protocol access. USIBMSTODB21 is considered to be upstream from USIBMSTODB22. USIBMSTODB22 is considered to be upstream from USIBMSTODB23. Conversely, USIBMSTODB23 and USIBMSTODB22 are downstream from USIBMSTODB22 and USIBMSTODB21, respectively.
The application running at USIBMSTODB21 is connected to a server at USIBMSTODB22, using DRDA access. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB21, you receive the output that Figure 40 depicts.
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME   ST A  REQ ID     AUTHID PLAN    ASID TOKEN
BATCH  TR *  6   BKH2C  SYSADM YW1019C 0009 2
 V436-PGM=BKH2C.BKH2C, SEC=1, STMNT=4
 V444-USIBMSY.SSLU.A23555366A29=2 ACCESSING DATA AT
 V446-USIBMSTODB22:SSURLU
 V447--LOCATION      SESSID            A ST TIME
 V448--USIBMSTODB22  0000000300000004  V R2 9015611253116
DISPLAY ACTIVE REPORT COMPLETE 11:26:23
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the application is waiting for data to be returned by the server at USIBMSTODB22. The server at USIBMSTODB22 is running a package on behalf of the application at USIBMSTODB21, in order to access data at USIBMSTODB23 and USIBMSTODB24 by DB2 private protocol access. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB22, you receive the output that Figure 41 depicts.
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME   ST A  REQ ID     AUTHID PLAN    ASID TOKEN
BATCH  RA *  0   BKH2C  SYSADM YW1019C 0008 2
 V436-PGM=BKH2C.BKH2C, SEC=1, STMNT=4
 V445-STLDRIV.SSLU.A23555366A29=2 ACCESSING DATA FOR USIBMSTODB21:SSLU
 V444-STLDRIV.SSLU.A23555366A29=2 ACCESSING DATA AT
 V446-USIBMSTODB23:OSSLU USIBMSTODB24:OSSURLU
 V447--LOCATION      SESSID            A ST TIME
 V448--USIBMSTODB21  0000000300000004    S2 9015611253108
 V448--USIBMSTODB23  0000000600000002    S1 9015611253077
 V448--USIBMSTODB24  0000000900000005  V R1 9015611253907
DISPLAY ACTIVE REPORT COMPLETE 11:26:34
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the server at USIBMSTODB22 is waiting for data to be returned by the secondary server at USIBMSTODB24.
The secondary server at USIBMSTODB23 is accessing data for the primary server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB23, you receive the output that Figure 42 depicts.
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME   ST A  REQ ID     AUTHID PLAN    ASID TOKEN
BATCH  RA *  2   BKH2C  SYSADM YW1019C 0006 1
 V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR USIBMSTODB22:SSURLU
 V447--LOCATION      SESSID            A ST TIME
 V448--USIBMSTODB22  0000000600000002  W R1 9015611252369
DISPLAY ACTIVE REPORT COMPLETE 11:27:25
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the secondary server at USIBMSTODB23 is not currently active. The secondary server at USIBMSTODB24 is also accessing data for the primary server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB24, you receive the output that Figure 43 depicts:
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME   ST A  REQ ID     AUTHID PLAN    ASID TOKEN
BATCH  RA *  2   BKH2C  SYSADM YW1019C 0006 1
 V436-PGM=*.BKH2C, SEC=1, STMNT=1
 V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR USIBMSTODB22:SSURLU
 V447--LOCATION      SESSID            A ST TIME
 V448--USIBMSTODB22  0000000900000005    S1 9015611253075
DISPLAY ACTIVE REPORT COMPLETE 11:27:32
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the secondary server at USIBMSTODB24 is currently active.

The conversation status might not change for a long time. The conversation could be hung, or the processing could simply be taking a long time. To determine whether the conversation is hung, issue DISPLAY THREAD again and compare the new timestamp to the timestamps from previous output messages. If the timestamp is changing but the status is not, the job is still processing. If you need to terminate a distributed job, perhaps because it is hung and has been holding database locks for a long period of time, you can use the CANCEL DDF THREAD command if the thread is in DB2 (whether active or suspended) or the VARY NET TERM command if the thread is within VTAM. See The command CANCEL THREAD on page 415.

Displaying threads by LUWIDs: Use the LUWID optional keyword, which is valid only when DDF has been started, to display threads by logical unit of work identifiers. The LUWIDs are assigned to the thread by the site that originated the thread. You can use an asterisk (*) in an LUWID as in a LOCATION name. For example, use -DISPLAY THREAD TYPE(INDOUBT) LUWID(NET1.*) to display all the indoubt threads whose LUWID has a network name of NET1. The command
DISPLAY THREAD TYPE(INDOUBT) LUWID(IBM.NEW*) displays all indoubt threads whose LUWID has a network name of IBM and whose LUNAME begins with NEW. The DETAIL keyword can also be used with the DISPLAY THREAD LUWID command to show the status of every conversation connected to each thread displayed and to indicate whether a conversation is using DRDA access or DB2 private protocol access. To issue this command, enter:
-DIS THD(*) LUWID (luwid) DETAIL
Alternatively, you can use the following version of the command with either the token or LUWID:
-CANCEL DDF THREAD (token or luwid)
The token is a 1-character to 5-character number that identifies the thread. When DB2 schedules the thread for termination, you will see the following message for a distributed thread:
DSNL010I - DDF THREAD token or luwid HAS BEEN CANCELED
For more information about CANCEL THREAD, see Chapter 2 of DB2 Command Reference.

Diagnostic dumps: CANCEL THREAD allows you to specify that a diagnostic dump be taken. For more detailed information about diagnosing DDF failures, see Part 3 of DB2 Diagnosis Guide and Reference.

Messages: As a result of entering CANCEL THREAD, the following messages can be displayed:
DSNL009I
DSNL010I
DSNL022I
2. Record positions 3 through 16 of SESSID for the threads to be canceled. (In the preceding DISPLAY THREAD output, the values are D3590EA1E89701 and D3590EA1E89822.)
3. Issue the VTAM command DISPLAY NET to display the VTAM session IDs (SIDs). The ones you want to cancel match the SESSIDs in positions 3 through 16. In Figure 46, the corresponding session IDs (D2D3590EA1E89701 and D2D3590EA1E89822) are shown in bold.
D NET,ID=LUND0,SCOPE=ACT
IST097I DISPLAY ACCEPTED
IST075I NAME = LUND0, TYPE = APPL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST171I ACTIVE SESSIONS = 0000000010, SESSION REQUESTS = 0000
IST206I SESSIONS:
IST634I NAME    STATUS   SID               SEND  RECV
IST635I LUND1   ACTIV-S  D24B171032B76E65  0051  0043
IST635I LUND1   ACTIV-S  D24B171032B32545  0051  0043
IST635I LUND1   ACTIV-S  D24B171032144565  0051  0043
IST635I LUND1   ACTIV-S  D24B171032B73465  0051  0043
IST635I LUND1   ACTIV-S  D24B171032B88865  0051  0043
IST635I LUND1   ACTIV-R  D2D3590EA1E89701  0022  0031
IST635I LUND1   ACTIV-R  D2D3590EA1E89802  0022  0031
IST635I LUND1   ACTIV-R  D2D3590EA1E89809  0022  0031
IST635I LUND1   ACTIV-R  D2D3590EA1E89821  0022  0031
IST635I LUND1   ACTIV-R  D2D3590EA1E89822  0022  0031
IST314I END
4. Issue the VTAM command VARY NET,TERM SID= for each of the VTAM SIDs associated with the DB2 thread. For more information about VTAM commands, see VTAM for MVS/ESA Operation.
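For example, to cancel the two sessions that were identified in step 3, you might issue commands similar to the following. The session IDs are taken from the preceding display; verify the exact operand syntax in VTAM for MVS/ESA Operation.

VARY NET,TERM,SID=D2D3590EA1E89701
VARY NET,TERM,SID=D2D3590EA1E89822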
DSNX940I csect - DISPLAY PROCEDURE REPORT FOLLOWS-
           ------ SCHEMA=PAYROLL
PROCEDURE  STATUS   ACTIVE  QUED  MAXQ  TIMEOUT  FAIL  WLM_ENV
PAYRPRC1   STARTED  0       0     1     0        0     PAYROLL
PAYRPRC2   STOPQUE  0       5     5     3        0     PAYROLL
PAYRPRC3   STARTED  2       0     6     0        0     PAYROLL
USERPRC4   STOPREJ  0       0     1     0        1     SANDBOX
           ------ SCHEMA=HRPROD
PROCEDURE  STATUS   ACTIVE  QUED  MAXQ  TIMEOUT  FAIL  WLM_ENV
HRPRC1     STARTED  0       0     1     0        1     HRPROCS
HRPRC2     STOPREJ  0       0     1     0        0     HRPROCS
DISPLAY PROCEDURE REPORT COMPLETE
DSN9022I = DSNX9COM '-DISPLAY PROC' NORMAL COMPLETION
This example shows two schemas (PAYROLL and HRPROD) that have been accessed by DB2 applications. You can also display information about specific stored procedures.

The DB2 command DISPLAY THREAD: This command tells whether:
v A thread is waiting for a stored procedure to be scheduled
v A thread is executing within a stored procedure

Here is an example of DISPLAY THREAD output that shows a thread that is executing a stored procedure:
!display thread(*) det
DSNV401I ! DISPLAY THREAD REPORT FOLLOWS
DSNV402I ! ACTIVE THREADS
NAME   ST A  REQ ID      AUTHID PLAN     ASID TOKEN
BATCH  SP    3   CALLWLM SYSADM PLNAPPLX 0022 5
 V436-PGM=*.MYPROG, SEC=2, STMNT=1
 V429 CALLING PROCEDURE=SYSADM .WLMSP    ,
      PROC=V61AWLM1, ASID=0085, WLM_ENV=WLMENV1
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I ! DSNVDT '-DIS THD' NORMAL COMPLETION
The SP status indicates that the thread is executing within the stored procedure. An SW status indicates that the thread is waiting for the stored procedure to be scheduled. Here is an example of DISPLAY THREAD output that shows a thread that is executing a user-defined function:
!display thd(*) det
DSNV401I ! DISPLAY THREAD REPORT FOLLOWS
DSNV402I ! ACTIVE THREADS
NAME   ST A  REQ ID      AUTHID PLAN    ASID TOKEN
BATCH  SP    27  LI33FN1 SYSADM DSNTEP3 0021 4
 V436-PGM=*.MYPROG, SEC=2, STMNT=1
 V429 CALLING FUNCTION =SYSADM .FUNC1   ,
      PROC=V61AWLM1, ASID=0085, WLM_ENV=WLMENV1
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I ! DSNVDT '-DISPLAY THD' NORMAL COMPLETION
The z/OS command DISPLAY WLM: Use the command DISPLAY WLM to determine the status of an application environment in which a stored procedure runs. The output from DISPLAY WLM lets you determine whether a stored procedure can be scheduled in an application environment. For example, you can issue this command to determine the status of application environment WLMENV1:
D WLM,APPLENV=WLMENV1
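The response is similar to the following sketch. The message format varies by z/OS level, and the procedure name shown is illustrative.

IWM029I  14.33.54  WLM DISPLAY
 APPLICATION ENVIRONMENT NAME     STATE     STATE DATA
 WLMENV1                          AVAILABLE
 ATTRIBUTES: PROC=V61AWLM1 SUBSYSTEM TYPE: DB2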
The output tells you that WLMENV1 is available, so WLM can schedule stored procedures for execution in that environment.
In this command, name represents the name of a WLM application environment that is associated with a group of stored procedures. You affect all stored procedures that are associated with the application environment when you refresh the Language Environment.
v To stop all stored procedures address spaces that are associated with WLM application environment name, use the following z/OS command:
VARY WLM,APPLENV=name,QUIESCE
v To start all stored procedures address spaces that are associated with WLM application environment name, use the following z/OS command:
VARY WLM,APPLENV=name,RESUME
You also need to use the VARY WLM command with the RESUME option when WLM puts an application environment in the unavailable state. An application environment in which stored procedures run becomes unavailable when WLM detects five abnormal terminations within 10 minutes. When an application environment is in the unavailable state, WLM does not schedule stored procedures for execution in it. See z/OS MVS Planning: Workload Management for more information about the command VARY WLM.
You can obtain the dump information by stopping the stored procedures address space in which the stored procedure is running. See Refreshing the environment for stored procedures or user-defined functions on page 419 for information about how to stop and start stored procedures address spaces in the DB2-established and WLM-established environments.
SEL# DOMAIN RESNAME TYPE  TIME  ALERT DESCRIPTION:PROBABLE CAUSE
( 1) CNM01  AS      *RQST 09:58 SOFTWARE PROGRAM ERROR:COMM/REMOTE NODE
( 2) CNM01  AR      *SRVR 09:58 SOFTWARE PROGRAM ERROR:SNA COMMUNICATIONS
( 3) CNM01  P13008  CTRL  12:11 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 4) CNM01  P13008  CTRL  12:11 RLSD OFF DETECTED:OUTBOUND LINE
( 5) CNM01  P13008  CTRL  12:11 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 6) CNM01  P13008  CTRL  12:11 LINK ERROR:INBOUND LINE
( 7) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 8) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 9) CNM01  P13008  CTRL  12:10 LINK ERROR:INBOUND LINE
(10) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(11) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(12) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(13) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(14) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(15) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
PRESS ENTER KEY TO VIEW ALERTS-DYNAMIC OR ENTER A TO VIEW ALERTS-HISTORY
ENTER SEL# (ACTION),OR SEL# PLUS M (MOST RECENT), P (PROBLEM), DEL (DELETE)
Figure 47. Alerts-static panel in NetView. DDF errors are denoted by the resource name AS (server) and AR (requester). For DB2-only connections, the resource names would be RS (server) and RQ (requester).
To see the recommended action for solving a particular problem, enter the selection number, and then press ENTER. This displays the Recommended Action for Selected Event panel shown in Figure 48 on page 421.
N E T V I E W          SESSION DOMAIN: CNM01    OPER2      11/03/89 10:30:06
NPDA-45A             * RECOMMENDED ACTION FOR SELECTED EVENT *    PAGE 1 OF 1
CNM01        AR (1)           AS (2)
          +--------+       +--------+
 DOMAIN   |  RQST  |-------|  SRVR  |
          +--------+       +--------+
USER     CAUSED - NONE
INSTALL  CAUSED - NONE
FAILURE  CAUSED - SNA COMMUNICATIONS ERROR:
                  RCPRI=0008 RCSEC=0001
                  FAILURE OCCURRED ON RELATIONAL DATA BASE USIBMSTODB21(1)
  ACTIONS - I008 - PERFORM PROBLEM DETERMINATION PROCEDURE FOR
                   REASON CODE 00D31029(3)
            I168 - FOR RELATIONAL DATA BASE USIBMSTODB22(2)
                   REPORT THE FOLLOWING LOGICAL UNIT OF WORK IDENTIFIER
                   DB2NET.LUND0.A1283FFB0476.0001
ENTER DM (DETAIL MENU) OR D (EVENT DETAIL)
Figure 48. Recommended action for selected event panel in NetView. In this example, the AR (USIBMSTODB21) is reporting the problem, which is affecting the AS (USIBMSTODB22).
Key  Description
(1)  The system reporting the error. The system reporting the error is always on the left side of the panel. That system's name appears first in the messages. Depending on who is reporting the error, either the LUNAME or the location name is used.
(2)  The system affected by the error. The system affected by the error is always displayed to the right of the system reporting the error. The affected system's name appears second in the messages. Depending on what type of system is reporting the error, either the LUNAME or the location name is used. If no other system is affected by the error, this system does not appear on the panel.
(3)  DB2 reason code. For information about DB2 reason codes, see Part 3 of DB2 Codes. For diagnostic information, see Part 3 of DB2 Diagnosis Guide and Reference.
For more information about using NetView, see Tivoli NetView for z/OS User's Guide.
Use the QUIESCE option whenever possible; it is the default. With QUIESCE, the STOP DDF command does not complete until all VTAM or TCP/IP requests have completed. In this case, no resynchronization work is necessary when you restart DDF. If there are indoubt units of work that require resynchronization, the QUIESCE option produces message DSNL035I. Use the FORCE option only when you must stop DDF quickly. Restart times are longer if you use FORCE.
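For reference, the two forms of the command are:

-STOP DDF MODE(QUIESCE)
-STOP DDF MODE(FORCE)

Because QUIESCE is the default, -STOP DDF alone is equivalent to the first form.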
When DDF is stopped with the FORCE option, and DDF has indoubt thread responsibilities with remote partners, one or both of messages DSNL432I and DSNL433I is generated. DSNL432I shows the number of threads over which DDF has coordination responsibility with remote participants that could have indoubt threads. At these participants, database resources that are unavailable because of the indoubt threads remain unavailable until DDF is started and resolution occurs. DSNL433I shows the number of threads that are indoubt locally and need resolution from remote coordinators. At the DDF location, database resources that are unavailable because of the indoubt threads remain unavailable until DDF is started and resolution occurs.

To force the completion of outstanding VTAM or TCP/IP requests, use the FORCE option, which cancels the threads that are associated with distributed requests. When the FORCE option is specified with STOP DDF, database access threads in the prepared state that are waiting for the commit or abort decision from the coordinator are logically converted to the indoubt state. The conversation with the coordinator is terminated. If the thread is also a coordinator of downstream participants, these conversations are terminated. Automatic indoubt resolution is initiated when DDF is restarted. See Resolving indoubt units of recovery on page 463 for more information about this topic.

The STOP DDF command causes the following messages to appear:
DSNL005I - DDF IS STOPPING DSNL006I - DDF STOP COMPLETE
If the distributed data facility has already been stopped, the STOP DDF command fails and message DSNL002I - DDF IS ALREADY STOPPED appears.

Stopping DDF using VTAM commands: Another way to force DDF to stop is to issue the VTAM VARY NET,INACT command. This command makes VTAM unavailable and terminates DDF. VTAM forces the completion of any outstanding VTAM requests immediately. The syntax for the command is as follows:
VARY NET,INACT,ID=db2lu,FORCE
where db2lu is the VTAM LU name for the local DB2 system. When DDF has stopped, the following command must be issued before START DDF can be attempted:
VARY NET,ACT,ID=db2lu
Controlling traces
These traces can be used for problem determination:
v DB2 trace
v IMS attachment facility trace
v CICS trace
v Three TSO attachment facility traces
v CAF trace stream
v RRS trace stream
v z/OS component trace used for IRLM
When you install DB2, you can request that any trace type and class start automatically when DB2 starts. For information about starting traces automatically, see Part 2 of DB2 Installation Guide.
End of General-use Programming Interface
TRACE CT Starts, stops, or modifies a diagnostic trace for IRLM. The TRACE CT command does not know about traces that are started automatically during IRLM startup.

Recommendations:
v Do not use the external component trace writer to write traces to the data set.
v Activate all traces during IRLM startup. Use the command START irlmproc,TRACE=YES to activate all traces.

See Chapter 2 of DB2 Command Reference for detailed information.
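For example, assuming an IRLM startup procedure named IRLMPROC (the procedure name is installation-defined), the following z/OS START command activates all IRLM traces at startup:

S IRLMPROC,TRACE=YES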
2. Assemble and link-edit the DSNTIJUZ job produced in Step 1, and then submit the job to create the new load module with the new subsystem parameter values. 3. Issue the SET SYSPARM command to change the subsystem parameters dynamically:
SET SYSPARM LOAD(load-module-name)
where you specify the load-module-name to be the same as the output member name in Step 1. If you want to specify the load module name that is used during DB2 start up, you can issue the following command:
SET SYSPARM RELOAD
For more information, see Part 2 of DB2 Installation Guide and Chapter 2 of DB2 Command Reference.
Chapter 18. Managing the log and the bootstrap data set
The DB2 log registers data changes and significant events as they occur. The bootstrap data set (BSDS) is a repository of information about the data sets that contain the log. DB2 writes each log record to a disk data set called the active log. When the active log is full, DB2 copies its contents to a disk or tape data set called the archive log. That process is called offloading.

This chapter describes:
v How database changes are made
v Establishing the logging environment on page 429
v Managing the bootstrap data set (BSDS) on page 441
v Discarding archive log records on page 443

For information about the physical and logical records that make up the log, see Appendix C, Reading log records, on page 1115. That appendix also contains information about how to write a program to read log records.
Units of recovery
A unit of recovery is the work, done by a single DB2 DBMS for an application, that changes DB2 data from one point of consistency to another. A point of consistency (also, sync point or commit point) is a time when all recoverable data that an application program accesses is consistent with other data. (For an explanation of maintaining consistency between DB2 and another subsystem such as IMS or CICS, see Multiple system consistency on page 459.) A unit of recovery begins with the first change to the data after the beginning of the job or following the last point of consistency and ends at a later point of consistency. An example of units of recovery within an application program is shown in Figure 49 on page 428.
Figure 49. A unit of recovery within an application process. The time line shows an application process whose unit of recovery contains SQL transaction 1 (from SQLT1 begins to SQLT1 ends) followed by SQL transaction 2 (from SQLT2 begins to SQLT2 ends).
In this example, the application process makes changes to databases at SQL transaction 1 and 2. The application process can include a number of units of recovery or just one, but any complete unit of recovery ends with a commit point. For example, a bank transaction might transfer funds from account A to account B. First, the program subtracts the amount from account A. Next, it adds the amount to account B. After the amount is subtracted from account A, the two accounts are inconsistent, and they remain inconsistent until the amount is added to account B. When both steps are complete, the program can announce a point of consistency and thereby make the changes visible to other application programs. Normal termination of an application program automatically causes a point of consistency. The SQL COMMIT statement causes a point of consistency during program execution under TSO. A sync point causes a point of consistency in CICS and IMS programs.
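A minimal sketch of such a unit of recovery in embedded SQL follows. The table, column, and host variable names are illustrative only:

EXEC SQL UPDATE ACCOUNTS              -- debit account A
           SET BALANCE = BALANCE - :XFERAMT
           WHERE ACCTNO = :ACCTA;
EXEC SQL UPDATE ACCOUNTS              -- credit account B
           SET BALANCE = BALANCE + :XFERAMT
           WHERE ACCTNO = :ACCTB;
EXEC SQL COMMIT;                      -- point of consistency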
Figure: A time line showing database updates followed by Begin rollback; the updates are backed out, returning the data to its state at the previous point of consistency.
The effects of inserts, updates, and deletes to large object (LOB) values are backed out along with all the other changes made during the unit of work being rolled back, even if the LOB values that were changed reside in a LOB table space with the LOG NO attribute.

An operator or an application can issue the CANCEL THREAD command with the NOBACKOUT option to cancel long-running threads without backing out data changes. DB2 backs out changes to catalog and directory tables regardless of the NOBACKOUT option. As a result, DB2 does not read the log records and does not write or apply the compensation log records. After CANCEL THREAD NOBACKOUT processing, DB2 marks all objects associated with the thread as refresh pending (REFP) and puts the objects in a logical page list (LPL). For information about how to reset the REFP status, see DB2 Utility Guide and Reference.

The NOBACKOUT request might fail for either of the following two reasons:
v DB2 does not completely back out updates of the catalog or directory (message DSNI032I with reason 00C900CC).
v The thread is part of a global transaction (message DSNV439I).
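For example, to cancel a hung distributed thread with token 123 without backing out its data changes, you might issue the following command. The token is illustrative; obtain the actual token from DISPLAY THREAD output:

-CANCEL THREAD(123) NOBACKOUT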
b. The active logs. The bootstrap data set registers which log RBAs apply to each active or archive log data set. If the record is in an active log, DB2 dynamically acquires a buffer, reads one or more CIs, and returns one record for each request.
c. The archive logs. DB2 determines which archive volume contains the CIs, dynamically allocates the archive volume, acquires a buffer, and reads the CIs.
Triggering offload
An offload of an active log to an archive log can be triggered by several events. The most common are when:
v An active log data set is full
v DB2 starts and an active log data set is full
v The command ARCHIVE LOG is issued

An offload is also triggered by two uncommon events:
v An error occurring while writing to an active log data set. The data set is truncated before the point of failure, and the record that failed to write becomes the first record of the next data set. An offload is triggered for the truncated data set as in normal end-of-file. If there are dual active logs, both copies are truncated so the two copies remain synchronized.
v Filling of the last unarchived active log data set. Message DSNJ110E is issued, stating the percentage of its capacity in use; IFCID trace record 0330 is also issued if statistics class 3 is active. If all active logs become full, DB2 stops processing until offloading occurs and issues this message:
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
The operator need not respond to message DSNJ008E immediately. However, delaying the response delays the offload process. It does not affect DB2 performance unless the operator delays response for so long that DB2 runs out of active logs. The operator can respond by canceling the offload. In that case, if the allocation is for the first copy of dual archive data sets, the offload is merely delayed until the next active log data set becomes full. If the allocation is for the second copy, the archive process switches to single copy mode, but for that one data set only.

Delay of log offload task: When DB2 switches active logs and finds that the offload task has been active since the last log switch, it issues the following message to notify the operator that there might be an outstanding tape mount or some other problem that prevents the offload of the previous active log data set:
DSNJ017E - csect-name WARNING - OFFLOAD TASK HAS BEEN ACTIVE SINCE date-time AND MAY HAVE STALLED
DB2 continues processing. Issue the ARCHIVE LOG CANCEL OFFLOAD command to cancel and restart the offload task. If the offload task remains stalled, the active logs eventually become full and DB2 stops database update activity. To view the status of the offload task, issue the DISPLAY LOG command.

Messages returned during offloading: The following messages are sent to the z/OS console by DB2 and the offload process. With the exception of the DSNJ139I message, these messages can be used to find the RBA ranges in the various log data sets.
v The following message appears during DB2 initialization when the current active log data set is found, and after a data set switch. During initialization, the STARTRBA value in the message does not refer to the beginning of the data set, but to the position in the log where logging will begin.
DSNJ001I - csect-name CURRENT COPY n ACTIVE LOG DATA SET IS DSNAME=..., STARTRBA=..., ENDRBA=...
v The following message appears when offload reaches end-of-volume or end-of-data-set in an archive log data set. The non-data-sharing version is:
DSNJ003I - FULL ARCHIVE LOG VOLUME DSNAME=..., STARTRBA=..., ENDRBA=..., STARTTIME=..., ENDTIME=..., UNIT=..., COPYnVOL=..., VOLSPAN=..., CATLG=...
v The following message appears when one data set of the next pair of active logs is not available because of a delay in offloading, and logging continues on one copy only:
DSNJ004I - ACTIVE LOG COPY n INACTIVE, LOG IN SINGLE MODE, ENDRBA=...
v The following message appears when dual active logging resumes after logging has been carried on with one copy only:
DSNJ005I - ACTIVE LOG COPY n IS ACTIVE, LOG IN DUAL MODE, STARTRBA=...
v The following message indicates that the offload task has ended:
DSNJ139I LOG OFFLOAD TASK ENDED
Interruptions and errors while offloading: Here is how DB2 handles the following interruptions in the offloading process:
v The command STOP DB2 does not take effect until offloading is finished.
v A DB2 failure during offload causes offload to begin again from the previous start RBA when DB2 is restarted.
v Offload handling of read I/O errors on the active log is described under Active log failure recovery on page 535; write I/O errors on the archive log are covered under Archive log failure recovery on page 539.
v An unknown problem that causes the offload task to hang means that DB2 cannot continue processing the log. This problem might be resolved by retrying the offload, which you can do by using the option CANCEL OFFLOAD of the command ARCHIVE LOG, described in Canceling log off-loads on page 437.
catalog request for the first file on the next volume. Though that might appear to be an error in the integrated catalog facility catalog, it causes no problems in DB2 processing.

If you choose to offload to tape, consider adjusting the size of your active log data sets so that each set contains the amount of space that can be stored on a nearly full tape volume. That adjustment minimizes tape handling and volume mounts and maximizes the use of tape resources. However, such an adjustment is not always necessary. If you want the active log data set to fit on one tape volume, consider placing a copy of the BSDS on the same tape volume as the copy of the active log data set. Adjust the size of the active log data set downward to offset the space required for the BSDS.

Archiving to disk volumes: All archive log data sets allocated on disk must be cataloged. If you choose to archive to disk, the field CATALOG DATA of installation panel DSNTIPA must contain YES. If this field contains NO, and you decide to place archive log data sets on disk, you receive message DSNJ072E each time an archive log data set is allocated, although the DB2 subsystem still catalogs the data set.

If you use disk storage, be sure that the primary and secondary space quantities and block size and allocation unit are large enough so that the disk archive log data set does not attempt to extend beyond 15 volumes. That minimizes the possibility of unwanted z/OS B37 or E37 abends during the offload process. Primary space allocation is set with the PRIMARY QUANTITY field of the DSNTIPA installation panel. The primary space quantity must be less than 64K tracks because of the DFSMS Direct Access Device Space Management limit of 64K tracks on a single volume when allocating a sequential disk data set.

Using SMS to manage archive log data sets: You can use DFSMS (Data Facility Storage Management Subsystem) to manage archive log data sets. When archiving to disk, DB2 uses the number of online storage volumes for the specified unit to determine a count of candidate volumes, up to a maximum of 15 volumes. If you are using SMS to direct archive log data set allocation, override this candidate volume count by specifying YES for the field SINGLE VOLUME on installation panel DSNTIPA. This allows SMS to manage the allocation volume count appropriately when creating multivolume disk archive log data sets.

Because SMS requires disk data sets to be cataloged, make sure the field CATALOG DATA on installation panel DSNTIPA contains YES. Even if it does not, message DSNJ072E is returned, and DB2 forces the data set to be cataloged.

DB2 uses the basic direct access method (BDAM) to read archive logs from disk. DFSMS does not support reading of compressed data sets using BDAM. Therefore, do not use DFSMS hardware compression on your archive log data sets. Also, BDAM does not support extended sequential format data sets (which are used for striped or compressed data), so do not have DFSMS assign the extended sequential format to your archive log data sets. Ensure that DFSMS does not alter the LRECL, BLKSIZE, or RECFM of the archive log data sets. Altering these attributes could result in read errors when DB2 attempts to access the log data.
Attention: DB2 does not issue an error or a warning if you write or alter archive data to an unreadable format. For example, if DB2 successfully writes archive log data to an extended format data set, DB2 issues an error message only when you attempt to read that data.
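The paragraphs that follow describe what happens when you issue the ARCHIVE LOG command. In its simplest form, the command is:

-ARCHIVE LOG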
When you issue the preceding command, DB2 truncates the current active log data sets, then runs an asynchronous offload, and updates the BSDS with a record of the offload. The RBA that is recorded in the BSDS is the beginning of the last complete log record written in the active log data set being truncated. You could use the ARCHIVE LOG command as follows to capture a point of consistency for the MSTR01 and XUSR17 databases:
-STOP DATABASE (MSTR01,XUSR17) -ARCHIVE LOG -START DATABASE (MSTR01,XUSR17)
In this simple example, the STOP command stops activity for the databases before archiving the log.

Quiescing activity before offloading: Another method of ensuring that activity has stopped before the log is archived is the MODE(QUIESCE) option of ARCHIVE LOG. With this option, DB2 users are quiesced after a commit point, and the resulting point of consistency is captured in the current active log before it is offloaded. Unlike the QUIESCE utility, ARCHIVE LOG MODE(QUIESCE) does not force all changed buffers to be written to disk and does not record the log RBA in SYSIBM.SYSCOPY. It does record the log RBA in the bootstrap data set.

Consider using MODE(QUIESCE) when planning for offsite recovery. It creates a system-wide point of consistency, which can minimize the number of data inconsistencies when the archive log is used with the most current image copy during recovery.

In a data sharing group, ARCHIVE LOG MODE(QUIESCE) might result in a delay before activity on all members has stopped. If this delay is unacceptable to you,
consider using ARCHIVE LOG SCOPE(GROUP) instead. This command causes truncation and offload of the logs for each active member of a data sharing group. Although the resulting archive log data sets do not reflect a point of consistency, all the archive logs are made at nearly the same time and have similar LRSN values in their last log records. When you use this set of archive logs to recover the data sharing group, you can use the ENDLRSN option in the CRESTART statement of the change log inventory utility (DSNJU003) to truncate all the logs in the group to the same point in time. See DB2 Data Sharing: Planning and Administration for more information. The MODE(QUIESCE) option suspends all new update activity on DB2 up to the maximum period of time specified on the installation panel DSNTIPA, described in Part 2 of DB2 Installation Guide. If the time needed to quiesce is less than the time specified, then the command completes successfully; otherwise, the command fails when the time period expires. This time amount can be overridden when you issue the command, by using the TIME option:
-ARCHIVE LOG MODE(QUIESCE) TIME(60)
The preceding command allows for a quiesce period of up to 60 seconds before archive log processing occurs.
Important: Use of this option during prime time, or when time is critical, can cause a significant disruption in DB2 availability for all jobs and users that use DB2 resources.

By default, the command is processed asynchronously from the time you submit the command. (To process the command synchronously with other DB2 commands, use the WAIT(YES) option with QUIESCE; the z/OS console is then locked from DB2 command input for the entire QUIESCE period.)

During the quiesce period:
v Jobs and users on DB2 are allowed to go through commit processing, but are suspended if they try to update any DB2 resource after the commit.
v Jobs and users that only read data can be affected, because they can be waiting for locks held by jobs or users that were suspended.
v New tasks can start, but they are not allowed to update data.

As shown in the following example, the DISPLAY THREAD output issues message DSNV400I to indicate that a quiesce is in effect:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV400I - ARCHIVE LOG QUIESCE CURRENTLY ACTIVE
DSNV402I - ACTIVE THREADS
NAME    ST A   REQ ID       AUTHID  PLAN     ASID  TOKEN
BATCH   T  *   20  TEPJOB   SYSADM  DSNTEP3  0012  12
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
When all updates are quiesced, the quiesce history record in the BSDS is updated with the date and time that the active log data sets were truncated, and with the last-written RBA in the current active log data sets. DB2 truncates the current active log data sets, switches to the next available active log data sets, and issues message DSNJ311E, stating that offload started.
If updates cannot be quiesced before the quiesce period expires, DB2 issues message DSNJ317I, and archive log processing terminates. The current active log data sets are not truncated and not switched to the next available log data sets, and offload is not started. Whether the quiesce was successful or not, all suspended users and jobs are then resumed, and DB2 issues message DSNJ312I, stating that the quiesce is ended and update activity is resumed. If ARCHIVE LOG is issued when the current active log is the last available active log data set, the command is not processed, and DB2 issues this message:
DSNJ319I - csect-name CURRENT ACTIVE LOG DATA SET IS THE LAST AVAILABLE ACTIVE LOG DATA SET. ARCHIVE LOG PROCESSING WILL BE TERMINATED.
If ARCHIVE LOG is issued when another ARCHIVE LOG command is already in progress, the new command is not processed, and DB2 issues this message:
DSNJ318I - ARCHIVE LOG COMMAND ALREADY IN PROGRESS.
Canceling log offloads: It is possible for the offload of an active log to be suspended when something goes wrong with the offload process, such as a problem with allocation or tape mounting. If the active logs cannot be offloaded, DB2's active log data sets fill up and DB2 stops logging. To avoid this problem, use the following command to cancel (and retry) an offload:
-ARCHIVE LOG CANCEL OFFLOAD
When you enter the command, DB2 restarts the offload, beginning with the oldest active log data set and proceeding through all active log data sets that need offloading. If the offload fails again, you must fix the problem that is causing the failure before the command can work.
End of General-use Programming Interface
The CHKFREQ value that is altered by the SET LOG command persists only while DB2 is active. On restart, DB2 uses the CHKFREQ value in the DB2 subsystem parameter load module. See Chapter 2 of DB2 Command Reference for detailed information about this command.
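For example, to change the checkpoint frequency dynamically so that DB2 takes a checkpoint about every 500 000 log records, you might issue a command similar to the following. The value is illustrative; see Chapter 2 of DB2 Command Reference for the options that your release supports:

-SET LOG LOGLOAD(500000)

You can verify the change by issuing the DISPLAY LOG command.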
DB2 continues processing. This situation can result in a very long restart if logging continues without a system checkpoint. If DB2 continues logging beyond the defined checkpoint frequency, you should quiesce activity and terminate DB2 to minimize the restart time. You can issue the DISPLAY LOG command or run the Print Log Map utility (DSNJU004) to display the most recent checkpoint. For additional information, see Displaying log information.
mode. Then, DB2 stops again. In this situation, you need to restart DB2 in ACCESS(MAINT) mode, and you must reset the log RBA value.
v Calculate how much space is left in the log. You can use the print log map (DSNJU004) utility to obtain the highest written RBA value in the log. Subtract this RBA from x'FFFFFFFFFFFF' to determine how much space is left in the log. If APAR PK27611 is applied, use the RBA value of x'FFFF00000000' for this calculation.
You can use the output from the print log map utility to determine how many archive logs are created on an average day. This number multiplied by the RBA range of the archive log data sets (ENDRBA minus STARTRBA) provides the average number of bytes that are logged per day. Divide this value into the space remaining in the log to determine approximately how much time is left before the end of the log RBA range is reached. If less than one year remains before the end of the log RBA range is reached, start planning to reset the log RBA value. If less than three months remain, take immediate action to reset the log RBA value.
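As a worked example, suppose that the print log map utility shows a highest written RBA of x'F00000000000', and that on an average day 20 archive log data sets are created, each spanning about 2 GB of RBA range. All of these numbers are illustrative:

x'FFFF00000000' - x'F00000000000' = x'0FFF00000000' (about 17.6 TB remaining)
20 archive logs per day x 2 GB    = about 40 GB logged per day
17.6 TB / 40 GB per day           = roughly 440 days (about 14 months)

Because more than one year remains in this example, you would start planning to reset the log RBA value but would not yet need to take immediate action.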
b. If you are running an application that reads the logs with IFI (for example, a data replication product), apply the PTF for APAR PK81566 before restarting this member.
c. Make a full image copy of all data. To make a full image copy, the member might need to remain quiesced for a period of time. The length of time depends on the size of the databases. After all of the data has been image copied, you no longer need the member logs for recovery.
d. Cold start this member back to the RBA value of 0 (zero). This step removes all log data from the BSDS, and you can use the member again. This step requires utility DSNJU003 with the following options:
CRESTART CREATE,STARTRBA=0,ENDRBA=0
10. If you did not reset the indexes by using DSN1COPY as specified in step 8, rebuild all indexes. Start by rebuilding the catalog and directory indexes, and then rebuild the indexes on user data.
11. Take new, full image copies of all data. If you applied the PTF for APAR PK28576, run the COPY utility with option SHRLEVEL REFERENCE to automatically reset the RBA values in all of the page sets. This step can be performed in parallel with the previous step after the catalog and directory indexes are rebuilt.
12. Stop DB2. If applicable, disable the reset RBA function in the COPY utility, and restart DB2 for normal access.
Conversely, the maximum number of archive log data sets could have been exceeded, and the data dropped from the BSDS long before the data set reaches its expiration date. For additional information, refer to Deleting archive logs automatically on page 443.

If you specified at installation that archive log data sets are cataloged when allocated, the BSDS points to the integrated catalog facility catalog for the information needed for later allocations. Otherwise, the BSDS entries for each volume register the volume serial number and unit information that is needed for later allocation.
You can change the BSDS by running the DB2 batch change log inventory (DSNJU003) utility. Do not run this utility when DB2 is active; running it while DB2 is active can produce inconsistent results. For instructions on how to use the change log inventory utility, see Part 3 of DB2 Utility Guide and Reference.

You can copy an active log data set by using the access method services IDCAMS REPRO statement. The copy can be performed only when DB2 is down, because DB2 allocates the active log data sets as exclusive (DISP=OLD) at DB2 startup. For more information about the REPRO statement, see DFSMS/MVS: Access Method Services for the Integrated Catalog and z/OS DFSMS Access Method Services for Catalogs.
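For example, the following job copies one active log data set with IDCAMS REPRO while DB2 is down. The data set names are illustrative:

//COPYLOG  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(DSNC810.LOGCOPY1.DS01) -
        OUTDATASET(DSNC810.LOGCOPY1.BACKUP)
/*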
archive log data set inventory wraps and automatically deletes the oldest entries. See Managing the bootstrap data set (BSDS) on page 441 for more details.
If you suspended DB2 activity while performing step 1, you can restart it now.

Step 3: Find the minimum log RBA needed: Suppose that you have determined to keep some number of complete image copy cycles of your least-frequently-copied table space. You now need to find the log RBA of the earliest full image copy you want to keep.
1. If you have any table spaces so recently created that no full image copies of them have ever been taken, take full image copies of them. If you do not take image copies of them, and you discard the archive logs that log their creation,
DB2 can never recover them.
General-use Programming Interface
The following SQL statement lists table spaces that have no full image copy:
SELECT X.DBNAME, X.NAME, X.CREATOR, X.NTABLES, X.PARTITIONS
  FROM SYSIBM.SYSTABLESPACE X
  WHERE NOT EXISTS (SELECT * FROM SYSIBM.SYSCOPY Y
                     WHERE X.NAME = Y.TSNAME
                       AND X.DBNAME = Y.DBNAME
                       AND Y.ICTYPE = 'F')
  ORDER BY 1, 3, 2;
End of General-use Programming Interface
2. Issue the following SQL statement to find START_RBA values:
General-use Programming Interface
SELECT DBNAME, TSNAME, DSNUM, ICTYPE, ICDATE, HEX(START_RBA)
  FROM SYSIBM.SYSCOPY
  ORDER BY DBNAME, TSNAME, DSNUM, ICDATE;
End of General-use Programming Interface

The statement lists all databases and the table spaces within them, in ascending order by date. Find the START_RBA for the earliest full image copy (ICTYPE=F) that you intend to keep. If your least-frequently-copied table space is partitioned, and you take full image copies by partition, use the earliest date for all the partitions. If you are going to discard records from SYSIBM.SYSCOPY and SYSIBM.SYSLGRNX, note the date of the earliest image copy you want to keep.

Step 4: Copy catalog and directory tables: Take full image copies of the DB2 table spaces listed in Table 96 to ensure that copies of these table spaces are included in the range of log records you will keep.
Table 96. Catalog and directory tables to copy

Database name   Table space names
DSNDB01         DBD01, SCT02, SPT01, SYSLGRNX, SYSUTILX
DSNDB06         SYSCOPY, SYSDBASE, SYSDBAUT, SYSGPAUT, SYSGROUP, SYSPKAGE,
                SYSPLAN, SYSSTATS, SYSSTR, SYSUSER, SYSVIEWS
Step 5: Locate and discard archive log volumes: Now that you know the minimum LOGRBA, from step 3, suppose that you want to find archive log volumes that contain only log records earlier than that. Proceed as follows: 1. Execute the print log map utility to print the contents of the BSDS. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference.
2. Find the sections of the output titled ARCHIVE LOG COPY n DATA SETS. (If you use dual logging, there are two sections.) The columns labeled STARTRBA and ENDRBA show the range of log RBAs contained in each volume. Find the volumes (two, for dual logging) whose ranges include the minimum log RBA you found in step 3; these are the earliest volumes you need to keep.
If no volumes have an appropriate range, one of these cases applies:
v The minimum LOGRBA has not yet been archived, and you can discard all archive log volumes.
v The list of archive log volumes in the BSDS wrapped around when the number of volumes exceeded the number allowed by the RECORDING MAX field of installation panel DSNTIPA. If the BSDS does not register an archive log volume, it can never be used for recovery. Therefore, consider adding information about existing volumes to the BSDS. For instructions, see Part 3 of DB2 Utility Guide and Reference. Also consider increasing the value of MAXARCH. For information, see the description of installation panel DSNTIPA in Part 2 of DB2 Installation Guide.
3. Delete any archive log data set or volume (both copies, for dual logging) whose ENDRBA value is less than the STARTRBA value of the earliest volume you want to keep. Because BSDS entries wrap around, the first few entries in the BSDS archive log section might be more recent than the entries at the bottom. Look at the combination of date and time to compare age. Do not assume you can discard all entries above the entry for the archive log containing the minimum LOGRBA.
Delete the data sets. If the archives are on tape, scratch the tapes; if they are on disks, run a z/OS utility to delete each data set. Then, if you want the BSDS to list only existing archive volumes, use the change log inventory utility to delete entries for the discarded volumes; for an example, see Part 3 of DB2 Utility Guide and Reference.
Termination
DB2 terminates normally in response to the command STOP DB2. If DB2 stops for any other reason, the termination is considered abnormal.
Normal termination
In a normal termination, DB2 stops all activity in an orderly way. You can use either STOP DB2 MODE (QUIESCE) or STOP DB2 MODE (FORCE). The effects are given in Table 97.
Table 97. Termination using QUIESCE and FORCE

Thread type       QUIESCE              FORCE
Active threads    Run to completion    Roll back
New threads       Permitted            Not permitted
New connections   Not permitted        Not permitted
You can use either command to prevent new applications from connecting to DB2. When you issue the command STOP DB2 MODE(QUIESCE), current threads can run to completion, and new threads can be allocated to an application that is running.

With IMS and CICS, STOP DB2 MODE(QUIESCE) allows a current thread to run only to the end of the unit of recovery, unless either of the following conditions is true:
v There are open, held cursors.
v Special registers are not in their original state.
Before DB2 can come down, all held cursors must be closed and all special registers must be in their original state, or the transaction must complete.

With CICS, QUIESCE mode brings down the CICS attachment facility, so an active task will not necessarily run to completion. For example, assume that a CICS transaction opens no cursors declared WITH HOLD and modifies no special registers, as follows:
EXEC SQL
   .
   .                 -STOP DB2 MODE(QUIESCE) issued here
   .
SYNCPOINT
   .
   .
   .
EXEC SQL
The thread is allowed to run only through the first SYNCPOINT. When you issue the command STOP DB2 MODE(FORCE), no new threads are allocated, and work on existing threads is rolled back. During shutdown, use the command DISPLAY THREAD to check its progress. If shutdown is taking too long, you can issue STOP DB2 MODE(FORCE), but rolling back work can take as much time as, or more time than, the completion of QUIESCE.

When stopping in either mode, the following steps occur:
1. Connections end.
2. DB2 ceases to accept commands.
3. DB2 disconnects from the IRLM.
4. The shutdown checkpoint is taken and the BSDS is updated.

A data object could be left in an inconsistent state, even after a shutdown with mode QUIESCE, if it was made unavailable by the command STOP DATABASE, or if DB2 recognized a problem with the object. MODE (QUIESCE) does not wait for asynchronous tasks that are not associated with any thread to complete before it stops DB2. This can result in commands such as STOP DATABASE and START DATABASE having outstanding units of recovery when DB2 stops. These become inflight units of recovery when DB2 is restarted and are then returned to their original states.
Abends
An abend can leave data in an inconsistent state for any of the following reasons:
• Units of recovery might be interrupted before reaching a point of consistency.
• Committed data might not be written to external media.
• Uncommitted data might be written to external media.
DB2 restart consists of four phases:
1: Log initialization
2: Current status rebuild on page 450
3: Forward log recovery on page 451
4: Backward log recovery on page 452
In the descriptions that follow, the terms inflight, indoubt, in-commit, and in-abort refer to statuses of a unit of work that is coordinated between DB2 and another system, such as CICS, IMS, or a remote DBMS. For definitions of those terms, see Maintaining consistency after termination or failure on page 461.

At the end of the fourth phase of recovery, a checkpoint is taken, and committed changes are reflected in the data.

Application programs that do not commit often enough cause long-running units of recovery (URs). These long-running URs might be inflight after a DB2 failure, and inflight URs can extend DB2 restart time. You can restart DB2 more quickly by postponing the backout of long-running URs. Use installation options LIMIT BACKOUT and BACKOUT DURATION to establish what work to delay during restart.

If your DB2 subsystem has the UR checkpoint count option enabled, DB2 generates console message DSNR035I and trace records for IFCID 0313 to inform you about long-running URs. The UR checkpoint count option is enabled at installation time, through field UR CHECK FREQ on panel DSNTIPL. See Part 2 of DB2 Installation Guide for more information about enabling this option.

If your DB2 subsystem has the UR log threshold option enabled, DB2 generates console message DSNB260I when an inflight UR writes more than the installation-defined number of log records. DB2 also generates trace records for IFCID 0313 to inform you about these long-running URs. You can enable the UR log threshold option at installation time, through field UR LOG WRITE CHECK on panel DSNTIPL. See Part 2 of DB2 Installation Guide for more information about enabling this option.

You can restart a large object (LOB) table space like other table spaces. LOB table spaces defined with LOG NO do not log LOB data, but they log enough control information (and follow a force-at-commit policy) so that they can restart without loss of data integrity.
Without the check, the next DB2 session could conceivably update an entirely different catalog and set of table spaces. If the check fails, you presumably have the wrong parameter module. Start DB2 with the command START DB2 PARM(module-name), and name the correct module.
2. Checks the consistency of the timestamps in the BSDS.
• If both copies of the BSDS are current, DB2 tests whether the two timestamps are equal. If they are equal, processing continues with step 3. If they are not equal, DB2 issues message DSNJ120I and terminates. That can happen when the two copies of the BSDS are maintained on separate disk volumes (as recommended) and one of the volumes is restored while DB2 is stopped. DB2 detects the situation at restart. To recover, copy the BSDS with the latest timestamp to the BSDS on the restored volume. Also recover any active log data sets on the restored volume, by copying the dual copy of the active log data sets onto the restored volume. For more detailed instructions, see BSDS failure recovery on page 542.
• If one copy of the BSDS was deallocated, and logging continued with a single BSDS, a problem could arise. If both copies of the BSDS are maintained on a single volume, and the volume was restored, or if both BSDS copies were restored separately, DB2 might not detect the restoration. In that case, log records not noted in the BSDS would be unknown to the system.
3. Finds in the BSDS the log RBA of the last log record written before termination. The highest RBA field (as shown in the output of the print log map utility) is updated only when the following events occur:
• When DB2 is stopped normally (STOP DB2).
• When active log writing is switched from one data set to another.
• When DB2 has reached the end of the log output buffer. The size of this buffer is determined by the OUTPUT BUFFER field of installation panel DSNTIPL, described in Part 2 of DB2 Installation Guide.
4. Scans the log forward, beginning at the log RBA of the most recent log record, up to the last control interval (CI) written before termination.
5. Prepares to continue writing log records at the next CI on the log.
6. Issues message DSNJ099I, which identifies the log RBA at which logging continues for the current DB2 session. That message signals the end of the log initialization phase of restart.
The number of log records written between one checkpoint and the next is set when DB2 is installed; see the field CHECKPOINT FREQ of installation panel DSNTIPL, described in Part 2 of DB2 Installation Guide. You can temporarily modify the checkpoint frequency by using the command SET LOG. The value you specify persists while DB2 is active; on restart, DB2 uses the value that is specified in the CHECKPOINT FREQ field of installation panel DSNTIPL. See Chapter 2 of DB2 Command Reference for detailed information about this command.
4. Issues message DSNR004I, which summarizes the activity required at restart for outstanding units of recovery.
5. Issues message DSNR007I if any outstanding units of recovery are discovered. The message includes, for each outstanding unit of recovery, its connection type, connection ID, correlation ID, authorization ID, plan name, status, log RBA of the beginning of the unit of recovery (URID), and the date and time of its creation.

During phase 2, no database changes are made, nor are any units of recovery completed. DB2 determines what processing is required by phase 3 (forward log recovery) before access to databases is allowed.
• If the log RBA in the page header is less than that of the current log record, the change has not been made; DB2 makes the change to the page in the buffer pool.
5. Writes pages to disk as the need for buffers demands it.
6. Marks the completion of each unit of recovery processed. If restart processing terminates later, those units of recovery do not reappear in status lists.
7. Stops scanning at the current end of the log.
8. Writes to disk all modified buffers not yet written.
9. Issues message DSNR005I, which summarizes the number of remaining in-commit or indoubt units of recovery. There should not be any in-commit units of recovery, because all processing for these should have completed. The number of indoubt units of recovery should be equal to the number specified in the previous DSNR004I restart message.
10. Issues message DSNR007I (described in Phase 2: Current status rebuild on page 450), which identifies any outstanding unit of recovery that still must be processed.

If DB2 encounters a problem while applying log records to an object during phase 3, the affected pages are placed in the logical page list. Message DSNI001I is issued once per page set or partition, and message DSNB250E is issued once per page. Restart processing continues. DB2 issues status message DSNR031I periodically during this phase.
5. Finally, writes to disk all modified buffers that have not yet been written.
6. Issues message DSNR006I, which summarizes the number of remaining inflight, in-abort, and postponed-abort units of recovery. The number of inflight and in-abort units of recovery should be zero; the number of postponed-abort units of recovery might not be zero.
7. Marks the completion of each completed unit of recovery in the log so that, if restart processing terminates, the unit of recovery is not processed again at the next restart.
8. If necessary, reacquires write claims for the objects on behalf of the indoubt and postponed-abort units of recovery.
9. Takes a checkpoint after all database writes have been completed.

If DB2 encounters a problem while applying a log record to an object during phase 4, the affected pages are placed in the logical page list. Message DSNI001I is issued once per page set or partition, and message DSNB250E is issued once per page. Restart processing continues. DB2 issues status message DSNR031I periodically during this phase.
Restarting automatically
If you are running DB2 in a Sysplex, you can have the automatic restart function of z/OS automatically restart DB2 or IRLM after a failure. When DB2 or IRLM stops abnormally, z/OS determines whether z/OS itself failed too, and where DB2 or IRLM should be restarted. It then restarts DB2 or IRLM.

You must have DB2 installed with a command prefix scope of S to take advantage of automatic restart. See Part 2 of DB2 Installation Guide for instructions on specifying command scope.

Using an automatic restart policy: You control how automatic restart works by using automatic restart policies. When the automatic restart function is active, the default action is to restart the subsystems when they fail. If this default action is not what you want, you must create a policy that defines the action you want taken.

To create a policy, you need the element names of the DB2 and IRLM subsystems:
• For a non-data-sharing DB2, the element name is 'DB2$' concatenated with the subsystem name (DB2$DB2A, for example). To specify that a DB2 subsystem is not to be restarted after a failure, include RESTART_ATTEMPTS(0) in the policy for that DB2 element.
• For local mode IRLM, the element name is a concatenation of the IRLM subsystem name and the IRLM ID. For global mode IRLM, the element name is a concatenation of the IRLM data sharing group name, the IRLM subsystem name, and the IRLM ID.

For instructions on defining automatic restart policies, see z/OS MVS Setting Up a Sysplex.
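As an illustration only, a policy that prevents z/OS from restarting a failed non-data-sharing DB2 might be defined with the IXCMIAPU administrative data utility roughly as follows; the policy name, restart group name, and subsystem name DB2A are hypothetical, and z/OS MVS Setting Up a Sysplex remains the authoritative reference for the syntax:

   DATA TYPE(ARM)
   DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
     RESTART_GROUP(DB2GRP)
       ELEMENT(DB2$DB2A)
         RESTART_ATTEMPTS(0)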
The amount of backout processing to be postponed is determined by:
• The frequency of checkpoints
• The BACKOUT DURATION installation option
• The characteristics of the inflight and in-abort activity when the system went down

Selecting a limited backout affects log processing during restart. The backward processing of the log proceeds until the oldest inflight or in-abort UR with activity against the catalog or directory is backed out, and the requested number of log records have been processed.

Name the objects to defer when installing DB2. On installation panel DSNTIPS, you can use the following options:
• DEFER ALL defers restart log apply processing for all objects, including DB2 catalog and directory objects.
• DEFER list_of_objects defers restart processing only for the objects in the list. Alternatively, you can specify RESTART list_of_objects, which limits restart processing to the objects in the list.

DEFER does not affect processing of the log during restart. Therefore, even if you specify DEFER ALL, DB2 still processes the full range of the log for both the forward and backward log recovery phases of restart. However, logged operations are not applied to the data set.
In a data sharing environment, you can use the new LIGHT(YES) or LIGHT(NOINDOUBTS) parameter on the START DB2 command to quickly recover retained locks on a DB2 member. For more information, see DB2 Data Sharing: Planning and Administration.

Restart considerations for identity columns: Cold starts and conditional restarts that skip forward recovery can cause additional data inconsistency within identity columns and sequence objects. After such restarts, DB2 might assign duplicate identity column values and create gaps in identity column sequences. For information about how to correct this data inconsistency, see Recovering catalog and directory tables on page 502.

This section gives an overview of the available options for conditional restart. For more detail, see the information about the change log inventory utility (DSNJU003) in Part 3 of DB2 Utility Guide and Reference. For information about data sharing considerations, see Chapter 5 of DB2 Data Sharing: Planning and Administration.
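For example, to bring up a member in light mode as described at the beginning of this section, you might issue the following sketch of a command (the hyphen command prefix is an assumption; yours might differ):

   -START DB2 LIGHT(YES)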
1. Fix the error.
2. Restart DB2.
3. Re-issue the RECOVER POSTPONED command if automatic backout processing has not been specified.
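For example (the hyphen command prefix is an assumption):

   -RECOVER POSTPONED

This command completes the backout processing that was postponed at restart.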
If the RECOVER POSTPONED processing lasts for an extended period, the output includes DSNR047I messages, as shown in Figure 52, to help you monitor backout processing. These messages show the current RBA that is being processed and the target RBA.
Figure 53. Time line illustrating a commit that is coordinated with another subsystem. The time line runs from the old point of consistency (begin unit of recovery), through the participant's phase 1 and phase 2, to the new point of consistency (end unit of recovery). Data changed in period a or period b is backed out at restart; data changed in period c is indoubt at restart and is either backed out or committed; data changed in period d is committed at restart. Callouts 10 through 13 in the figure correspond to the numbered steps in the process description that follows.
8. The coordinator receives the notification.
9. The coordinator successfully completes its phase 1 processing. Now both subsystems agree to commit the data changes, because both have completed phase 1 and could recover from any failure. The coordinator records on its log the instant of commit: the irrevocable decision of the two subsystems to make the changes. The coordinator now begins phase 2 of the processing: the actual commitment.
10. It notifies the participant to begin its phase 2.
11. The participant logs the start of phase 2.
12. Phase 2 is successfully completed, which establishes a new point of consistency for the participant. The participant then notifies the coordinator that it is finished with phase 2.
13. The coordinator finishes its phase 2 processing. The data controlled by both subsystems is now consistent and available to other applications.

There are occasions when the coordinator invokes the participant when no participant resource has been altered since the completion of the last commit process. This can happen, for example, when SYNCPOINT is issued after performance of a series of SELECT statements or when end-of-task is reached immediately after SYNCPOINT has been issued. When this occurs, the participant performs both phases of the two-phase commit during the first commit phase and records that the user or job is read-only at the participant.
In-abort
The participant or coordinator failed after a unit of recovery began to be rolled back but before the process was complete (not shown in the figure). The operational system rolls back the changes; the failed system continues to back out the changes after restart.

Postponed abort
If the LIMIT BACKOUT installation option is set to YES or AUTO, any backout not completed during restart is postponed. The status of the incomplete URs is changed from inflight or in-abort to postponed abort.
Important
If the TCP/IP address that is associated with a DRDA server is subject to change, the domain name of each DRDA server must be defined in the CDB. This allows DB2 to recover from situations where the server's IP address changes prior to successful resynchronization.
17.24.36 STC00051 DSNL405I = THREAD G91E1E35.GFA7.00F962CC4611.0001=217 PLACED IN INDOUBT STATE BECAUSE OF COMMUNICATION FAILURE WITH COORDINATOR 9.30.30.53. INFORMATION RECORDED IN TRACE RECORD WITH IFCID=209 AND IFCID SEQUENCE NUMBER=00000001
After a failure, WebSphere Application Server is responsible for resolving indoubt transactions and for driving any failure recovery. To perform these functions, the server must be restarted and the recovery process initiated by an operator. You can also manually resolve indoubt transactions with the RECOVER INDOUBT command.

Recommendation: Let WebSphere Application Server resolve the indoubt transactions. Manually recover indoubt transactions only as a last resort to get DB2 up and running, and to release locks.

To manually resolve indoubt transactions:
1. Display indoubt threads from the resource manager console:
-DISPLAY THD(*) T(I) DETAIL
Note that in this example, only one indoubt thread exists. A transaction is identified by a transaction identifier, called an XID. The first four bytes of the XID (in this case, 7C7146CE) identify the transaction manager. Each XID is associated with a logical unit of work ID (LUWID) at the DB2 server. Note the LUWID that is associated with each transaction, for use in the recovery step.
2. Query the transaction manager to determine whether a commit or abort decision was made for each transaction.
3. Based on the decision recorded by the transaction manager, recover each indoubt thread from the resource manager console by either committing or aborting the transaction. Specify the LUWID from the DISPLAY THREAD command.
-RECOVER INDOUBT ACTION(COMMIT) LUWID(217)

or:

-RECOVER INDOUBT ACTION(ABORT) LUWID(217)
17.30.21 =RECOVER INDOUBT ACTION(COMMIT) LUWID(217)
17.30.22 STC00051 DSNV414I = THREAD
         LUWID=G91E1E35.GFA7.00F962CC4611.0001=217 COMMIT SCHEDULED
17.30.22 STC00051 DSN9022I = DSNVRI '-RECOVER INDOUBT' NORMAL COMPLETION
4. Display the indoubt threads again from the resource manager console to verify the outcome. Notice that the transaction now appears as a heuristically committed transaction.
5. If the transaction manager does not recover the indoubt transactions in a timely manner, reset the transactions from the resource manager console to purge the indoubt thread information. Specify the IP address and port from the DISPLAY THREAD command.
-RESET INDOUBT IPADDR(9.30.30.53:4007) FORCE
Important
In a manual recovery situation, you must determine whether the coordinator decided to commit or to abort, and ensure that the same decision is made at the participant. In the recovery process, DB2 attempts to automatically resynchronize with its participants. If you decide incorrectly what the coordinator's recovery action was, data is inconsistent at the coordinator and participant.

If you choose to resolve units of recovery manually, you must:
• Commit changes made by logical units of work that were committed by the other system
• Roll back changes made by logical units of work that were rolled back by the other system
The DISPLAY THREAD report tells you which systems still need indoubt resolution and provides the LUWIDs that you need to recover indoubt threads.

A thread that has completed phase 1 of commit and still has a connection with its coordinator is in the prepared state and is displayed as part of the display thread active report. If a prepared thread loses its connection with its coordinator, it enters the indoubt state and terminates its connections to any participants at that time. Any threads that are in the prepared or indoubt state when DB2 terminates are indoubt after DB2 restart. However, if the participant system is waiting for a commit or rollback decision from the coordinator, and the connection is still active, DB2 considers the thread active.

If a thread is indoubt at a participant, you can determine whether a commit or abort decision was made at the coordinator by issuing the DISPLAY THREAD command at the coordinator, as described previously. If an indoubt thread appears at one system and does not appear at the other system, the latter system backed out the thread, and the first system must therefore do the same. See Monitoring threads on page 383 for examples of output from the DISPLAY THREAD command.
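For example, to produce such a report at the coordinator, you might issue the following sketch of a command (the hyphen command prefix is an assumption):

   -DISPLAY THREAD(*) TYPE(INDOUBT)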
Detailed scenarios describing indoubt thread resolution can be found in Resolving indoubt threads on page 588.
You can also use a LUNAME or IP address with the RESET INDOUBT command. A new keyword (IPADDR) can be used in place of the LUNAME or LUWID keywords when the partner uses TCP/IP instead of SNA. The partner's resync port number is required when using the IP address; the DISPLAY THREAD output lists the resync port number. This allows you to specify a location instead of a particular thread. You can reset all the threads that are associated with that location by using the (*) option.
(Figure: a distributed unit of work coordinated by IMS/CICS at DB2A, with connections from DB2A to DB2B and to the servers DB2C, DB2D, and DB2E through AS1 and AS2.)
If the connection between DB2A and the coordinating IMS system goes down, the thread becomes an indoubt thread. However, DB2A's connections to the other systems are still waiting and are not considered indoubt. Wait for automatic recovery to occur to resolve the indoubt thread. When the thread is recovered, the unit of work commits or rolls back, and this action is propagated to the other systems that are involved in the unit of work.
Figure 55. Illustration of multi-site update. C is the coordinator; P1 and P2 are the participants.
The following process describes each action that Figure 55 illustrates.

Phase 1:
1. When an application commits a logical unit of work, it signals the DB2 coordinator. The coordinator starts the commit process by sending messages to the participants to determine whether they can commit.
2. A participant (Participant 1) that is willing to let the logical unit of work be committed, and which has updated recoverable resources, writes a log record. It then sends a request commit message to the coordinator and waits for the final decision (commit or roll back) from the coordinator. The logical unit of work at the participant is now in the prepared state.
If a participant (Participant 2) has not updated recoverable resources, it sends a forget message to the coordinator, releases its locks, and forgets about the logical unit of work. A read-only participant writes no log records. As far as this participant is concerned, it does not matter whether the logical unit of work ultimately gets rolled back or committed.
If a participant wants to have the logical unit of work rolled back, it writes a log record and sends a message to the coordinator. Because a message to roll back acts like a veto, the participant in this case knows that the logical unit of work will be rolled back by the coordinator. The participant does not need any more information from the coordinator and therefore rolls back the logical unit of work, releases its locks, and forgets about the logical unit of work. (This case is not illustrated in the figure.)

Phase 2:
3. After the coordinator receives request commit or forget messages from all its participants, it starts the second phase of the commit process. If at least one of the responses is request commit, the coordinator writes a log record and sends committed messages to all the participants who responded to the prepare
message with request commit. If neither the participants nor the coordinator have updated any recoverable resources, there is no second phase and no log records are written by the coordinator.
4. Each participant, after receiving a committed message, writes a log record, sends a response to the coordinator, and then commits the logical unit of work.
If any participant responds with a roll back message, the coordinator writes a log record and sends a roll back message to all participants. Each participant, after receiving a roll back message, writes a log record, sends an acknowledgment to the coordinator, and then rolls back the logical unit of work. (This case is not illustrated in the figure.)
5. The coordinator, after receiving the responses from all the participants that were sent a message in the second phase, writes an end record and forgets the logical unit of work.

Important: If you try to resolve any indoubt threads manually, you need to know whether the participants committed or rolled back their units of work. With this information, you can make an appropriate decision regarding processing at your site.
• Preparing to recover the entire DB2 subsystem to a prior point-in-time on page 486
• Preparing for disaster recovery on page 487
• Ensuring more effective recovery from inconsistency on page 490
• Running the RECOVER utility in parallel on page 492
• Reading the log without RECOVER on page 493
preventing simultaneous use of DB2 on the two systems. The bootstrap data set (BSDS) must be included as a protected resource, and the primary and alternate XRF processors must be included in the GRS ring.
before the failure occurred. You can do that because you have regularly followed a cycle of preparations for recovery. The most recent cycle began on Monday morning.

Monday morning: You start the DBASE1 database and make a full image copy of TSPACE1 and all indexes immediately. That gives you a starting point from which to recover. Use the COPY utility with the SHRLEVEL CHANGE option to improve availability. See Part 2 of DB2 Utility Guide and Reference for more information about the COPY utility.

Tuesday morning: You run COPY again. This time you make an incremental image copy to record only the changes made since the last full image copy, which you took on Monday. You also make a full index copy. TSPACE1 can be accessed and updated while the image copy is being made. For maximum efficiency, however, you schedule the image copies when online use is minimal.

Wednesday morning: You make another incremental image copy, and then create a full image copy by using the MERGECOPY utility to merge the incremental image copy with the full image copy.

Thursday and Friday mornings: You make another incremental image copy and a full index copy each morning.

Friday afternoon: An unsuccessful write operation occurs and you need to recover the table space. You run the RECOVER utility, as described in Part 2 of DB2 Utility Guide and Reference. The utility restores the table space from the full image copy made by MERGECOPY on Wednesday and the incremental image copies made on Thursday and Friday, and applies all changes made to the recovery log since Friday morning.

Later Friday afternoon: The RECOVER utility issues a message announcing that it has successfully recovered TSPACE1 to the current point in time.

This imaginary scenario is somewhat simplistic. You might not have taken daily incremental image copies on just the table space that failed. You might not ordinarily recover an entire table space. However, it illustrates this important point: with proper preparation, recovery from a failure is greatly simplified.
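The utility control statements for such a cycle might look like the following sketch; data set allocations are omitted, and the default output DD names are assumed (DBASE1.TSPACE1 is the table space from the scenario):

   Monday (full image copy, taken while updates continue):
     COPY TABLESPACE DBASE1.TSPACE1 FULL YES SHRLEVEL CHANGE

   Tuesday through Friday (incremental image copies):
     COPY TABLESPACE DBASE1.TSPACE1 FULL NO SHRLEVEL CHANGE

   Wednesday (merge the incrementals into a new full copy):
     MERGECOPY TABLESPACE DBASE1.TSPACE1 NEWCOPY YES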
• Apply all changes to the page set that are registered in the log, beginning at the log RBA of the most recent image copy (X'C0040' in Figure 56); that address is also stored in the SYSIBM.SYSCOPY catalog table.

If the log has been damaged or discarded, or if data has been changed erroneously and then committed, you can recover to a particular point-in-time by limiting the range of log records to be applied by the RECOVER utility.
Figure 56. Overview of DB2 recovery. The figure shows one complete cycle of image copies; the SYSIBM.SYSCOPY catalog table can record many complete cycles.
TABLESPACE, REORG TABLESPACE, REBUILD INDEX, or REORG INDEX utilities. Only structural modifications of the index are logged when these utilities are run, so there is not enough information in the log to recover the index.

Use the CHANGELIMIT option of the COPY utility to let DB2 determine when an image copy should be performed on a table space and whether a full or incremental copy should be taken. Use the CHANGELIMIT and REPORTONLY options together to let DB2 recommend what types of image copies to make. When you specify both CHANGELIMIT and REPORTONLY, DB2 makes no image copies. The CHANGELIMIT option does not apply to indexes.

In determining how many complete copy and log cycles to keep, you are guarding against damage to a volume containing an important image copy or a log data set. A retention period of at least two full cycles is recommended. For further security, keep records for three or more copy cycles.
Data                                       Update activity
Quota information for each sales person    Moderate
Customer descriptions                      Moderate
Parts inventory                            Moderate
Parts suppliers                            Light
Parts descriptions                         Light
Commission rates                           Light
If you do a full recovery, you do not need to recover the indexes unless they are damaged. If you recover to a prior point-in-time, you do need to recover the indexes. See Considerations for recovering indexes on page 477 for information about indexes.
can cause mistakes. The best way to minimize mistakes is to practice your recovery scenario until you know it well. The best time to practice is outside of regular working hours, when fewer key applications are running.

Minimize preventable outages: One aspect of your backup and recovery plan should be eliminating the need to recover whenever possible. One way to do that is to prevent outages caused by errors in DB2. Be sure to check available maintenance often, and apply fixes for problems that are likely to cause outages.

Determine the required backup frequency: Use your recovery criteria to decide how often to make copies of your databases. For example, if the maximum acceptable recovery time after you lose a volume of data is two hours, your volumes typically hold about 4 GB of data, and you can read about 2 GB of data per hour, then you should make copies after every 4 GB of data written. You can use the COPY option SHRLEVEL CHANGE or DFSMSdss concurrent copy to make copies while transactions and batch jobs are running. You should also make a copy after running jobs that make large numbers of changes. In addition to copying your table spaces, you should also consider copying your indexes.

You can make additional backup image copies from a primary image copy by using the COPYTOCOPY utility. This capability is especially useful when the backup image is copied to a remote site that is to be used as a disaster recovery site for the local site. Applications can run concurrently with the COPYTOCOPY utility. Only utilities that write to the SYSCOPY catalog table cannot run concurrently with COPYTOCOPY.

Minimize the elapsed time of RECOVER jobs: The RECOVER utility supports the recovery of a list of objects in parallel. For those objects in the list that can be processed independently, multiple subtasks are created to restore the image copies for the objects. The parallel function can be used for either disk or tape.

Minimize the elapsed time for copy jobs: You can use the COPY utility to make image copies of a list of objects in parallel. Image copies can be made to either disk or tape.

Determine the right characteristics for your logs:
• If you have enough disk space, use more and larger active logs. Recovery from active logs is quicker than from archive logs.
• To speed recovery from archive logs, consider archiving to disk.
• If you archive to tape, be sure you have enough tape drives that DB2 does not have to wait for an available drive on which to mount an archive tape during recovery.
• Make the buffer pools and the log buffers large enough to be efficient.

Minimize DB2 restart time: Many recovery processes involve restart of DB2. You need to minimize the time that DB2 shutdown and startup take. For non-data-sharing systems, you can limit the backout activity during DB2 system restart. You can postpone the backout of long-running URs until after the DB2 system is operational. See Deferring restart processing on page 454 for an explanation of how to use the installation options LIMIT BACKOUT and BACKOUT DURATION to determine what backout work will be delayed during restart processing.

These are some major factors that influence the speed of DB2 shutdown:
• Number of open DB2 data sets
During shutdown, DB2 must close and deallocate all data sets that it uses if the fast shutdown feature has been disabled. The default is to use the fast shutdown feature. Contact your IBM software support representative for information about enabling and disabling the fast shutdown feature. The maximum number of concurrently open data sets is determined by the DB2 subsystem parameter DSMAX. Closing and deallocation of data sets generally takes 0.1 to 0.3 seconds per data set. See Part 5 (Volume 2) of DB2 Administration Guide for information about how to choose an appropriate value for DSMAX.
Be aware that z/OS global resource serialization (GRS) can increase the time to close DB2 data sets. If your DB2 data sets are not shared among more than one z/OS system, set the GRS RESMIL parameter value to OFF or place the DB2 data sets in the SYSTEMS exclusion RNL. See z/OS MVS Planning: Global Resource Serialization for details.
• Active threads
DB2 cannot shut down until all threads have terminated. Issue the DB2 command DISPLAY THREAD to determine whether any threads are active while DB2 is shutting down. If possible, cancel those threads.
• Processing of SMF data
At DB2 shutdown, z/OS does SMF processing for all DB2 data sets that were opened since DB2 startup. You can reduce the time that this processing takes by setting the z/OS parameter DDCONS(NO).

These major factors influence the speed of DB2 startup:
• DB2 checkpoint interval
The DB2 checkpoint interval indicates the number of log records that DB2 writes between successive checkpoints. This value is controlled by the DB2 subsystem parameter CHKFREQ. The default of 500000 results in the fastest DB2 startup time in most cases. You can use the LOGLOAD or CHKTIME option of the SET LOG command to modify the CHKFREQ value dynamically without recycling DB2; see the sketch after this list. The value you specify depends on your restart requirements. See Dynamically changing the checkpoint frequency on page 437 for examples of how you might use these command options. See Chapter 2 of DB2 Command Reference for detailed information about the SET LOG command.
• Long-running units of work
DB2 rolls back uncommitted work during startup. The amount of time for this activity is roughly double the time that the unit of work was running before DB2 shut down. For example, if a unit of work runs for two hours before a DB2 abend, it takes at least four hours to restart DB2. Decide how long you can afford for startup, and avoid units of work that run for more than half that long.
You can use accounting traces to detect long-running units of work. For tasks that modify tables, divide the elapsed time by the number of commit operations to get the average time between commit operations. Add commit operations to applications for which this time is unacceptable.
Recommendation: To detect long-running units of recovery, enable the UR CHECK FREQ option of installation panel DSNTIPL. If long-running units of recovery are unavoidable, consider enabling the LIMIT BACKOUT option on installation panel DSNTIPL.
• Size of active logs
If you archive to tape, you can avoid unnecessary startup delays by making each active log big enough to hold the log records for a typical unit of work. This lessens the probability that DB2 will have to wait for tape mounts during startup. See Part 5 (Volume 2) of DB2 Administration Guide for more information about choosing the size of the active logs.
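As noted in the list above, you can adjust the checkpoint frequency dynamically with either form of the SET LOG command (the values shown are illustrative only, and the hyphen command prefix is an assumption):

   -SET LOG LOGLOAD(500000)
   -SET LOG CHKTIME(10)

The first form expresses the checkpoint interval in log records, the second in minutes. Either change lasts only while DB2 is active; at the next restart, DB2 reverts to the CHKFREQ subsystem parameter value.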
• Log ranges of the table space, from the SYSIBM.SYSLGRNX directory
• Archive log data sets, from the bootstrap data set
• The names of all members of a table space set

You can also use REPORT to obtain recovery information about the catalog and directory. Details about the REPORT utility and examples showing the results obtained when using the RECOVERY option are contained in Part 2 of DB2 Utility Guide and Reference.
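For example, a REPORT RECOVERY statement for the Version 8 sample employee table space might look like the following sketch (substitute your own database and table space names for the sample DSN8D81A.DSN8S81E):

   REPORT RECOVERY TABLESPACE DSN8D81A.DSN8S81E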
cannot rely on the copy only. Recovery requires reading the log up to a point of consistency, so you want to establish such a point as soon as possible.
QUIESCE writes changed pages from the page set to disk. The catalog table SYSIBM.SYSCOPY records the current RBA and the timestamp of the quiesce point. At that point, neither page set contains any uncommitted data. A row with ICTYPE Q is inserted into SYSCOPY for each table space quiesced. Page sets DSNDB06.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSUTILX are an exception: their information is written to the log. Indexes are quiesced automatically when you specify WRITE(YES) on the QUIESCE statement. A SYSIBM.SYSCOPY row with ICTYPE Q is inserted for indexes that have the COPY YES attribute.

QUIESCE allows concurrency with many other utilities; however, it does not allow concurrent updates until it has quiesced all specified page sets. Depending upon the amount of activity, that can take considerable time. Try to run QUIESCE when system activity is low.

Also, consider using the MODE(QUIESCE) option of the ARCHIVE LOG command when planning for offsite recovery. It creates a system-wide point of consistency, which can minimize the number of data inconsistencies when the archive log is used with the most current image copy during recovery. See Archiving the log on page 435 for more information about using the MODE(QUIESCE) option of the ARCHIVE LOG command.
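As a sketch, a quiesce point for the sample employee table space, followed by a system-wide point of consistency, might be established as follows (the object names are the Version 8 samples, and the hyphen command prefix is an assumption):

   QUIESCE TABLESPACE DSN8D81A.DSN8S81E WRITE(YES)

   -ARCHIVE LOG MODE(QUIESCE)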
3. Stop DB2 with the command STOP DB2 MODE(QUIESCE). DB2 does not actually stop until all currently executing programs have completed processing. Be sure to use MODE(QUIESCE); otherwise, I/O errors can occur when the steps listed in Performing fall back to a prior shutdown point on page 617 are used to restart DB2.
4. When DB2 has stopped, use access method services EXPORT to copy all BSDS and active log data sets. If you have dual BSDSs or dual active log data sets, export both copies of the BSDS and the logs.
5. Save all the data that has been copied or dumped, and protect it and the archive log data sets from damage.
Data sharing
In a data sharing environment, you can use the LIGHT(YES) or LIGHT(NOINDOUBTS) parameter to quickly bring up a DB2 member to recover retained locks. Restart light is not recommended for a restart in place; it is intended only for a cross-system restart on a system that does not have adequate capacity to sustain the DB2 and IRLM pair. Restart light can be used for normal restart and recovery. See Chapter 5 of DB2 Data Sharing: Planning and Administration for more details.

For data sharing, you need to consider whether you want the DB2 group to use light mode at the recovery site. A light start might be desirable if you have configured only minimal resources at the remote site. If this is the case, you might run a subset of the members permanently at the remote site. The other members are restarted and then shut down directly. The procedure for a light start at the remote site is:
1. Start the members that run permanently with the LIGHT(NO) option. This is the default.
2. Start the other members in light mode. The members started in light mode use a smaller storage footprint. After their restart processing completes, they shut down automatically. If ARM is in use, ARM does not automatically restart the members in light mode again.
3. Members started with LIGHT(NO) remain active and are available to run new work.

Several levels of preparation for disaster recovery exist:
• Prepare the recovery site to recover to a fixed point-in-time. For example, you could copy everything weekly with a DFSMSdss volume dump (logical) and manually send it to the recovery site, then restore the data there.
• For recovery through the last archive, copy and send the following objects to the recovery site as you produce them:
  – Image copies of all catalog, directory, and user page sets
  – Archive logs
  – Integrated catalog facility catalog EXPORT and list
  – BSDS lists
With this approach you can determine how often you want to make copies of essential recovery elements and send them to the recovery site. After you establish your copy procedure and have it operating, you must prepare to recover your data at the recovery site. See Remote site recovery from a disaster at the local site on page 563 for step-by-step instructions on the disaster recovery process.
• Use the log capture exit to capture log data in real time and send it to the recovery site. See Reading log records with the log capture exit routine on page 1138 and Log capture routines on page 1105.
Figure 57. Preparing for disaster recovery. The information you need to recover is contained in the copies of data (including the DB2 catalog and directory) and the archive log data sets.
option when you run COPY to make additional copies for disaster recovery. You can use those copies on any DB2 subsystem that you have installed using the RECOVERYSITE option.5 For information about making multiple image copies, see COPY and COPYTOCOPY in Part 2 of DB2 Utility Guide and Reference. Do not produce the copies by invoking COPY twice.
2. Catalog the image copies if you want to track them.
3. Create a QMF report or use SPUFI to issue a SELECT statement to list the contents of SYSCOPY.
4. Send the image copies and report to the recovery site.
5. Record this activity at the recovery site when the image copies and the report are received.
All table spaces should have valid image copies. Indexes can have valid image copies or they can be rebuilt from the table spaces.

• Archive logs
1. Make copies of the archive logs for the recovery site.
a. Use the ARCHIVE LOG command to archive all current DB2 active log data sets. For more ARCHIVE LOG command information, see Archiving the log on page 435.
Recommendation: When using dual logging, keep both copies of the archive log at the local site in case the first copy becomes unreadable. If the first copy is unreadable, DB2 requests the second copy. If the second copy is not available, the read fails. However, if you take precautions when using dual logging, such as making another copy of the first archive log, you can send the second copy to the recovery site. If recovery is necessary at the recovery site, specify YES for the READ COPY2 ARCHIVE field on installation panel DSNTIPO. Using this option causes DB2 to request the second archive log first.
b. Catalog the archive logs if you want to track them. You will probably need some way to track the volume serial numbers and data set names. One way of doing this is to catalog the archive logs to create a record of the necessary information. You could also create your own tracking method and do it manually.
2. Use the print log map utility to create a BSDS report.
3. Send the archive copy, the BSDS report, and any additional information about the archive log to the recovery site.
4. Record this activity at the recovery site when the archive copy and the report are received.

• Consistent system time
1. Choose a consistent system time for all DB2 subsystems. DB2 utilities, including the RECOVER utility, require system clocks that are consistent across all members of a DB2 data-sharing group. To prevent inconsistencies during data recovery, ensure that all system times are consistent with the system time at the failing site.
2. Ensure that the system clock remains consistent.
5. You can also use these copies on a subsystem installed with the LOCALSITE option if you run RECOVER with the RECOVERYSITE option. Or you can use copies prepared for the local site on a recovery site, if you run RECOVER with the option LOCALSITE.
Once you establish a consistent system time, do not alter the system clock. Any manual change in the system time (forward or backward) can affect how DB2 writes and processes image copies and log records.

• Integrated catalog facility catalog backups (see the sketch after this list)
1. Back up all DB2-related integrated catalog facility catalogs with the VSAM EXPORT command on a daily basis.
2. Synchronize the backups with the cataloging of image copies and archives.
3. Use the VSAM LISTCAT command to create a list of the DB2 entries.
4. Send the EXPORT backup and list to the recovery site.
5. Record this activity at the recovery site when the EXPORT backup and list are received.

• DB2 libraries
1. Back up DB2 libraries to tape when they are changed. Include the SMP/E, load, distribution, and target libraries, as well as the most recent user applications and DBRMs.
2. Back up the DSNTIJUZ job that builds the ZPARM and DECP modules.
3. Back up the data set allocations for the BSDS, logs, directory, and catalogs.
4. Document your backups.
5. Send backups and corresponding documentation to the recovery site.
6. Record activity at the recovery site when the library backup and documentation are received.

For disaster recovery to be successful, all copies and reports must be updated and sent to the recovery site regularly. Data will be up to date through the last archive sent. For disaster recovery startup procedures, see Remote site recovery from a disaster at the local site on page 563.
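The sketch referenced in the list above: a minimal access method services (IDCAMS) sequence for the daily integrated catalog facility catalog backup and entry list. The catalog name, the EXPDD output DD name, and the DSNC810 high-level qualifier are all hypothetical:

   EXPORT UCAT.DB2CAT -
     OUTFILE(EXPDD) TEMPORARY
   LISTCAT LEVEL(DSNC810) ALL

TEMPORARY exports a backup copy without deleting the source catalog; EXPDD must be a DD statement that points to the backup data set.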
Actions to take
To aid in successful recovery of inconsistent data:
• During the installation of, or migration to, Version 8, make a full image copy of the DB2 directory and catalog using installation job DSNTIJIC. See Part 2 of DB2 Installation Guide for DSNTIJIC information. If you did not do this during installation or migration, use the COPY utility, described in Part 2 of DB2 Utility Guide and Reference, to make a full image copy of the DB2 catalog and directory. If you do not, and you subsequently have a problem with inconsistent data in the DB2 catalog or directory, you cannot use the RECOVER utility to resolve the problem.
• Periodically make an image copy of the catalog, directory, and user databases. This minimizes the time the RECOVER utility requires to perform recovery. In addition, this increases the probability that the necessary archive log data sets will still be available. Keep two copies of each level of image copy data set; this reduces the risk involved if one image copy data set is lost or damaged. See Part 2 of DB2 Utility Guide and Reference for more information about using the COPY utility.
• Use dual logging for your active log, archive log, and bootstrap data sets. This increases the probability that you can recover from unexpected problems. It is especially useful in resolving data inconsistency problems. See Establishing the logging environment on page 429 for related dual logging information.
• Before using RECOVER, rename your data sets. If the image copy or log data sets are damaged, you can compound your problem by using the RECOVER utility. Therefore, before using RECOVER, rename your data sets by using one of the following methods:
  – Rename the data sets that contain the page sets you want to recover.
  – Copy your data sets by using DSN1COPY.
  – For user-defined data sets, use access method services to define a new data set with the original name.
The RECOVER utility applies log records to the new data set with the old name. Then, if a problem occurs during RECOVER utility processing, you have a copy (under a different name) of the data set you want to recover.
• Keep back-level image copy data sets. If you make an image copy of a page set that contains inconsistent data, the RECOVER utility cannot resolve the data inconsistency problem. However, you can use RECOVER TOCOPY or TOLOGPOINT to resolve the inconsistency if you have an older image copy of the page set that was taken before the problem occurred. You can also resolve the inconsistency problem by using a point-in-time recovery to avoid using the most recent image copy.
• Maintain consistency between related objects. A referential structure is a set of tables, including indexes, and their relationships. It must include at least one table, and for every table in the set, include all of the relationships in which the table participates, as well as all the tables to which it is related. To help maintain referential consistency, keep the number of table spaces in a table space set to a minimum, and avoid tables of different referential structures in the same table space. The TABLESPACESET option of the REPORT utility reports all members of a table space set defined by referential constraints; see the sketch after this list.
A referential structure must be kept consistent with respect to point-in-time recovery. Use the QUIESCE utility to establish a point of consistency for a table space set, to which the table space set can later be recovered without introducing referential constraint violations.
A base table space must be kept consistent with its associated LOB table spaces with respect to point-in-time recovery. Use the TABLESPACESET option of the REPORT utility to find all LOB table spaces associated with a base table space. Use the QUIESCE utility to establish a point of consistency, for a table space set, to which the table space set can later be recovered.
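The sketch referenced in the list above: a REPORT TABLESPACESET statement that lists all objects related to a given table space (the Version 8 sample object names are shown; substitute your own):

   REPORT TABLESPACESET TABLESPACE DSN8D81A.DSN8S81E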
Actions to avoid
• Do not discard archive logs you might need. The RECOVER utility might need an archive log to recover from an inconsistent data problem. If you have discarded it, you cannot use the RECOVER utility and must resolve the problem manually. For information about determining when you can discard archive logs, see Discarding archive log records on page 443.
• Do not make an image copy of a page set that contains inconsistent data. If you use the COPY utility to make an image copy of a page set that contains inconsistent data, the RECOVER utility cannot recover a problem involving that page set unless you have an older image copy of that page set that was taken before the problem occurred. You can run DSN1COPY with the CHECK option to determine whether intra-page data inconsistency problems exist on page sets before making image copies of them. If you are taking a copy of a catalog or directory page set, you can run DSN1CHKR, which verifies the integrity of the links, and the CHECK DATA utility, which checks the DB2 catalog (DSNDB06). For information, see DB2 Utility Guide and Reference.
• Do not use the TERM UTILITY command on utility jobs you want to restart. If an error occurs while a utility is running, the data on which the utility was operating might continue to be written beyond the commit point. If the utility is restarted later, processing resumes at the commit point or at the beginning of the current phase, depending on the restart parameter that was specified. If the utility stops while it has exclusive access to data, other applications cannot access that data. In this case, you might want to issue the TERM UTILITY command to terminate the utility and make the data available to other applications. However, use the TERM UTILITY command only if you cannot restart or do not need to restart the utility job.
When you issue the TERM UTILITY command, two different situations can occur:
  – If the utility is active, it terminates at its next commit point.
  – If the utility is stopped, it terminates immediately.
If you use the TERM UTILITY command to terminate a utility, the objects on which the utility was operating are left in an indeterminate state. Often, the same utility job cannot be rerun. The specific considerations vary for each utility, depending on the phase in process when you issue the command. For details, see Part 2 of DB2 Utility Guide and Reference.
dbname.tsname DSNUM 2, and so on (refer also to message DSNU512I). You can simplify this specification by using LISTDEF with PARTLEVEL if all parts must be recovered.
• Schedule concurrent RECOVER jobs that process different partitions. The degree of parallelism in this case is limited by contention for both the image copies and the required log data. Log data is read by concurrent jobs as follows:
  – Active logs and archive logs are read entirely in parallel.
  – A data set controlled by DFSMShsm is first recalled. It then resides on disk and is read in parallel.
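For example, a sketch of a parallel recovery of all partitions through a LISTDEF follows; the list name, object names, and degree of parallelism are hypothetical:

   LISTDEF PARTLIST INCLUDE TABLESPACE DBASE1.TSPACE1 PARTLEVEL
   RECOVER LIST PARTLIST PARALLEL(4)

PARTLEVEL expands the table space into one list item per partition, and PARALLEL(4) asks RECOVER to restore up to four objects at a time.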
DB2 makes a full image copy if the percent of changed pages is greater than or equal to the high CHANGELIMIT value. The CHANGELIMIT option does not apply to indexes.

If you want DB2 to recommend what image copies should be made but not make the image copies, use the CHANGELIMIT and REPORTONLY options of the COPY utility. If you specify the parameter DSNUM ALL with CHANGELIMIT and REPORTONLY, DB2 reports information for each partition of a partitioned table space or each piece of a nonpartitioned table space. You can add conditional code to your jobs so that an incremental or full image copy, or some other step, is performed depending on how much the table space has changed.

When you use the COPY utility with the CHANGELIMIT option to display image copy statistics, the COPY utility uses the following return codes to indicate the degree that a table space or list of table spaces has changed:

Code  Meaning
1     Successful; no CHANGELIMIT value is met. No image copy is recommended or taken.
2     Successful; the percent of changed pages is greater than the low CHANGELIMIT value and less than the high CHANGELIMIT value. An incremental image copy is recommended or taken.
3     Successful; the percent of changed pages is greater than or equal to the high CHANGELIMIT value. A full image copy is recommended or taken.
When you use generation data groups (GDGs) and need to make an incremental image copy, there are new steps you can take to prevent an empty image copy output data set from being created if no pages have been changed. You can perform one of the following actions:
1. Make a copy of your image copy step, but add the REPORTONLY and CHANGELIMIT options to the new COPY utility statement. The REPORTONLY keyword specifies that you only want image copy information displayed. Change the SYSCOPY DD card to DD DUMMY so that no output data set is allocated. Run this step to visually determine the change status of your table space.
2. Add step 1 before your existing image copy step, and add a JCL conditional statement to examine the return code and execute the image copy step if the table space changes meet either of the CHANGELIMIT values.

You can also use the COPY utility with the CHANGELIMIT option to determine whether any space map pages are broken, or to identify any other problems that might prevent an image copy from being taken, such as the object being in recover pending status. You need to correct these problems before you run the image copy job.

You can also make a full image copy when you run the LOAD or REORG utility. This technique is better than running the COPY utility after the LOAD or REORG utility because it decreases the time that your table spaces are unavailable. However, only the COPY utility makes image copies of indexes.
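For example, the report-only step described in item 1 might use a statement like the following sketch (object names and CHANGELIMIT values are hypothetical), with the SYSCOPY DD coded as DD DUMMY in that step's JCL:

   COPY TABLESPACE DBASE1.TSPACE1 CHANGELIMIT(1,10) REPORTONLY

With these values, return code 2 indicates that between 1 and 10 percent of the pages have changed (an incremental copy is recommended), and return code 3 indicates that 10 percent or more have changed (a full copy is recommended).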
Related information: For guidance in using COPY and MERGECOPY and making image copies during LOAD and REORG, see Part 2 of DB2 Utility Guide and Reference.
• If you made backup copies by using a method outside of DB2's control, such as with DSN1COPY or the DFSMSdss concurrent copy function, use the same method to restore the objects to a prior point-in-time. Then, if you wish to restore the objects to currency, run the RECOVER utility with the LOGONLY option.

The RECOVER utility performs these actions:
• Restores the most current full image copy
• Applies changes recorded in later incremental image copies of table spaces, if applicable, and applies later changes from the archive or active log

RECOVER can act on:
• A table space, or list of table spaces
• An index, or list of indexes
• A specific partition or data set within a table space
• A specific partition within an index space
• A mixed list of table spaces, indexes, partitions, and data sets
• A single page
• A page range within a table space that DB2 finds in error
• The catalog and directory

Typically, RECOVER restores an object to its current state by applying all image copies and log records. It can also restore to a prior state, which is one of the following points in time:
• A specified point on the log (use the TOLOGPOINT keyword)
• A particular image copy (use the TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY keywords)

The RECOVER utility can use image copies for the local site or the recovery site, regardless of where you invoke the utility. The RECOVER utility locates all full and incremental image copies. It first attempts to use the primary image copy data set. If an error is encountered (allocation, open, or I/O), RECOVER attempts to use the backup image copy. If DB2 encounters an error in the backup image copy, or if no backup image copy exists, RECOVER falls back to an earlier full copy and attempts to apply incremental copies and log records. If an earlier full copy is not available, RECOVER attempts to apply log records only.

For guidance in using RECOVER and REBUILD INDEX, see Part 2 of DB2 Utility Guide and Reference. Not every recovery operation requires RECOVER; see also:
Recovering error ranges for a work file table space on page 497
Recovering the work file database on page 497
Recovering data to a prior point of consistency on page 499

Important: Be very careful when using disk dump and restore for recovering a data set. Disk dump and restore can make one data set inconsistent with DB2 subsystem tables in some other data set. Use disk dump and restore only to restore the entire subsystem to a previous point of consistency, and prepare that point as described in the alternative in step 2 under Preparing to recover to a prior point of consistency on page 485.
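As a sketch, two typical RECOVER statements follow (the Version 8 sample object names are shown; substitute your own):

   RECOVER TABLESPACE DSN8D81A.DSN8S81E

recovers the table space to currency from image copies and the log, and

   RECOVER TABLESPACE DSN8D81A.DSN8S81E LOGONLY

applies log records only, after the underlying data sets have been restored by a method outside DB2's control, as described above.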
2. Use the DELETE and DEFINE functions of access method services to redefine a user work file on a different volume, and reconnect it to DB2.
3. Issue the following DB2 command:
-START DATABASE (DSNDB07)
3. Enter the following SQL statement to drop the table space with the problem:
DROP TABLESPACE DSNDB07.tsname;
4. Re-create the table space. You can use the same storage group, because the problem volume has been removed, or you can use an alternate.
CREATE TABLESPACE tsname IN DSNDB07 USING STOGROUP stogroup-name;
Also, DB2 always resets any error ranges when the work file table space is initialized, regardless of whether the disk error has really been corrected. Work file table spaces are initialized when:
v The work file table space is stopped and then started
v The work file database is stopped and then started, and the work file table space was not previously stopped
v DB2 is started and the work file table space was not previously stopped
If the error range is reset while the disk error still exists, and if DB2 has an I/O error when using the work file table space again, then DB2 sets the error range again.
DBID
   Database identifier
OBID
   Data object identifier
PSID
   Table space identifier
If you use a method outside of DB2's control, such as DSN1COPY, to restore a table space to a prior point-in-time, run the REPAIR utility with the LEVELID option to force DB2 to accept the down-level data. Then run the REORG utility on the table space to correct the DBD.
Recovering LOB table spaces: When you recover tables with LOB columns, recover the entire set of objects, including the base table space, the LOB table spaces, and index spaces for the auxiliary indexes. If you use the RECOVER utility to recover a LOB table space to a prior point of consistency, RECOVER might place the table space in a pending state. For more details about the particular pending states that the RECOVER utility sets, see Using RECOVER to restore data to a previous point-in-time on page 504.
Recovering table space sets: If you restore a page set to a prior state, restore all related tables and indexes to the same point to avoid inconsistencies. The table spaces that contain referentially related tables are called a table space set. Similarly, a LOB table space and its associated base table space are also part of a table space set. For example, in the DB2 sample application, a column in the EMPLOYEE table identifies the department to which each employee belongs. The departments are described by records in the DEPARTMENT table, which is in a different table space. If only that table space is restored to a prior state, a row in the unrestored EMPLOYEE table might then identify a department that does not exist in the restored DEPARTMENT table.
You can use the REPORT TABLESPACESET utility to determine all the page sets that belong to a single table space set, and then restore the related page sets; a sample control statement follows this discussion. However, if page sets are logically related outside of DB2 in application programs, you are responsible for identifying all of the related page sets on your own. To determine a valid quiesce point for the table space set, use the procedure for determining a RECOVER TOLOGPOINT value. See RECOVER in Part 2 of DB2 Utility Guide and Reference for more information.
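For example, a minimal control statement for this report might look like the following sketch, where the database and table space names are placeholders:

REPORT TABLESPACESET TABLESPACE dbname.tsname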
Tip: To determine the last value in an identity column, issue the MAX column function for ascending sequences of identity column values, or the MIN column function for descending sequences of identity column values. This method works only if the identity column does not use CYCLE.
v If you recover to a point-in-time at which the identity column was not yet defined, that identity column remains part of the table, but it no longer contains values. To regenerate the missing identity column values, perform the following steps:
1. Choose a starting value for the identity column with the following ALTER TABLE statement:
ALTER TABLE table-name
  ALTER COLUMN identity-column-name
  RESTART WITH starting-identity-value
2. Run the REORG utility to regenerate lost sequence values. If you do not choose a starting value, the REORG utility generates a sequence of identity column values that starts with the next value that DB2 would have assigned before the recovery. A table space that contains an identity column is set to REORG-pending (REORP) status if you recover the table space to a point-in-time before the identity column was defined. To access the recovered table, you need to remove this status. See Ensuring consistency on page 506 for information about how to remove REORP status from a table space.
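To make the earlier tip and the restart step concrete, here is a hedged SQL sketch; the ORDERS table, the ORDERNO column, and the restart value are invented for illustration, and the RESTART WITH value would typically be one greater than the MAX result for an ascending identity column:

-- Find the highest value that was assigned in an ascending identity column
SELECT MAX(ORDERNO) FROM ORDERS;
-- Restart the identity column just past that value
ALTER TABLE ORDERS
  ALTER COLUMN ORDERNO
  RESTART WITH 100001;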
Recovering indexes
When you recover indexes to a prior point of consistency, the following general rules apply:
v If an image copy exists for an index, use the RECOVER utility.
v If indexes do not have image copies, you must use REBUILD INDEX to re-create the indexes after the data has been recovered.
More specifically, you must consider how indexes on altered tables and indexes on tables in partitioned table spaces can restrict recovery.
Recovering indexes on altered tables: You cannot use the RECOVER utility to recover an index to a point-in-time that existed before you issued any of the following ALTER statements on that index. These statements place the index in REBUILD-pending (RBDP) status:
v ALTER INDEX PADDED
v ALTER INDEX NOT PADDED
v ALTER TABLE SET DATA TYPE on an indexed column for numeric data type changes
v ALTER TABLE ADD COLUMN and ALTER INDEX ADD COLUMN that are not issued in the same commit scope
When you recover a table space to a prior point-in-time and the table space uses indexes that were set to RBDP at any time after the recovery point, you must use the REBUILD INDEX utility to rebuild these indexes; a sketch follows. For more information about the RECOVER and REBUILD INDEX utilities, see Part 2 of DB2 Utility Guide and Reference. The output from the DISPLAY DATABASE RESTRICT command shows the restrictive states for index spaces. See DB2 Command Reference for descriptions of status codes displayed by the DISPLAY DATABASE command.
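For illustration only (the index and table space names are placeholders), either form of the utility statement rebuilds indexes that are in RBDP status:

-- Rebuild a single named index
REBUILD INDEX (creator.index-name)
-- Or rebuild all indexes on a table space
REBUILD INDEX (ALL) TABLESPACE dbname.tsname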
Recovering indexes on tables in partitioned table spaces: The partitioning of secondary indexes allows you to copy and recover at either the entire-index level or the individual-partition level. However, if you use COPY at the partition level, you must also use RECOVER at the partition level. If you use COPY at the partition level and then try to RECOVER the entire index, an error occurs. If COPY is performed at the index level, you can use RECOVER at either the index level or the partition level.
You cannot recover an index space to a point-in-time prior to rotating partitions. After you rotate a partition, you cannot recover the contents of that partition to a point-in-time before the rotation. If you recover to a point-in-time prior to the addition of a partition, DB2 cannot roll back the addition of that partition. In such a recovery, DB2 clears all data from the partition, and the partition remains part of the database.
2. Execute the following SELECT statements to find a list of table space and table definitions in the DB2 catalog:
Product-sensitive Programming Interface
SELECT NAME, DBID, PSID FROM SYSIBM.SYSTABLESPACE;
SELECT NAME, TSNAME, DBID, OBID FROM SYSIBM.SYSTABLES;
End of Product-sensitive Programming Interface
3. For each table space name in the catalog, look for a data set with a corresponding name. If a data set exists, take the following additional actions:
a. Find the field HPGOBID in the header page section of the DSN1PRNT output. This field contains the DBID and PSID for the table space. See whether the corresponding table space name in the DB2 catalog has the same DBID and PSID.
b. If the DBID and PSID do not match, execute DROP TABLESPACE and CREATE TABLESPACE to replace the incorrect table space entry in the DB2 catalog with a new entry. Be sure to make the new table space definition exactly like the old one. If the table space is segmented, SEGSIZE must be identical for the old and new definitions.
A LOB table space can be dropped only if it is empty (that is, it does not contain auxiliary tables). If a LOB table space is not empty, you must first drop the auxiliary table before you drop the LOB table space. To drop auxiliary tables, you can perform one of the following actions:
v Drop the base table.
v Delete all rows that reference LOBs from the base table, and then drop the auxiliary table.
c. Find the PGSOBD fields in the data page sections of the DSN1PRNT output. These fields contain the OBIDs for the tables in the table space. For each OBID that you find in the DSN1PRNT output, search the DB2 catalog for a table definition with the same OBID.
d. If any of the OBIDs in the table space do not have matching table definitions, examine the DSN1PRNT output to determine the structure of the tables that are associated with these OBIDs. If you find a table whose structure matches a definition in the catalog, but the OBIDs differ, proceed to the next step; the OBIDXLAT option of DSN1COPY corrects the mismatch. If you find a table for which no table definition exists in the catalog, re-create the table definition by using the CREATE TABLE statement. To re-create a table definition for a table that has had columns added, first use the original CREATE TABLE statement, and then use ALTER TABLE to add columns, which makes the table definition match the current structure of the table.
e. Use the DSN1COPY utility with the OBIDXLAT option to copy the existing data to the new tables in the table space, and to translate the DBID, PSID, and OBIDs.
If a table space name in the DB2 catalog does not have a data set with a corresponding name, one of the following events has probably occurred:
v The table space was dropped after the point-in-time to which you recovered. In this case, you cannot recover the table space. Execute DROP TABLESPACE to delete the entry from the DB2 catalog.
v The table space was defined with the DEFINE(NO) option. In this case, the data set will be allocated when you insert data into the table space.
4. For each data set in the DSN1PRNT output, look for a corresponding DB2 catalog entry. If no entry exists, follow the instructions in Recovery of an accidentally dropped table space on page 511 to re-create the entry in the DB2 catalog.
5. If you recover the catalog tables SYSSEQ and SYSSEQ2, identity columns and sequence objects are inconsistent. To avoid duplicate identity column values, recover all table spaces that contain tables that use identity columns to the point-in-time to which you recovered SYSSEQ and SYSSEQ2. To eliminate gaps between identity column values, use the ALTER TABLE statement. For sequence objects, use the ALTER SEQUENCE statement to eliminate these gaps. See Recovering tables that contain identity columns on page 500 for more information.
6. Ensure that the IPREFIX values of user table spaces and index spaces that were reorganized with the FASTSWITCH option match the IPREFIX value in the VSAM data set names that are associated with each table space or partition. If the IPREFIX that is recorded in the DB2 catalog and directory differs from the VSAM cluster names, you cannot access your data. To ensure that these IPREFIX values match, complete the following procedure:
a. Query the SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART catalog tables to determine the IPREFIX value that is recorded in the catalog for objects that were reorganized with the FASTSWITCH option.
b. Compare this IPREFIX value to the IPREFIX value in the VSAM data set name that is associated with the table space or index space.
c. When the IPREFIX values do not match for an object, rename the VSAM data set to specify the correct IPREFIX.
Example: If the catalog specifies an IPREFIX of J for an object, but the VSAM data set that corresponds to this object is catname.DSNDBC.dbname.spname.I0001.A001, you must rename this data set to catname.DSNDBC.dbname.spname.J0001.A001.
7. Delete the VSAM data sets that are associated with table spaces that were created with the DEFINE NO option and that reverted to an unallocated state. After you delete the VSAM data sets, you can insert or load rows into these unallocated table spaces to allocate new VSAM data sets. For more information about the DEFINE NO option of the CREATE TABLESPACE and CREATE INDEX SQL statements, see DB2 SQL Reference.
See Part 3 of DB2 Utility Guide and Reference for more information about DSN1COPY and DSN1PRNT.
If the image copy data set is cataloged when the image copy is made, the entry for that copy in SYSIBM.SYSCOPY does not record the volume serial numbers of the data set. You can identify that copy by its name, by using TOCOPY data-set-name. If the image copy data set was not cataloged when it was created, you can identify the copy by its volume serial identifier, by using TOVOLUME volser.
To improve the performance of the recovery, take a full image copy of the page sets, and then quiesce them using the QUIESCE utility; a sketch follows. This allows RECOVER TOLOGPOINT to recover the page sets to the quiesce point with minimal use of the log.
If you are working with partitioned table spaces, image copies that were taken prior to resetting the REORG-pending status of any partition of a partitioned table space cannot be used for recovery to a current point-in-time. Avoid performing a point-in-time recovery for a partitioned table space to a point-in-time that is after the REORG-pending status was set but before a rebalancing REORG was performed. See information about RECOVER in Part 2 of DB2 Utility Guide and Reference for details on determining an appropriate point-in-time and creating a new recovery point.
If you use the REORG TABLESPACE utility with the FASTSWITCH YES option on only some partitions of a table space, you must recover that table space at the partition level. When you take an image copy of such a table space, the COPY utility issues the informational message DSNU429I. For a complete description of DSNU429I, see DB2 Messages.
Authorization: Restrict the use of the TOCOPY, TOLOGPOINT, TOLASTCOPY, and TOLASTFULLCOPY options of the RECOVER utility to personnel with a thorough knowledge of the DB2 recovery environment.
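A sketch of the copy-then-quiesce sequence, with placeholder object names; whether you quiesce a single table space or the whole related set (the TABLESPACESET keyword) depends on your recovery plan:

COPY TABLESPACE dbname.tsname FULL YES SHRLEVEL REFERENCE
QUIESCE TABLESPACESET TABLESPACE dbname.tsname WRITE YES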
Ensuring consistency
RECOVER TOLOGPOINT and RECOVER TOCOPY can be used on a single:
v Partition of a partitioned table space
v Partition of a partitioning index space
v Data set of a simple table space
All page sets must be restored to the same level; otherwise the data is inconsistent.
A table space and all of its indexes (or a table space set and all related indexes) should be recovered in a single RECOVER utility statement that specifies TOLOGPOINT. The log point should identify a quiesce point or a common SHRLEVEL REFERENCE copy point. This action avoids placing indexes in the CHECK-pending or RECOVER-pending status. If the log point is not a common quiesce point or SHRLEVEL REFERENCE copy point for all objects, use the following procedure:
1. RECOVER table spaces to the log point.
2. Use concurrent REBUILD INDEX jobs to rebuild the indexes for each table space.
This procedure ensures that the table spaces and indexes are synchronized and eliminates the need to run the CHECK INDEX utility.
Point-in-time recovery can cause table spaces to be placed in CHECK-pending status if they have table check constraints or referential constraints defined on them. When recovering tables that are involved in a referential constraint, you should recover all the table spaces that are involved in a constraint: the table space set. To avoid setting CHECK-pending status, you must perform both of the following tasks:
v Recover the table space set to a quiesce point. If you do not recover each table space of the table space set to the same quiesce point, and if any of the table spaces are part of a referential integrity structure:
- All dependent table spaces that are recovered are placed in CHECK-pending status with the scope of the whole table space.
- All table spaces that are dependent on the table spaces that are recovered are placed in CHECK-pending status with the scope of the specific dependent tables.
v Establish a quiesce point or take an image copy after you add check constraints or referential constraints to a table. If you recover each table space of a table space set to the same quiesce point, but referential constraints were defined after the quiesce point, the CHECK-pending status is set for the table space containing the table with the referential constraint.
For information about resetting the CHECK-pending status, see Violation of referential constraint recovery on page 558.
The RECOVER utility sets various states on table spaces. The following point-in-time recoveries set such states on table spaces:
v When the RECOVER utility finds an invalid column during the LOGAPPLY phase on a LOB table space, it sets the table space to auxiliary-warning (AUXW) status.
v When you recover a LOB table space to a point-in-time that is not a quiesce point or to an image copy that is produced with SHRLEVEL CHANGE, the LOB table space is placed in CHECK-pending (CHKP) status.
v When you recover only the LOB table space to any previous point-in-time, the base table space is placed in auxiliary CHECK-pending (ACHKP) status, and the index space containing an index on the auxiliary table is placed in REBUILD-pending (RBDP) status.
v When you recover only the base table space to a point-in-time, the base table space is placed in CHECK-pending (CHKP) status.
v When you recover only the index space containing an index on the auxiliary table to a point-in-time, the index space is placed in CHECK-pending (CHKP) status.
v When you recover partitioned table spaces with the RECOVER utility to a point-in-time that is prior to a partition rebalance, all partitions that were rebalanced are placed in REORG-pending (REORP) status.
v When you recover a table space to a point-in-time prior to when an identity column was defined with the RECOVER utility, the table space is placed in REORG-pending (REORP) status.
v If you do not recover all objects that are members of a referential set to a prior point-in-time with the RECOVER utility, or if you recover a referential set to a point-in-time that is not a point of consistency with the RECOVER utility, all dependent table spaces are placed in CHECK-pending (CHKP) status.
See Part 2 of DB2 Utility Guide and Reference for detailed information about recovering a table space that contains LOB data.
To remove the various pending states from a table space, run the following utilities in this order (a sketch follows this list):
1. Use the REORG TABLESPACE utility to remove the REORP status.
2. If the table space is in auxiliary CHECK-pending status:
v Use CHECK LOB for all associated LOB table spaces.
v Use CHECK INDEX for all indexes on the LOB table spaces.
3. Use the CHECK DATA utility to remove the CHECK-pending status.
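As an illustrative sequence only, assuming a base table space with a single LOB table space (all names are placeholders), the control statements might be:

REORG TABLESPACE dbname.tsname
CHECK LOB TABLESPACE dbname.lobts
CHECK INDEX (ALL) TABLESPACE dbname.lobts
CHECK DATA TABLESPACE dbname.tsname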
Compressed data: Use caution when recovering a single data set of a non-partitioned page set to a prior point-in-time. If the data set that is recovered was compressed with a different dictionary from the rest of the page set, you can no longer read the data. For important information about loading and compressing data, see the description of LOAD in Part 2 of DB2 Utility Guide and Reference.
The RECOVER utility does not reset the values that DB2 generates for identity columns.
Important: The RECOVER utility does not back out CREATE or ALTER statements. After a recovery to a previous point-in-time, all previous alterations to identity column attributes remain unchanged. Because these alterations are not backed out, a recovery to a point-in-time might put identity column tables out of sync with the SYSIBM.SYSSEQUENCES catalog table. You might need to modify identity column attributes after a recovery to resynchronize identity columns with the catalog table.
DBID
   Database identifier
OBID
   Data object identifier
PSID
   Table space identifier
Recommendation: To prepare for this procedure, run regular catalog reports that include a list of all OBIDs in the subsystem. In addition, create catalog reports that list dependencies on the table (such as referential constraints, indexes, and so on). After a table is dropped, this information disappears from the catalog.
If an OBID has been reused by DB2, you must run DSN1COPY to translate the OBIDs of the objects in the data set. However, this event is unlikely; DB2 reuses OBIDs only when no image copies exist that contain data from that table.
Important: When you recover a dropped object, you essentially recover a table space to a point-in-time. If you want to use log records to perform forward recovery on the table space, you need the IBM DB2 UDB Log Analysis Tool for z/OS.
a. For the data set that contains the dropped table, run DSN1PRNT with the FORMAT and NODATA options. Record the HPGOBID field in the header page and the PGSOBD field from the data records in the data pages. For the auxiliary table of a LOB table space, record the HPGROID field in the header page instead of the PGSOBD field in the data pages.
v Field HPGOBID is 4 bytes long and contains the DBID in the first 2 bytes and the PSID in the last 2 bytes.
v Field HPGROID (for LOB table spaces) contains the OBID of the table. A LOB table space can contain only one table.
v Field PGSOBD (for non-LOB table spaces) is 2 bytes long and contains the OBID of the table. If your table space contains more than one table, check for all OBIDs; in other words, search for all different PGSOBD fields. You need to specify all OBIDs from the data set as input for the DSN1COPY utility.
b. Convert the hex values in the identifier fields to decimal so that they can be used as input for the DSN1COPY utility.
2. Use the SQL CREATE statement to re-create the table and any indexes on the table.
3. To allow DSN1COPY to access the DB2 data set, stop the table space using the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
Stopping the table space is necessary to ensure that all changes are written out and that no data updates occur during this procedure.
4. Find the OBID for the table that you created in step 2 by querying the SYSIBM.SYSTABLES catalog table. The following statement returns the object ID (OBID) for the table:
Product-sensitive Programming Interface
SELECT NAME, OBID FROM SYSIBM.SYSTABLES WHERE NAME='table_name' AND CREATOR='creator_name';
End of Product-sensitive Programming Interface
This value is returned in decimal format, which is the format that you need for DSN1COPY.
5. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to copy data from the dropped table into the original data set. You must specify a previous full image copy data set, inline copy data set, or DSN1COPY file as the input data set SYSUT1 in the control statement. Specify each of the input records in the following order in the SYSXLAT file to perform the OBID translations (a sample job appears after this procedure):
a. The DBID that you recorded in step 1 as both the translation source and the translation target
b. The PSID that you recorded in step 1 as both the translation source and the translation target
c. The original OBID that you recorded in step 1 for the dropped table as the translation source, and the OBID that you recorded in step 4 as the translation target
d. The OBIDs of all other tables in the table space, which you recorded in step 1, as both the translation sources and the translation targets
Be sure that you have named the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes. For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference.
6. Use DSN1COPY with the OBIDXLAT and RESET options to apply any incremental image copies. You must apply these incremental copies in sequence, and specify the same SYSXLAT records that step 5 specifies.
Important: After you complete this step, you have essentially recovered the table space to the point-in-time of the last image copy. If you want to use log records to perform forward recovery on the table space, you must use the IBM DB2 UDB Log Analysis Tool for z/OS at this point in the recovery procedure. For more information about point-in-time recovery, see Recovering data to a prior point of consistency on page 499.
7. Start the table space for normal use by using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
8. Rebuild all indexes on the table space.
9. Execute SELECT statements on the previously dropped table to verify that you can access the table. Include all LOB columns in these queries.
10. Make a full image copy of the table space. See Copying page sets and data sets on page 493 for more information about the COPY utility.
11. Re-create the objects that are dependent on the recovered table. As explained in Implications of dropping a table on page 89, when a table is dropped, objects that are dependent on that table (synonyms, views, indexes, referential constraints, and so on) are also dropped. (Aliases are not dropped.) Privileges that are granted for that table are also dropped. Catalog reports or a copy of the catalog taken prior to the DROP TABLE can make this task easier.
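The following JCL is a hedged sketch of the DSN1COPY job in steps 5 and 6. All data set names are placeholders, and the SYSXLAT values (a DBID of 260, a PSID of 2, an old OBID of 3, and a new OBID of 18) are invented to show the source,target record format described above:

//DSN1CPY  EXEC PGM=DSN1COPY,PARM='FULLCOPY,OBIDXLAT,RESET'
//SYSPRINT DD SYSOUT=*
//* SYSUT1: the full image copy that contains the dropped table
//SYSUT1   DD DSN=hlq.FULLCOPY.DSNAME,DISP=OLD
//* SYSUT2: the VSAM data set of the table space
//SYSUT2   DD DSN=catname.DSNDBC.dbname.tsname.I0001.A001,DISP=OLD
//* SYSXLAT: one source,target pair per record: DBID, PSID, then OBIDs
//SYSXLAT  DD *
260,260
2,2
3,18
/*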
2. Rename the data set that contains the dropped table space by using the IDCAMS ALTER command. Rename both the CLUSTER and DATA portions of the data set with a name that begins with the integrated catalog facility catalog name or alias.
3. Redefine the original DB2 VSAM data sets. Use the access method services LISTCAT command to obtain a list of data set attributes. The data set attributes on the redefined data sets must be the same as they were on the original data sets.
4. Use SQL CREATE statements to re-create the table space, tables, and any indexes on the tables.
5. To allow DSN1COPY to access the DB2 data sets, stop the table space using the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
This step is necessary to prevent updates to the table space during this procedure in the event that the table space has been left open.
6. Find the target identifiers of the objects that you created in step 4 (which consist of a PSID for the table space and the OBIDs for the tables within that table space) by querying the SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES catalog tables.
Product-sensitive Programming Interface
The following statement returns the object ID for a table space; this is the PSID.
SELECT DBID, PSID FROM SYSIBM.SYSTABLESPACE
  WHERE NAME='tablespace_name' AND DBNAME='database_name'
  AND CREATOR='creator_name';
End of Product-sensitive Programming Interface
These values are returned in decimal format, which is the format that you need for DSN1COPY.
7. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to copy the data from the renamed VSAM data set that contains the dropped table space to the newly defined VSAM data set. Specify the VSAM data set that contains data from the dropped table space as the input data set SYSUT1 in the control statement. Specify each of the input records in the following order in the SYSXLAT file to perform the OBID translations:
a. The DBID that you recorded in step 1 on page 511 as both the translation source and the translation target
b. The PSID that you recorded in step 1 on page 511 as the translation source, and the PSID that you recorded in step 6 as the translation target
c. The original OBIDs that you recorded in step 1 on page 511 as the translation sources, and the OBIDs that you recorded in step 6 as the translation targets
Be sure that you have named the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes.
For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference.
8. Use DSN1COPY with the OBIDXLAT and RESET options to apply any incremental image copies to the recovered table space. You must apply these incremental copies in sequence, and specify the same SYSXLAT records that step 7 specifies.
Important: After you complete this step, you have essentially recovered the table space to the point-in-time of the last image copy. If you want to use log records to perform forward recovery on the table space, you must now use the IBM DB2 UDB Log Analysis Tool for z/OS. For more information about point-in-time recovery, see Recovering data to a prior point of consistency on page 499.
9. Start the table space for normal use by using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
10. Rebuild all indexes on the table space.
11. Execute SELECT statements on each table in the recovered table space to verify the recovery. Include all LOB columns in these queries.
12. Make a full image copy of the table space. See Copying page sets and data sets on page 493 for more information about the COPY utility.
13. Re-create the objects that are dependent on the table. See step 11 of Recovery of an accidentally dropped table on page 509 for more information.
DB2-managed data sets: The following procedure recovers dropped table spaces that are part of the catalog. If a consistent full image copy or DSN1COPY file is available, you can use DSN1COPY to recover a dropped table space. To recover a dropped table space, complete the following procedure:
1. Find the original DBID for the database, the PSID for the table space, and the OBIDs of all tables that are contained in the dropped table space. For information about how to do this, see step 1 of Recovery of an accidentally dropped table on page 509.
2. Re-create the table space and all tables. This re-creation can be difficult when any of the following conditions are true:
v A table definition is not available
v A table is no longer required
If you cannot re-create a table, you must use a dummy table to take its place. A dummy table is a table with an arbitrary structure of columns that you delete after you recover the dropped table space.
Attention: When you use a dummy table, you lose all data from the dropped table that you do not re-create.
3. Re-create auxiliary tables and indexes if a LOB table space has been dropped.
4. To allow DSN1COPY to access the DB2 data set, stop the table space with the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
5. Find the new PSID and OBIDs by querying the DB2 catalog, as described in step 6 of User-managed data sets on page 511. (Find the OBID of the dummy table that you created in step 2 if you could not re-create a table.)
6. Run DSN1COPY with the OBIDXLAT and RESET options to translate the OBIDs and to copy data from a previous full image copy data set, inline copy data set, or DSN1COPY file. Use one of these copies as the input data set
SYSUT1 in the control statement. Specify each of the input records in the following order in the SYSXLAT file to perform the OBID translations:
a. The DBID that you recorded in step 1 as both the translation source and the translation target
b. The PSID that you recorded in step 1 as the translation source, and the PSID that you recorded in step 5 as the translation target
c. The OBIDs that you recorded in step 1 as the translation sources, and the OBIDs that you recorded in step 5 as the translation targets
Be sure that you name the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes. For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference.
7. Use DSN1COPY with the OBIDXLAT and RESET options to apply any incremental image copies to the recovered table space. You must apply these incremental copies in sequence, and specify the same SYSXLAT records that step 6 specifies.
Important: After you complete this step, you have essentially recovered the table space to the point-in-time of the last image copy. If you want to use log records to perform forward recovery on the table space, you must use the IBM DB2 UDB Log Analysis Tool for z/OS at this point in the recovery procedure. For more information about point-in-time recovery, see Recovering data to a prior point of consistency on page 499.
8. Start the table space for normal use by using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
9. Drop all dummy tables. The row structure does not match the table definition, and this mismatch makes the data in these tables unusable.
10. Reorganize the table space to remove all rows from dropped tables.
11. Rebuild all indexes on the table space.
12. Execute SELECT statements on each table in the recovered table space to verify the recovery. Include all LOB columns in these queries.
13. Make a full image copy of the table space. See Copying page sets and data sets on page 493 for more information about the COPY utility.
14. Re-create the objects that are dependent on the table. See step 11 on page 511 of Recovery of an accidentally dropped table on page 509 for more information.
want to keep. If you foresee resetting the DB2 subsystem to its status at any earlier date, you also need the image copies and log data sets that allow you to recover to that date.
If the most recent image copy of an object is damaged, the RECOVER utility seeks a backup copy. If no backup copy is available, or if the backup is lost or damaged, RECOVER uses a previous image copy. It continues searching until it finds an undamaged image copy or until no more image copies exist. This process has important implications for keeping archive log data sets. At the very least, you need all log records since the most recent image copy; to protect against loss of data from damage to that copy, you need log records as far back as the earliest image copy that you keep.
2. Run the MODIFY utility for each table space whose old image copies you want to discard, using the date of the earliest image copy that you will keep. For example, you could enter:
MODIFY RECOVERY TABLESPACE dbname.tsname DELETE DATE date
The DELETE DATE option removes records that were written earlier than the given date. You can also use DELETE AGE to remove records that are older than a specified number of days; a sketch follows. You can delete SYSCOPY records for a single partition by naming it with the DSNUM keyword. That option does not delete SYSLGRNX records and does not delete SYSCOPY records that are later than the earliest point to which you can recover the entire table space. Thus, you can still recover by partition after that point. You cannot run the MODIFY utility on a table space that is in RECOVER-pending status.
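For illustration (the object names, partition number, and retention period are placeholders), the age-based form might look like this:

MODIFY RECOVERY TABLESPACE dbname.tsname DELETE AGE 90
MODIFY RECOVERY TABLESPACE dbname.tsname DSNUM 1 DELETE AGE 90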
The online utilities BACKUP SYSTEM and RESTORE SYSTEM are described in Part 2 of DB2 Utility Guide and Reference, and the stand-alone utility DSNJU003 is described in Part 3 of DB2 Utility Guide and Reference.
BACKUP SYSTEM online utility: This utility invokes z/OS Version 1 Release 5 DFSMShsm services to take volume copies of the data. All DB2 data sets that are to be copied (and then recovered) must be managed by SMS. This utility works in both data-sharing and non-data-sharing DB2 systems.
The BACKUP SYSTEM utility requires z/OS Version 1 Release 5 or later data structures called copy pools. Because these data structures are implemented in z/OS, DB2 cannot generate copy pools automatically. Before you invoke the BACKUP SYSTEM utility, copy pools must be allocated in z/OS. For a more detailed description of how DB2 uses copy pools, see Using DFSMShsm with the BACKUP SYSTEM utility on page 37. For information about how to allocate a copy pool in z/OS, see z/OS DFSMSdfp Storage Administration Reference.
The BACKUP SYSTEM utility invokes the DFSMShsm fast replication function to take volume-level backups using FlashCopy. You can use the BACKUP SYSTEM utility to ease the task of managing data recovery. Choose either DATA ONLY or FULL, depending on your recovery needs; choose FULL if you want to back up both your DB2 data and your DB2 logs.
Because the BACKUP SYSTEM utility does not quiesce transactions, the system-level backup is a fuzzy copy, which might not contain committed data and might contain uncommitted data. The RESTORE SYSTEM utility uses these backups to restore databases to a given point-in-time. The DB2 data is made consistent by DB2 restart processing and the RESTORE SYSTEM utility. DB2 restart processing determines which transactions were active at the given recovery point, and writes the compensation log records for any uncommitted work that needs to be backed out. The RESTORE SYSTEM utility restores the database copy pool, and then applies the log records to bring the DB2 data to consistency. During the LOGAPPLY phase of the RESTORE SYSTEM utility, log records are applied to redo the committed work that is missing from the system-level backup, and log records are applied to undo the uncommitted work that might have been contained in the system-level backup.
Using data-only system backups: The BACKUP SYSTEM DATA ONLY utility control statement creates system-level backups that contain only databases. The RESTORE SYSTEM utility uses these backups to restore databases to a given point-in-time. In this type of recovery, you lose only a few seconds of data, or none, based on the given recovery point. However, recovery time varies and might be extended due to the processing of the DB2 logs during DB2 restart and during the LOGAPPLY phase of the RESTORE SYSTEM utility. The number of logs to process depends on the amount of activity on your DB2 system between the time of the system-level backup and the given recovery point. For more information about recovering the DB2 system to a given point-in-time, see Recovery to a given point-in-time on page 517.
Using full system backups: The BACKUP SYSTEM FULL utility control statement creates system-level backups that contain both logs and databases. With these
copies, you can recover your DB2 system to the point-in-time of a backup by using normal DB2 restart recovery, or to a given point-in-time by using the RESTORE SYSTEM utility.
To recover your DB2 system to the point-in-time of a backup by using normal DB2 restart recovery, stop DB2, and then restore both the database and log copy pools outside of DB2 by using DFSMShsm FRRECOV COPYPOOL (cpname) GENERATION (gen). After you successfully restart DB2, your DB2 system has been recovered to a point of consistency based on the time of the backup. For more information about this type of recovery, see Recovery to the point-in-time of a backup on page 518.
The RESTORE SYSTEM utility uses full system backup copies as input, but the utility does not restore the volumes in the log copy pool. If your situation requires that the volumes in the log copy pool be restored, you must restore the log copy pool before restarting DB2. For example, you should restore the log copy pool when you are using a full system-level backup at your remote site for disaster recovery.
When you recover your DB2 system to the point-in-time of a full system backup, you could lose a few hours of data, because you are restoring your DB2 data and logs to the time of the backup. However, recovery time is brief, because DB2 restart processing and the RESTORE SYSTEM utility need to process a minimal number of logs. If you choose not to restore the log copy pool prior to running the RESTORE SYSTEM utility, the recovery is equivalent to the recovery of a system with data-only backups. In this type of recovery, you lose only a few seconds of data, or none, based on the given recovery point. However, recovery time varies and might be extended due to the processing of the DB2 logs during DB2 restart and during the LOGAPPLY phase of the RESTORE SYSTEM utility. The number of logs to process depends on the amount of activity on your DB2 system between the time of the system-level backup and the given recovery point. For more information, see Recovery to a given point-in-time.
RESTORE SYSTEM online utility: This utility invokes z/OS Version 1 Release 5 or later DFSMShsm services to recover a DB2 system to a prior point-in-time, by restoring the databases in the volume copies that the BACKUP SYSTEM utility provided. After restoring the data, this utility can then recover the system to a given point-in-time. The SYSPITR option of DSNJU003 CRESTART allows you to create a conditional restart control record (CRCR) to truncate logs for system point-in-time recovery in preparation for running the RESTORE SYSTEM utility. If you restore the system data by some other means, use the RESTORE SYSTEM utility with the LOGONLY option to skip the restore phase, and use the CRCR to apply the logs to the restored databases.
Recovery to a given point-in-time: You can recover your DB2 system to a given point-in-time by using the RESTORE SYSTEM utility. The RESTORE SYSTEM utility uses system-level backups that contain only DB2 objects to restore your DB2 system to a given point-in-time.
Backup: You must perform the following procedure before an event occurs that creates a need to recover your DB2 system:
1. Use the BACKUP SYSTEM utility to create system-level backups. Choose either DATA ONLY or FULL, depending on your recovery needs. Choose FULL if you want to back up both your DB2 data and your DB2 logs.
Recovery: If you have performed the appropriate backup procedures, you can recover your DB2 system to a given point-in-time by using the RESTORE SYSTEM utility:
1. Issue the STOP DB2 command to stop the DB2 subsystem. If your system is a data sharing group, stop all members of the group.
2. If the backup is a full system backup, you might need to restore the log copy pool outside of DB2 by using DFSMShsm FRRECOV COPYPOOL (cpname) GENERATION (gen). For data-only system backups, skip this step.
3. Run DSNJU003 (the change log inventory utility) with the CRESTART SYSPITR option, specifying the log truncation point that corresponds to the point-in-time to which you want to recover the system; a sketch follows this procedure. For data sharing systems, run DSNJU003 on all active members of the data sharing group, and specify the same LRSN truncation point for each member. If the point-in-time that you specify for recovery is prior to the oldest system backup, you must manually restore the volume backup from tape.
4. For data sharing systems, delete all CF structures that the data sharing group owns.
5. Issue the START DB2 command to restart your DB2 system. For data sharing systems, start all active members.
6. Run the RESTORE SYSTEM utility. If you manually restored the backup, use the LOGONLY option of the RESTORE SYSTEM utility to apply the current logs.
7. Stop and restart DB2 again to remove ACCESS(MAINT) status.
After the RESTORE SYSTEM utility completes successfully, your DB2 system has been recovered to the given point-in-time with consistency.
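A hedged sketch of the DSNJU003 job for step 3 follows; the library and BSDS data set names are placeholders, and the SYSPITR value is an invented log truncation point (an RBA for non-data-sharing systems, an LRSN for data sharing members):

//SYSPITR  EXEC PGM=DSNJU003
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=prefix.BSDS01,DISP=OLD
//SYSUT2   DD DSN=prefix.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 CRESTART CREATE,SYSPITR=00007425468
/*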
Recovery: If you have performed the appropriate backup procedures, you can use the following procedure to recover your DB2 system to the point-in-time of a backup:
1. Stop the DB2 subsystem. For data sharing systems, stop all members of the group.
2. Use the DFSMShsm command FRRECOV * COPYPOOL(cpname) GENERATION(gen) to restore the database and log copy pools that the BACKUP SYSTEM utility creates. In this command, cpname specifies the name of the copy pool, and gen specifies which version of the copy pool is restored.
3. For data sharing systems, delete all CF structures that are owned by this group.
4. Start DB2. For data sharing systems, start all active members.
5. For data sharing systems, execute the GRECP and LPL recovery, which recovers the changed data that was stored in the coupling facility at the time of the backup.
Using backups from FlashCopy: To recover a DB2 system to the point-in-time of a backup that FlashCopy creates, you must perform the following backup and recovery procedures. For more information about the FlashCopy function, see z/OS DFSMS Advanced Copy Services.
Backup: You must perform the following procedure before an event occurs that creates a need to recover your DB2 system:
1. Issue the DB2 command SET LOG SUSPEND to suspend logging and update activity, and to quiesce 32K page writes and data set extensions. For data sharing systems, issue the command to each member of the group. (A sketch of the suspend/resume sequence appears after these procedures.)
2. Use the FlashCopy function to copy all DB2 volumes. Include any ICF catalogs that are used by DB2, as well as active logs and BSDSs.
3. Issue the DB2 command SET LOG RESUME to resume normal DB2 update activity.
To save disk space, you can use DFSMSdss to dump the disk copies that you just created to a lower-cost medium, such as tape.
Recovery: If you have performed the appropriate backup procedures, you can use the following procedure to recover your DB2 system to the point-in-time of a backup:
1. Stop the DB2 subsystem. For data sharing systems, stop all members of the group.
2. Use DFSMSdss RESTORE to restore the FlashCopy data sets to disk. See z/OS DFSMSdss Storage Administration Reference for more information.
3. For data sharing systems, delete all CF structures that are owned by this group.
4. Start DB2. For data sharing systems, start all active members.
5. For data sharing systems, execute the GRECP and LPL recovery, which recovers the changed data that was stored in the coupling facility at the time of the backup.
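A minimal sketch of the suspend/resume sequence from the backup procedure above; the middle line is a placeholder for your FlashCopy step, not a command:

-SET LOG SUSPEND
   (copy all DB2 volumes with FlashCopy, including ICF catalogs, active logs, and BSDSs)
-SET LOG RESUME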
Recovering with BACKUP SYSTEM: The following procedures use the BACKUP SYSTEM and RESTORE SYSTEM utilities to recover your DB2 system:
Preparation:
1. Use BACKUP SYSTEM FULL to take the system backup.
2. Transport the system backups to the remote site.
Recovery:
1. Run the DSNJU003 utility using the control statement CRESTART CREATE, SYSPITR=log-truncation-point, where log-truncation-point is the RBA or LRSN of the point to which you want to recover.
2. Start DB2.
3. Run the RESTORE SYSTEM utility using the control statement RESTORE SYSTEM to recover to the current time (or to the time of the last log transmission from the local site).
Recovering without BACKUP SYSTEM: Use the following procedures to recover your DB2 system if you do not use the BACKUP SYSTEM utility to produce backups:
Preparation:
1. Issue the DB2 command SET LOG SUSPEND to suspend logging and update activity, and to quiesce 32K page writes and data set extensions. For data sharing systems, issue the command to each member of the data sharing group.
2. Use the FlashCopy function to copy all DB2 volumes. Include any ICF catalogs that are used by DB2, as well as active logs and BSDSs.
3. Issue the DB2 command SET LOG RESUME to resume normal DB2 update activity.
4. Use DFSMSdss to dump the disk copies that you just created to tape, and then transport this tape to the remote site. You can also use other methods to transmit the copies that you make to the remote site.
Recovery:
1. Use DFSMSdss to restore the FlashCopy data sets to disk.
2. Run the DSNJU003 utility using the control statement CRESTART CREATE, SYSPITR=log-truncation-point, where log-truncation-point is the RBA or LRSN of the point to which you want to recover.
3. Start DB2.
4. Run the RESTORE SYSTEM utility using the control statement RESTORE SYSTEM LOGONLY to recover to the current time (or to the time of the last log transmission from the local site).
System action: If the IRLM abends, DB2 terminates. If the IRLM waits or loops, terminate the IRLM, and DB2 terminates automatically.
System programmer action: None.
Operator action:
v Start the IRLM if you did not set it for automatic start when you installed DB2. (For instructions on starting the IRLM, see Starting the IRLM on page 381.)
v Start DB2. (For instructions, see Starting DB2 on page 324.)
v Issue the command /START SUBSYS ssid to connect IMS to DB2.
v Issue the command DSNC STRT to connect CICS to DB2. (See Connecting from CICS on page 388.)
1. Ensure that there are no incomplete I/O requests against the failing device. One way to do this is to force the volume offline by issuing the following z/OS command:
VARY xxx,OFFLINE,FORCE
where xxx is the unit address. To check disk status, you can issue:
D U,DASD,ONLINE
The following console message is displayed after you have forced a volume offline:
UNIT 4B1 TYPE 3390 STATUS O-BOX VOLSER XTRA02 VOLSTATE PRIV/RSDNT
The disk unit is now available for service.
If you previously set the I/O timing interval for the device class, the I/O timing facility terminates all incomplete requests at the end of the specified time interval, and you can proceed to the next step without varying the volume offline. You can set the I/O timing interval either through the IECIOSxx z/OS parameter library member or by issuing the following z/OS command:
SETIOS MIH,DEV=devnum,IOTIMING=mm:ss
For more information about the I/O timing facility, see z/OS MVS Initialization and Tuning Reference and z/OS MVS System Commands.
2. An authorized operator issues the following command to stop all databases and table spaces that reside on the affected volume:
-STOP DATABASE(database-name) SPACENAM(space-name)
If the disk unit must be disconnected for repair, all databases and table spaces on all volumes in the disk unit must be stopped. 3. Select a spare disk pack and use ICKDSF to initialize from scratch a disk unit with a different unit address (yyy) and the same volser.
//jobname  JOB ...
//ICKDSF   EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REVAL UNITADDRESS(yyy) VERIFY(volser)
/*
If you are initializing a 3380 or 3390 volume, use REVAL with the VERIFY parameter to ensure that you are initializing the volume you want, or to revalidate the volume's home address and record 0. Details are provided in Device Support Facilities User's Guide and Reference. Alternatively, use ISMF to initialize the disk unit.
4. Issue the following z/OS console command, where yyy is the new unit address:
VARY yyy,ONLINE
6. Issue the following command to start all the appropriate databases and table spaces that had been stopped previously:
-START DATABASE(database-name) SPACENAM(space-name)
7. Delete all table spaces (VSAM linear data sets) from the ICF catalog by issuing the following access method services command for each one of them:
DELETE catnam.DSNDBC.dbname.tsname.y0001.A00x CLUSTER NOSCRATCH
where y can be either I or J. Access method services commands are described in detail in z/OS DFSMS Access Method Services for Catalogs.
8. For user-managed table spaces, define the VSAM cluster and data components for the new volume by issuing the access method services DEFINE CLUSTER command with the data set name:
catnam.DSNDBx.dbname.tsname.y0001.A001
where y can be either I or J, and x is C (for VSAM clusters) or D (for VSAM data components). This data set is the same as the one defined in step 7; a sample DEFINE CLUSTER command appears after this procedure. Detailed requirements for user-managed data sets are described in Requirements for your own data sets on page 38. For a user-defined table space, the new data set must be defined before an attempt to recover it. Table spaces that are defined in storage groups can be recovered without prior definition.
9. Recover the table spaces by using the RECOVER utility. Additional information and procedures for recovering data can be found in Recovering page sets and data sets on page 495.
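For step 8, a DEFINE CLUSTER command might look like the following sketch. The catalog name, object names, volume, and space values are placeholders; DB2 linear data sets typically require LINEAR, REUSE, and SHAREOPTIONS(3 3):

DEFINE CLUSTER -
  (NAME(catnam.DSNDBC.dbname.tsname.I0001.A001) -
   LINEAR -
   REUSE -
   VOLUMES(vvvvvv) -
   CYLINDERS(10 10) -
   SHAREOPTIONS(3 3)) -
  DATA -
  (NAME(catnam.DSNDBD.dbname.tsname.I0001.A001))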
3. Execute RECOVER TOLOGPOINT with the RBA that you found, specifying the names of all related table spaces. Recovering all related table spaces to the same quiesce point prevents violations of referential constraints.
Procedure 2: If you have not established a quiesce point. If you use this procedure, you will lose any updates to the database that occurred after the last checkpoint before the application error occurred.
1. Run the DSN1LOGP stand-alone utility on the log scope that is available at DB2 restart, using the SUMMARY(ONLY) option. For instructions on running DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.
2. Determine the RBA of the most recent checkpoint before the first bad update occurred, from one of the following sources:
v Message DSNR003I on the operator's console. It looks (in part) like this:
DSNR003I RESTART ..... PRIOR CHECKPOINT RBA=000007425468
The required RBA in this example is X'7425468'. This technique works only if there have been no checkpoints since the application introduced the bad updates.
v Output from the print log map utility. You must know the time that the first bad update occurred. Find the last BEGIN CHECKPOINT RBA before that time.
3. Run DSN1LOGP again, using SUMMARY(ONLY), and specify the checkpoint RBA as the value of RBASTART; a sketch follows this procedure. The output lists the work in the recovery log, including information about the most recent complete checkpoint, a summary of all processing that occurred, and an identification of the databases affected by each active user. Sample output is shown in Figure 66 on page 606.
4. One of the messages in the output (identified as DSN1151I or DSN1162I) describes the unit of recovery in which the error was made. To find the unit of recovery, use your knowledge of the time the program was run (START DATE= and TIME=), the connection ID (CONNID=), authorization ID (AUTHID=), and plan name (PLAN=). In that message, find the starting RBA as the value of START=.
5. Execute RECOVER TOLOGPOINT with the starting RBA that you found in the previous step.
6. Recover any related table spaces or indexes to the same point in time.
Operator action: None.
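A hedged sketch of the DSN1LOGP job for step 3; the load library and BSDS names are placeholders, and the RBASTART value repeats the checkpoint RBA from the example message above:

//LOGPRT   EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//BSDS     DD DSN=prefix.BSDS01,DISP=SHR
//SYSIN    DD *
 RBASTART (7425468) SUMMARY (ONLY)
/*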
This message cannot be sent if the failure prevents messages from being displayed.
v DB2 does not send any messages related to this problem to the z/OS console.
System action:
v DB2 detects that IMS has failed.
v DB2 either backs out or commits work in process.
v DB2 saves indoubt units of recovery. (These must be resolved at reconnection time.)
System programmer action: None.
Operator action:
1. Use normal IMS restart procedures, which include starting IMS by issuing the z/OS START IMS command.
2. The following results occur:
v All DL/I and DB2 updates that have not been committed are backed out.
v IMS is automatically reconnected to DB2.
v IMS passes the recovery information for each entry to DB2 through the IMS attachment facility. (IMS indicates whether to commit or roll back.)
v DB2 resolves the entries according to the IMS instructions.
Problem 1
There are unresolved indoubt units of recovery. When IMS connects to DB2, DB2 has one or more indoubt units of recovery that have not been resolved.
Symptom: If DB2 has indoubt units of recovery that IMS did not resolve, the following message is issued at the IMS master terminal:
DSNM004I RESOLVE INDOUBT ENTRY(S) ARE OUTSTANDING FOR SUBSYSTEM xxxx
When this message is issued, IMS was either cold started or it was started with an incomplete log tape. This message could also be issued if DB2 or IMS had an abend due to a software error or other subsystem failure.
System action:
v The connection remains active.
v IMS applications can still access DB2 databases.
v Some DB2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues can start to back up. If the IMS queues fill to capacity, IMS terminates. Therefore, users must be aware of this potential difficulty and must monitor IMS until the indoubt units of work are fully resolved.
System programmer action:
1. Force the IMS log closed using /DBR FEOV, and then archive the IMS log. Use the DFSERA10 utility to print the records from the previous IMS log tape for the last transaction processed in each dependent region. Record the PSB and the commit status from the X'37' log record containing the recovery ID.
2. Run the DL/I batch job to back out each PSB involved that has not reached a commit point. The process might take some time because transactions are still being processed. It might also lock up a number of records, which could impact the rest of the processing and the rest of the message queues.
3. Enter the DB2 command DISPLAY THREAD (imsid) TYPE (INDOUBT).
4. Compare the NIDs (IMSID + OASN in hexadecimal) displayed in the DISPLAY THREAD messages with the OASNs (4 bytes decimal) shown in the DFSERA10 output. Decide whether to commit or roll back.
5. Use DFSERA10 to print the X'5501FE' records from the current IMS log tape. Every unit of recovery that undergoes indoubt resolution processing is recorded; each record with an 'IDBT' code is still indoubt. Note the correlation ID and the recovery ID, which are used in step 6.
6. Enter the following DB2 command, choosing to commit or roll back, and specifying the correlation ID:
-RECOVER INDOUBT (imsid) ACTION(COMMIT|ABORT) NID (nid)
If the command is rejected because additional network IDs are associated with the thread, use the same command again, substituting the recovery ID for the network ID. (For a description of the OASN and the NID, see Duplicate correlation IDs on page 396.)
Operator action: Contact the system programmer.
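As an illustration of step 6, suppose the IMS ID is IMSA and the DISPLAY THREAD output shows an indoubt entry with NID=IMSA.0000000600000000; all values here are hypothetical, and you should take the exact NID, including its padding, from the DISPLAY THREAD output, where it is formed from the IMS ID and the OASN in hexadecimal. A commit decision would then be entered as:

-RECOVER INDOUBT (IMSA) ACTION(COMMIT) NID (IMSA.0000000600000000)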
Problem 2
Committed units of recovery should be aborted. At the time IMS connects to DB2, DB2 has committed one or more indoubt units of recovery that IMS says should be rolled back. Symptom: By DB2 restart time, DB2 has committed and rolled back those units of recovery about which DB2 was not indoubt. DB2 records those decisions, and at connect time, verifies that they are consistent with the IMS/TM decisions. An inconsistency can occur when the DB2 RECOVER INDOUBT command is used before IMS attempted to reconnect. If this happens, the following message is issued at the IMS master terminal:
DSNM005I IMS/TM RESOLVE INDOUBT PROTOCOL PROBLEM WITH SUBSYSTEM xxxx
Because DB2 tells IMS to retain the inconsistent entries, the following message is issued when the resolution attempt ends:
DFS3602I xxxx SUBSYSTEM RESOLVE-INDOUBT FAILURE, RC=yyyy
System action:
v The connection between DB2 and IMS remains active.
v DB2 and IMS continue processing.
v No DB2 locks are held.
v No units of work are in an incomplete state.
System programmer action: Do not use the DB2 command RECOVER INDOUBT. The problem is that DB2 was not indoubt but should have been. Database updates have most likely been committed on one side (IMS or DB2) and rolled back on the other side. (For a description of the OASN and the NID, see Duplicate correlation IDs on page 396.)
1. Enter the IMS command /DISPLAY OASN SUBSYS DB2 to display the IMS list of units of recovery that need to be resolved. The /DISPLAY OASN SUBSYS DB2 command produces the OASNs in a decimal format, not a hexadecimal format.
2. Issue the IMS command /CHANGE SUBSYS DB2 RESET to reset all the entries in the list. (No entries are passed to DB2.)
3. Use DFSERA10 to print the log records recorded at the time of failure and during restart. Look at the X'37', X'56', and X'5501FE' records at reconnect time. Notify the IBM support center about the problem.
4. Determine what the inconsistent unit of recovery was doing by using the log information, and manually make the DL/I and DB2 databases consistent.
Operator action: None.
Problem 1
An IMS application abends. Symptom: The following messages appear at the IMS master terminal and at the LTERM that entered the transaction involved:
DFS555 - TRAN tttttttt ABEND (SYSIDssss); MSG IN PROCESS: xxxx (up to 78 bytes of data) timestamp DFS555A - SUBSYSTEM xxxx OASN yyyyyyyyyyyyyyyy STATUS COMMIT|ABORT
System action: The failing unit of recovery is backed out by both DL/I and DB2. The connection between IMS and DB2 remains active.
System programmer action: None.
Operator action: If you think the problem was caused by a user error, refer to Part 2 of DB2 Application Programming and SQL Guide. For procedures to diagnose DB2 problems, rather than user errors, refer to Part 3 of DB2 Diagnosis Guide and Reference. If necessary, contact the IBM support center for assistance.
Problem 2
DB2 has failed or is not running. Symptom: One of the following status situations exists:
v If you specified error option Q, the program terminates with a U3051 user abend completion code.
v If you specified error option A, the program terminates with a U3047 user abend completion code.
In both cases, the master terminal receives IMS message DFS554, and the terminal involved also receives message DFS555.
System action: None.
System programmer action: None.
Operator action:
1. Restart DB2.
2. Follow the standard IMS procedures for handling application abends.
tranid represents any abending CICS transaction, and abcode is the abend code.
System action: The failing unit of recovery is backed out in both CICS and DB2. The connection remains active.
System programmer action: None.
Operator action:
1. For information about the CICS attachment facility abend, refer to Part 2 of DB2 Messages.
2. For an AEY9 abend, start the CICS attachment facility.
3. For an ASP7 abend, determine why the CICS SYNCPOINT was unsuccessful.
4. For other abends, see DB2 Diagnosis Guide and Reference or CICS Transaction Server for z/OS Problem Determination Guide for diagnostic procedures.
v CICS waits or loops. Because DB2 cannot detect a wait or loop in CICS, you must find the origin of the wait or the loop. The origin can be in CICS, CICS applications, or the CICS attachment facility. For diagnostic procedures for waits and loops, see Part 2 of DB2 Diagnosis Guide and Reference.
v CICS abends. CICS issues messages indicating that an abend occurred and requests abend dumps of the CICS region. See CICS Transaction Server for z/OS Problem Determination Guide for more information. If threads are connected to DB2 when CICS terminates, DB2 issues message DSN3201I. The message indicates that DB2 end-of-task (EOT) routines have been run to clean up and disconnect any connected threads.
System action: DB2 performs each of the following actions:
v Detects the CICS failure.
v Backs out inflight work.
v Saves indoubt units of recovery to be resolved when CICS is reconnected.
Operator action:
1. Correct the problem that caused CICS to terminate abnormally.
2. Do an emergency restart of CICS. The emergency restart performs each of the following actions:
v Backs out inflight transactions that changed CICS resources
v Remembers the transactions with access to DB2 that might be indoubt
3. Start the CICS attachment facility by entering the appropriate command for your release of CICS. See Connecting from CICS on page 388. The CICS attachment facility performs the following actions:
v Initializes and reconnects to DB2.
v Requests information from DB2 about the indoubt units of recovery and passes the information to CICS.
v Allows CICS to resolve the indoubt units of recovery.
2. The CICS attachment facility initializes and reconnects to DB2.
3. The CICS attachment facility requests information about the indoubt units of recovery and passes the information to CICS.
4. CICS resolves the indoubt units of recovery.
The messages that are associated with this processing are DSN2034I, DSN2035I, and DSN2036I.
CICS retains details of indoubt units of recovery that were not resolved during connection startup. An entry is purged when it no longer appears on the list presented by DB2 or when DB2 resolves it.
System programmer action: Any indoubt unit of recovery that CICS cannot resolve must be resolved manually by using DB2 commands. This manual procedure should be used rarely within an installation, because it is required only where operational errors or software problems have prevented automatic resolution. Any inconsistencies found during indoubt resolution must be investigated.
To recover an indoubt unit of recovery, follow these steps:
Step 1: Obtain a list of the indoubt units of recovery from DB2: Issue the following command:
-DISPLAY THREAD (connection-name) TYPE (INDOUBT)
The corr_id (correlation ID) for CICS Transaction Server for z/OS 1.1 and previous releases of CICS consists of:
Byte 1       Connection type: G = group, P = pool
Byte 2       Thread type: T = transaction (TYPE=ENTRY), G = group, C = command (TYPE=COMD)
Bytes 3, 4   Thread number
Bytes 5 - 8  Transaction ID
The corr_id (correlation ID) for CICS Transaction Server for z/OS 1.2 and subsequent releases of CICS consists of:
Bytes 1 - 4   Thread type: COMD, POOL, or ENTR
Bytes 5 - 8   Transaction ID
Bytes 9 - 12  Unique thread number
Two threads can sometimes have the same correlation ID when the connection has been broken several times and the indoubt units of recovery have not been resolved. In this case, the network ID (NID) must be used instead of the correlation ID to uniquely identify indoubt units of recovery.
The network ID consists of the CICS connection name and a unique number provided by CICS at the time the syncpoint log entries are written. This unique number is an 8-byte store clock value that is stored in records written to both the CICS system log and to the DB2 log at syncpoint processing time. This value is referred to in CICS as the recovery token.
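As a worked example (the transaction name PAYR and the numbers are hypothetical): under CICS Transaction Server for z/OS 1.2 or later, a correlation ID of

ENTRPAYR0005

identifies an entry thread (ENTR) for transaction PAYR with unique thread number 0005. Under CICS Transaction Server for z/OS 1.1 or earlier, a similar thread might appear as

PT05PAYR

that is, a pool connection (P), a transaction thread type (T), thread number 05, and transaction ID PAYR.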
Step 2: Scan the CICS log for entries related to a particular unit of recovery: To do this, search the CICS log for a PREPARE record (JCRSTRID X'F959') for the task-related installation, in which the recovery token field (JCSRMTKN) equals the value obtained from the network ID. The network ID is supplied by DB2 in the DISPLAY THREAD command output.
Locating the prepare log record in the CICS log for the indoubt unit of recovery provides the CICS task number. All other entries on the log for this CICS task can be located by using this number. The CICS journal print utility DFHJUP can be used when scanning the log. See CICS Transaction Server for z/OS Operations and Utilities Guide for details on how to use this program.
Step 3: Scan the DB2 log for entries related to a particular unit of recovery: To do this, scan the DB2 log to locate the End Phase 1 record with the required network ID. Then use the URID from this record to obtain the rest of the log records for this unit of recovery. When scanning the DB2 log, note that the DB2 startup message DSNJ099I provides the start log RBA for this session. The DSN1LOGP utility can be used for this purpose. See Part 3 of DB2 Utility Guide and Reference for details on how to use this program.
Step 4: If needed, do indoubt resolution in DB2: DB2 can be directed to take the recovery action for an indoubt unit of recovery by using the DB2 RECOVER INDOUBT command. Where the correlation ID is unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name) ACTION (COMMIT/ABORT) ID (correlation-id)
If the transaction is a pool thread, use the value of the correlation ID (corr_id) returned by DISPLAY THREAD for thread#.tranid in the command RECOVER INDOUBT. In this case, the first letter of the correlation ID is P. The transaction ID is in characters five through eight of the correlation ID. If the transaction is assigned to a group (group is a result of using an entry thread), use thread#.groupname instead of thread#.tranid. In this case, the first letter of the correlation ID is a G and the group name is in characters five through eight of the correlation ID. groupname is the first transaction listed in a group. Where the correlation ID is not unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name) ACTION (COMMIT/ABORT) NID (network-id)
When two threads have the same correlation ID, use the NID keyword instead of the ID keyword. The NID value uniquely identifies the work unit. To recover all threads associated with connection-name, omit the ID option. The command results in either of the following messages, which indicate whether the thread is committed or rolled back:
DSNV414I - THREAD thread#.tranid COMMIT SCHEDULED DSNV415I - THREAD thread#.tranid ABORT SCHEDULED
When performing indoubt resolution, note that CICS and the attachment facility are not aware of the commands to DB2 to commit or abort indoubt units of recovery, because only DB2 resources are affected. However, CICS keeps details
about the indoubt threads that could not be resolved by DB2. This information is purged either when the list presented is empty, or when the list does not include a unit of recovery that CICS remembers. Operator action: Contact the system programmer.
System action:
v IMS and CICS continue.
v In-process CICS and IMS applications receive SQLCODE -923 (SQLSTATE '57015') when accessing DB2.
In most cases, if an IMS or CICS application program is running when a -923 SQLCODE is returned, an abend occurs. This is because the application program generally terminates when it receives a -923 SQLCODE. To terminate, some synchronization processing occurs (such as a commit). If DB2 is not operational when synchronization processing is attempted by an application program, the application program abends. In-process applications can abend with an abend code X'04F'.
v New IMS applications are handled according to the error options. For option R, SQL return code -923 is sent to the application, and IMS pseudoabends. For option Q, the message is enqueued again and the transaction abends. For option A, the message is discarded and the transaction abends.
v New CICS applications are handled as follows: If the CICS attachment facility has not terminated, the application receives a -923 SQLCODE. If the CICS attachment facility has terminated, the application abends (code AEY9).
Operator action:
1. Restart DB2 by issuing the command START DB2.
2. Reestablish the IMS connection by issuing the IMS command /START SUBSYS DB2.
3. Reestablish the CICS connection by issuing the CICS attachment facility command DSNC STRT.
System programmer action:
1. Use the IFCEREP1 service aid to obtain a listing of the SYS1.LOGREC data set containing the SYS1.LOGREC entries. (For more information about this service aid, refer to the z/OS diagnostic techniques publication about SYS1.LOGREC.)
2. If the subsystem termination was due to a failure, collect material to determine the reason for failure (console log, dump, and SYS1.LOGREC).
If the active log fills to capacity after DB2 has switched to single logging, the following message is issued and an offload is started. The DB2 subsystem then halts processing until an offload has completed.
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
Corrective action is required before DB2 can continue processing.
System action: DB2 waits for an available active log data set before resuming normal DB2 processing. Normal shutdown, with either QUIESCE or FORCE, is not possible because the shutdown sequence requires log space to record system events related to shutdown (for example, checkpoint records).
Operator action: Make sure that offload is not waiting for a tape drive. If it is, mount a tape so that DB2 can continue the offload. If you are uncertain about what is causing the problem, enter the following command:
-ARCHIVE LOG CANCEL OFFLOAD
This command causes DB2 to restart the offload task, which might solve the problem. If it does not, determine the cause of the problem and reissue the command. If the problem cannot be solved quickly, have the system programmer define additional active logs.
System programmer action: Additional active log data sets can permit DB2 to continue its normal operation while the problem causing the offload failures is corrected.
1. Use the z/OS CANCEL command to stop DB2.
2. Use the access method services DEFINE command to define new active log data sets. Run utility DSNJLOGF to initialize the new active log data sets. To minimize the number of offloads taken per day in your installation, consider increasing the size of the active log data sets.
3. Define the new active log data sets in the BSDS by using the change log inventory utility (DSNJU003). For additional details, see Part 3 of DB2 Utility Guide and Reference. (A sample job appears after this procedure.)
4. Restart DB2. Offload is started automatically during startup, and restart processing occurs.
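The jobs for steps 2 and 3 might look like the following sketch. Every data set name, volume, and size shown is a placeholder; job DSNTIJIN in prefix.SDSNSAMP shows the DEFINE attributes that apply at your installation:

//DEFLOG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Attributes as in job DSNTIJIN */
  DEFINE CLUSTER (NAME(DSNC810.LOGCOPY1.DS04) -
         LINEAR REUSE VOLUMES(VOL001) CYLINDERS(250))
/*
//INITLOG  EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC810.LOGCOPY1.DS04,DISP=SHR
//CHGLOG   EXEC PGM=DSNJU003
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC810.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DSNC810.BSDS02,DISP=SHR
//SYSIN    DD *
  NEWLOG DSNAME=DSNC810.LOGCOPY1.DS04,COPY1
/*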
System action:
v Marks the failing log data set TRUNCATED in the BSDS.
v Goes on to the next available data set.
v If dual active logging is used, truncates the other copy at the same point.
v The data in the truncated data set is offloaded later, as usual.
v The data set is not stopped. It is reused on the next cycle. However, if a DSNJ104 message indicates a CATUPDT failure, the data set is marked stopped.
System programmer action: If you get the DSNJ104 message indicating a CATUPDT failure, you must use access method services and the change log inventory utility (DSNJU003) to add a replacement data set. This requires that you bring DB2 down. When you do this depends on how widespread the problem is.
v If the problem is localized and does not affect your ability to recover from any further problems, you can wait until the earliest convenient time.
v If the problem is widespread (perhaps affecting an entire set of active log data sets), take DB2 down after the next offload.
For instructions on using the change log inventory utility, see Part 3 of DB2 Utility Guide and Reference.
Having completed one active log data set, DB2 found that the subsequent (COPY n) data sets were not offloaded or were marked stopped.
System action: DB2 continues in single mode until offloading completes, then returns to dual mode. If the data set is marked stopped, however, intervention is required.
System programmer action: Check that offload is proceeding and is not waiting for a tape mount. It might be necessary to run the print log map utility to determine the status of all data sets. If there are stopped data sets, you must use IDCAMS to delete the data sets, and then re-add them by using the change log inventory utility (DSNJU003). See Part 3 of DB2 Utility Guide and Reference for information about using the change log inventory utility.
System action:
v If the error occurs during offload, offload tries to pick the RBA range from a second copy. If no second copy exists, the data set is stopped. If the second copy also has an error, only the original data set that triggered the offload is stopped. Then the archive log data set is terminated, leaving a discontinuity in the archived log RBA range, and the following message is issued:
DSNJ124I - OFFLOAD OF ACTIVE LOG SUSPENDED FROM RBA xxxxxx TO RBA xxxxxx DUE TO I/O ERROR
If the second copy is satisfactory, the first copy is not stopped.
v If the error occurs during recovery, DB2 provides data from specific log RBAs requested from another copy or archive. If this is unsuccessful, recovery fails and the transaction cannot complete, but no log data sets are stopped. However, the table space being recovered is not accessible.
System programmer action: If the problem occurred during offload, determine which databases are affected by the active log problem and take image copies of those. Then proceed with a new log data set. You can also use IDCAMS REPRO to archive as much of the stopped active log data set as possible. Then run the change log inventory utility to notify the BSDS of the new archive log and its log RBA range. Repairing the active log does not solve the problem, because offload does not go back to unload it.
If the active log data set has been stopped, it is not used for logging. The data set is not deallocated; it is still used for reading. If the data set is not stopped, an active log data set should nevertheless be replaced if persistent errors occur. The operator is not told explicitly whether the data set has been stopped. To determine the status of the active log data set, run the print log map utility (DSNJU004). For more information about the print log map utility, see Part 3 of DB2 Utility Guide and Reference.
To replace the data set, take the following steps:
1. Be sure the data is saved. If you have dual active logs, the data is saved on the other active log, and it becomes your new data set. Skip to step 4 on page 539. If you have not been using dual active logs, take the following steps to determine whether the data set with the error has been offloaded:
a. Use the print log map utility to list information about the archive log data sets from the BSDS.
b. Search the list for a data set whose RBA range includes the range of the data set with the error.
2. If the data set with the error has been offloaded (that is, if the value for High RBA offloaded in the print log map output is greater than the RBA range of the data set with the error), you need to manually add a new archive log to the BSDS by using the change log inventory utility (DSNJU003). Use IDCAMS to define a new log that has the same LRECL and BLKSIZE values as those defined in DSNZPxxx. You can use the access method services REPRO command to copy the data set with the error to the new archive log. If the archive log is not cataloged, DB2 can locate it from the UNIT and VOLSER values in the BSDS.
3. If an active log data set has been stopped, an RBA range has not been offloaded; copy from the data set with the error to a new data set. If additional
I/O errors prevent you from copying the entire data set, a gap occurs in the log and restart might fail, though the data still exists and is not overlaid. If this occurs, see Chapter 23, Recovery from BSDS or log failure during restart, on page 597.
4. Stop DB2, and use the change log inventory utility to update information in the BSDS about the data set with the error:
a. Use DELETE to remove information about the bad data set.
b. Use NEWLOG to name the new data set as the new active log data set and to give it the RBA range that was successfully copied.
The DELETE and NEWLOG operations can be performed by the same job step; put the DELETE statement before the NEWLOG statement in the SYSIN input data set. This step clears the stopped status, and DB2 eventually archives the data set. (A sample job appears after this procedure.)
5. Delete the data set in error by using access method services.
6. Redefine the data set so you can write to it. Use the access method services DEFINE command to define the active log data sets. Run utility DSNJLOGF to initialize the active log data sets. If using dual logs, use access method services REPRO to copy the good log into the redefined data set so that you have two consistent, correct logs again.
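A job for step 4 might look like the following sketch; the BSDS names, the log data set name, and the RBA values are placeholders for illustration only:

//CHGLOG   EXEC PGM=DSNJU003
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC810.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DSNC810.BSDS02,DISP=SHR
//SYSIN    DD *
  DELETE DSNAME=DSNC810.LOGCOPY1.DS02
  NEWLOG DSNAME=DSNC810.LOGCOPY1.DS02,COPY1,STARTRBA=6400000,ENDRBA=64FFFFF
/*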
z/OS dynamic allocation provides the ERROR STATUS. If the allocation was for offload processing, the following message is also displayed:
DSNJ115I - OFFLOAD FAILED, COULD NOT ALLOCATE AN ARCHIVE DATA SET
System action: One of the following system actions occurs:
v The RECOVER utility is executing and requires an archive log. If neither log can be found or used, recovery fails.
v The active log became full and an offload was scheduled. Offload tries again the next time it is triggered. The active log does not wrap around; therefore, if there are no more active logs, no data is lost.
v The input is needed for restart, which fails; refer to Chapter 23, Recovery from BSDS or log failure during restart, on page 597.
Operator action: Check the allocation error code for the cause of the problem and correct it. Ensure that drives are available, and run the recovery job again. Exercise caution if a DFSMSdfp ACS user-exit filter has been written for an archive log data set, because this can cause the DB2 subsystem to fail on a device allocation error when attempting to read the archive log data set.
If DB2 is in single mode, it abandons the output data set. Another attempt to offload this RBA range is made the next time offload is triggered. The active log does not wrap around; if there are no more active logs, data is not lost.
Operator action: Ensure that offload is allocated on a good drive and control unit.
The failure is preceded by z/OS ABEND messages IEC030I, IEC031I, or IEC032I.
System action: DB2 deallocates the data set on which the error occurred. If in dual archive mode, DB2 changes to single archive mode and continues the offload. If the offload cannot complete in single archive mode, the active log data sets cannot be offloaded, and the status of the active log data sets remains NOTREUSEABLE. Another attempt to offload the RBA range of the active log data sets is made the next time offload is invoked.
System programmer action: If DB2 is operating with restricted active log resources (see message DSNJ110E), quiesce the DB2 subsystem to restrict logging activity until the z/OS ABEND is resolved.
This message is generated for a variety of reasons. When accompanied by the preceding z/OS abends, the most likely failures are as follows:
v The size of the archive log data set is too small to contain the data from the active log data sets during offload processing. All secondary space allocations have been used. This condition is normally accompanied by z/OS ABEND message IEC030I. To solve the problem, increase the primary or secondary allocations (or both) for the archive log data set in DSNZPxxx. Another option is to reduce the size of the active log data set. If the data to be offloaded is particularly large, you can mount another online storage volume or make one available to DB2. Modifications to DSNZPxxx require that you stop and start DB2 to take effect.
v All available space on the disk volumes to which the archive data set is being written has been exhausted. This condition is normally accompanied by z/OS ABEND message IEC032I. To solve the problem, make space available on the disk volumes, or make another online storage volume available for DB2. Then issue the DB2 command ARCHIVE LOG CANCEL OFFLOAD to get DB2 to retry the offload.
v The primary space allocation for the archive log data set (as specified in the load module for subsystem parameters) is too large to allocate to any available online disk device. This condition is also normally accompanied by z/OS ABEND message IEC032I. To solve the problem, make space available on the disk volumes, or make another online storage volume available for DB2. If this is not possible, adjust the value of PRIQTY in the DSNZPxxx module to reduce the primary allocation. (For instructions, see Part 2 of DB2 Installation Guide.) If the primary allocation is reduced, the size of the secondary space allocation might have to be increased to avoid future IEC030I abends.
*DSNJ153E ( DSNJR006 CRITICAL LOG READ ERROR CONNECTION-ID = TEST0001 CORRELATION-ID = CTHDCORID001 LUWID = V71A.SYEC1DB2.B3943707629D=10 REASON-CODE = 00D10345
You can attempt to recover from temporary failures by issuing a positive reply to message:
*26 DSNJ154I ( DSNJR126 REPLY Y TO RETRY LOG READ REQUEST, N TO ABEND
If the problem persists, quiesce other work in the system before replying N, which terminates DB2.
System action: The BSDS mode changes from dual to single.
System programmer action:
1. Use access method services to rename or delete the damaged BSDS and to define a new BSDS with the same name as the failing BSDS. Control statements can be found in job DSNTIJIN.
2. Issue the DB2 command RECOVER BSDS to make a copy of the good BSDS in the newly allocated data set and to reinstate dual BSDS mode.
The error status is the VSAM return code/feedback. For information about VSAM codes, refer to z/OS DFSMS: Macro Instructions for Data Sets.
System action: None.
System programmer action:
1. Use access method services to delete or rename the damaged data set, to define a replacement data set, and to copy the remaining BSDS to the replacement with the REPRO command.
2. Use the command START DB2 to start the DB2 subsystem.
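Step 1 might look like the following sketch; the data set names are placeholders, and the DEFINE attributes should be taken from job DSNTIJIN rather than from this example:

//FIXBSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Rename the damaged copy out of the way (repeat the */
  /* ALTER for the data and index components as well)   */
  ALTER DSNC810.BSDS02 NEWNAME(DSNC810.BSDS02.BAD)
  /* Define the replacement with the DEFINE CLUSTER     */
  /* statement from DSNTIJIN, then copy the good BSDS:  */
  REPRO INDATASET(DSNC810.BSDS01) OUTDATASET(DSNC810.BSDS02)
/*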
Unequal timestamps can occur for the following reasons:
v One of the volumes containing the BSDS has been restored. All information on the restored volume is down-level. If the volume contains any active log data sets or DB2 data, their contents are also down-level. The down-level volume has the lower timestamp. For information about resolving this problem, see Failure during a log RBA read request on page 615.
v Dual BSDS mode has degraded to single BSDS mode, and you are trying to start without recovering the bad BSDS.
v The DB2 subsystem abended after updating one copy of the BSDS, but prior to updating the second copy.
System action: DB2 attempts to resynchronize the BSDS data sets and restore dual BSDS mode. If this attempt succeeds, DB2 restart continues automatically.
Operator action: If DB2 restart fails, notify the system programmer.
System programmer action: If DB2 fails to automatically resynchronize the BSDS data sets, perform the following procedure:
1. Run the print log map utility (DSNJU004) on both copies of the BSDS; compare the lists to determine which copy is accurate or current.
2. Rename the down-level data set and define a replacement for it.
3. Copy the good data set to the replacement data set, using the REPRO command of access method services.
4. Use access method services REPRO to copy the current version of the active log to the down-level data set if all of the following conditions are true:
v The problem was caused by a restored down-level BSDS volume.
v The restored volume contains active log data.
v You were using dual active logs on separate volumes.
If you were not using dual active logs, you must cold start the subsystem. (For this procedure, see Failure resulting from total or excessive loss of log data on page 619.) If the restored volume contains database data, use the RECOVER utility to recover that data after successful restart.
Archive log name DSN.ARCHLOG1.A0000001
BSDS copy name DSN.ARCHLOG1.B0000001
v If archive logs are on tape, the BSDS is the first data set of the first archive log volume. The BSDS is not repeated on later volumes.
2. If the most recent archive log data set has no copy of the BSDS (presumably because an error occurred when offloading it), locate an earlier copy of the BSDS from an earlier offload.
3. Rename any damaged BSDS by using the access method services ALTER command with the NEWNAME option. If you decide to delete any damaged BSDS, use the access method services DELETE command. For each damaged BSDS, use access method services to define a new BSDS as a replacement data set. Job DSNTIJIN contains access method services control statements to define a new BSDS. The BSDS is a VSAM key-sequenced data set that has three components: cluster, index, and data. You must rename all components of the data set. Avoid changing the high-level qualifier. See z/OS DFSMS Access Method Services for Catalogs for detailed information about using the access method services ALTER command.
4. Use the access method services REPRO command to copy the BSDS from the archive log to one of the replacement BSDSs you defined in step 3. Do not copy any data to the second replacement BSDS; data is placed in the second replacement BSDS in a later step in this procedure.
a. Print the contents of the replacement BSDS. Use the print log map utility (DSNJU004) to print the contents of the replacement BSDS. This enables you to review the contents of the replacement BSDS before continuing your recovery work.
b. Update the archive log data set inventory in the replacement BSDS. Examine the print log map output and note that the replacement BSDS does not contain a record of the archive log from which the BSDS was copied. If the replacement BSDS is a particularly old copy, it is missing all archive log data sets that were created later than the BSDS backup copy. Thus, the BSDS inventory of the archive log data sets must be updated to reflect the current subsystem inventory. Use the change log inventory utility (DSNJU003) NEWLOG statement to update the replacement BSDS, adding a record of the archive log from which the BSDS was copied. (A sample NEWLOG statement appears after this procedure.) Make certain the CATALOG option of the NEWLOG statement is set to CATALOG=YES if the archive log data set is cataloged. Also, use the NEWLOG statement to add any additional archive log data sets that were created later than the BSDS copy.
c. Update DDF information in the replacement BSDS. If your installation's DB2 is part of a distributed network, the BSDS contains the DDF control record. You must review the contents of this record in the output of the print log map utility. If changes are required, use the change log inventory DDF statement to update the BSDS DDF record.
d. Update the active log data set inventory in the replacement BSDS. In unusual circumstances, your installation could have added, deleted, or renamed active log data sets since the BSDS was copied. In this case, the replacement BSDS does not reflect the actual number or names of the active log data sets your installation currently has in use.
If you must delete an active log data set from the replacement BSDS log inventory, use the change log inventory utility DELETE statement. If you need to add an active log data set to the replacement BSDS log inventory, use the change log inventory utility NEWLOG statement. Be certain that the RBA range is specified correctly on the NEWLOG statement. If you must rename an active log data set in the replacement BSDS log inventory, use the change log inventory utility DELETE statement, followed by the NEWLOG statement. Be certain that the RBA range is specified correctly on the NEWLOG statement.
e. Update the active log RBA ranges in the replacement BSDS. Later, when a restart is performed, DB2 compares the RBAs of the active log data sets listed in the BSDS with the RBAs found in the actual active log data sets. If the RBAs do not agree, DB2 does not restart. The problem is magnified when a particularly old copy of the BSDS is used. To resolve this problem, you can use the change log inventory utility to change the RBAs found in the BSDS to the RBAs in the actual active log data sets. Use the following procedure to change RBAs in the BSDS:
v If you are not certain of the RBA range of a particular active log data set, use DSN1LOGP to print the contents of the active log data set. Obtain the logical starting and ending RBA values for the active log data set from the DSN1LOGP output. The STARTRBA value you use in the change log inventory utility must be at the beginning of a control interval; similarly, the ENDRBA value must be at the end of a control interval. To get these values, round the starting RBA value from the DSN1LOGP output down so that it ends in X'000', and round the ending RBA value up so that it ends in X'FFF'.
v When the RBAs of all active log data sets are known, compare the actual RBA ranges with the RBA ranges found in the BSDS (listed in the print log map utility output). If the RBA ranges are equal for all active log data sets, you can proceed to the next recovery step without any additional work. If the RBA ranges are not equal, the values in the BSDS must be adjusted to reflect the actual values. For each active log data set that needs to have its RBA range adjusted, use the change log inventory utility DELETE statement to delete the active log data set from the inventory in the replacement BSDS. Then use the NEWLOG statement to redefine the active log data set to the BSDS.
f. If only two active log data sets are specified in the replacement BSDS, add a new active log data set for each copy of the active log, and define each new active log data set in the replacement BSDS log inventory. If only two active log data sets are specified for each copy of the active log, DB2 can have difficulty during restart. The difficulty can arise when one of the active log data sets is full and has not been offloaded, while the second active log data set is close to filling. Adding a new active log data set for each copy of the active log can alleviate difficulties on restart in this scenario. To add a new active log data set for each copy of the active log, use the access method services DEFINE command to define a new active log data set for each copy of the active log. The control statements to accomplish this task can be found in job DSNTIJIN. Once the active log data sets are physically defined and allocated, use the change log inventory utility
NEWLOG statement to define the new active log data sets to the replacement BSDS. The RBA ranges need not be specified on the NEWLOG statement.
5. Copy the updated BSDS copy to the second new BSDS data set. The dual bootstrap data sets are now identical. Consider using the print log map utility (DSNJU004) to print the contents of the second replacement BSDS at this point.
6. See Chapter 23, Recovery from BSDS or log failure during restart, on page 597 for information about what to do if you have lost your current active log data set. For a discussion of how to construct a conditional restart record, see Step 4: Truncate the log at the point of error on page 607.
7. Restart DB2, using the newly constructed BSDS. DB2 determines the current RBA and what active logs need to be archived.
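A change log inventory statement of the general shape needed in step 4b follows; the data set name, volume, unit, and RBA values are placeholders for illustration only:

NEWLOG DSNAME=DSN.ARCHLOG1.A0000007,COPY1VOL=VOL010,UNIT=TAPE,STARTRBA=8100000,ENDRBA=823FFFF,CATALOG=NO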
where rrrr is a z/OS dynamic allocation reason code. For information about these reason codes, see z/OS MVS Programming: Authorized Assembler Services Guide. Symptom 2: The following message indicates a problem at open:
IEC161I rc[(sfi)] - ccc, iii, sss, ddn, ddd, ser, xxx, dsn, cat
where:
rc   is a return code
sfi  is subfunction information (sfi appears only with certain return codes)
ccc  is a function code
iii  is a job name
sss  is a step name
ddn  is a ddname
ddd  is a device number (if the error is related to a specific device)
ser  is a volume serial number (if the error is related to a specific volume)
xxx  is a VSAM cluster name
dsn  is a data set name
cat  is a catalog name
For information about these codes, see z/OS MVS System Messages Volumes 1-10.
DSNB204I - OPEN OF DATA SET FAILED. DSNAME = dsn
System action:
v The table space is automatically stopped.
v Programs receive a -904 SQLCODE (SQLSTATE '57011').
v If the problem occurs during restart, the table space is marked for deferred restart, and restart continues. The changes are applied later when the table space is started.
System programmer action: None.
Operator action:
1. Check the reason codes and correct the problem.
2. Ensure that drives are available for allocation.
3. Enter the command START DATABASE.
8. Issue SQL statements to insert rows into one of the tables, and then update and delete some rows.
9. Issue the STOP DB2 command to stop DB2 and all active members of the data sharing group.
10. Run the DSNJU003 change log inventory utility to create a SYSPITR CRCR record (CRESTART CREATE SYSPITR=logpoint1). The log truncation point is the value that you obtained from issuing either the SET LOG SUSPEND command or the SET LOG RESUME command. (A sample job appears after this procedure.)
11. For a data sharing group, delete all of the coupling facility structures.
12. Issue the START DB2 command to restart DB2 and all members of the data sharing group.
13. Run the RESTORE SYSTEM utility. For a data sharing group, this utility can be run on only one member. If the utility stops and you must restart it, you can restart the utility only on the member on which it was initially run.
14. After the RESTORE SYSTEM utility completes successfully, issue the STOP DB2 command to stop DB2 and all active members of the data sharing group. The DB2 subsystem resets to RECOVER-pending status.
15. Issue the START DB2 command to restart DB2 and all members of the data sharing group.
16. Issue DISPLAY commands to identify the utilities that are active and the objects that are restricted. For example:
-DIS UTIL(*)
-DIS DB(DSNDB01) SP(*)
-DIS DB(DSNDB06) SP(*) LIMIT(*)
-DIS DB(DSNDB06) SP(*) LIMIT(*) RESTRICT
17. Stop all of the active utilities that you identified in the previous step. 18. Recover any objects that are in RECOVER-pending status or REBUILD-pending status from the table that you created in step 6 on page 547.
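The step-10 job might look like the following sketch; the BSDS names and the log truncation point are placeholders (use the RBA, or the LRSN in a data sharing group, that you obtained from SET LOG SUSPEND or SET LOG RESUME):

//CRCR     EXEC PGM=DSNJU003
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC810.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DSNC810.BSDS02,DISP=SHR
//SYSIN    DD *
  CRESTART CREATE,SYSPITR=00007425468
/*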
The message also contains the level ID of the data set, the level ID that DB2 expects, and the name of the data set.
System action:
v If the error was reported during mainline processing, DB2 sends a resource unavailable SQLCODE to the application and a reason code explaining the error.
v If the error was detected while a utility was processing, the utility gives a return code of 8.
System programmer action: You can recover by using any of the following methods.
If the message occurs during restart:
v Replace the data set with one at the proper level, using DSN1COPY, DFSMShsm, or some equivalent method. To check the level ID of the new data set, run the stand-alone utility DSN1PRNT on it, with the options PRINT(0) (to print only the header page) and FORMAT. The formatted print identifies the level ID. (A sample job appears at the end of this scenario.)
v Recover the data set to the current time, or to a prior time, using the RECOVER utility.
v Replace the contents of the data set, using LOAD REPLACE.
If the message occurs during normal operation, use any of the preceding methods in addition to one of the following actions:
v Accept the down-level data set by changing its level ID. The REPAIR utility contains a statement for that purpose. Run a utility job with the statement REPAIR LEVELID. The LEVELID statement cannot be used in the same job step with any other REPAIR statement.
Important: If you accept a down-level data set or disable down-level detection, your data might be inconsistent.
For more information about using the utilities, see DB2 Utility Guide and Reference.
You can control down-level detection. Use the LEVELID UPDATE FREQ field of panel DSNTIPL to either disable down-level detection or control how often the level ID of a page set or partition is updated. DB2 accepts any value between 0 and 32767. To disable down-level detection, specify 0 in the LEVELID UPDATE FREQ field of panel DSNTIPL. To control how often level ID updates are taken, specify a value between 1 and 32767. See Part 2 of DB2 Installation Guide for more information about choosing the frequency of level ID updates.
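A DSN1PRNT job of the kind mentioned earlier might look like the following sketch; the library and page set data set names are placeholders for the data set you are checking:

//PRNTHDR  EXEC PGM=DSN1PRNT,PARM='PRINT(0),FORMAT'
//STEPLIB  DD DSN=DSNC810.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC810.DSNDBD.MYDB.MYTS.I0001.A001,DISP=SHR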
If changes were made after the image copy, DB2 puts the table space in Aux Warning status. The purpose of this status is to let you know that some of your LOBs are invalid. Applications that try to retrieve the values of those LOBs receive SQLCODE -904. Applications can still access other LOBs in the LOB table space.
2. Get a report of the invalid LOBs by running CHECK LOB on the LOB table space:
CHECK LOB TABLESPACE dbname.lobts
3. Fix the invalid LOBs, by updating the LOBs or setting them to the null value. For example, suppose you determine from the CHECK LOB utility that the row of the EMP_PHOTO_RESUME table with ROWID X'C1BDC4652940D40A81C201AA0A28' has an invalid value for column RESUME. If host variable hvlob contains the correct value for RESUME, you can use this statement to correct the value:
UPDATE DSN8810.EMP_PHOTO_RESUME SET RESUME = :hvlob WHERE EMP_ROWID = ROWID(X'C1BDC4652940D40A81C201AA0A28');
where dddddddd is a table space name. Any table spaces identified in DSNU086I messages must be recovered by using one of the procedures listed in this section under Operator action.
System action: DB2 remains active.
Operator action: Fix the error range.
1. Use the command STOP DATABASE to stop the failing table space.
2. Use the command START DATABASE ACCESS (UT) to start the table space for utility-only access.
3. Start a RECOVER utility step to recover the error range by using the DB2 RECOVER (dddddddd) ERROR RANGE statement. If you receive message DSNU086I again, indicating that the error range recovery cannot be performed, use the recovery procedure that follows this one.
4. Issue the command START DATABASE to start the table space for RO or RW access, whichever is appropriate. If the table space is recovered, you do not need to continue with the following procedure.
If error range recovery fails: If the error range recovery of the table space failed because of a hardware problem, proceed as follows:
1. Use the command STOP DATABASE to stop the table space or table space partition that contains the error range. This causes all the in-storage data buffers associated with the data set to be externalized to ensure data consistency during the subsequent steps.
2. Use the INSPECT function of the IBM Device Support Facility, ICKDSF, to check for track defects and to assign alternate tracks as necessary. The physical location of the defects can be determined by analyzing the output of messages DSNB224I, DSNU086I, and IOS000I, which were displayed on the system operator's console at the time the error range was created. If damaged storage media is suspected, request assistance from hardware support personnel before proceeding. Refer to Device Support Facilities User's Guide and Reference for information about using ICKDSF.
3. Use the command START DATABASE to start the table space with ACCESS(UT) or ACCESS(RW).
4. Run the RECOVER utility with the ERROR RANGE option, which locates, allocates, and applies, from image copies, the pages within the tracks affected by the error ranges. (A sample statement appears after this procedure.)
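For illustration, the commands and utility statement for error range recovery might look like the following; the database and table space names are placeholders:

-STOP DATABASE(MYDB) SPACENAM(MYTS)
-START DATABASE(MYDB) SPACENAM(MYTS) ACCESS(UT)
RECOVER TABLESPACE MYDB.MYTS ERROR RANGE
-START DATABASE(MYDB) SPACENAM(MYTS) ACCESS(RW)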
where dddddddd is a table space name from the catalog or directory. dddddddd is the table space that failed (for example, SYSCOPY, the abbreviation for SYSIBM.SYSCOPY, or SYSLGRNX, the abbreviation for DSNDB01.SYSLGRNX). This message can indicate either read or write errors. You can also receive a DSNB224I or DSNB225I message, which could indicate an input or output error for the catalog or directory. Any catalog or directory table spaces that are identified in DSNU086I messages must be recovered with this procedure.
System action: DB2 remains active. If the DB2 directory or any catalog table is damaged, only user IDs with the RECOVERDB privilege in DSNDB06, or an authority that includes that privilege, can do the recovery. Furthermore, until the recovery takes place, only those IDs can do anything with the subsystem. If an ID without proper authorization
attempts to recover the catalog or directory, message DSNU060I is displayed. If the authorization tables are unavailable, message DSNT500I is displayed, indicating that the resource is unavailable.
System programmer action: None.
Operator action: Take the following steps for each table space in the DB2 catalog and directory that has failed. If there is more than one, refer to the description of RECOVER in Part 2 of DB2 Utility Guide and Reference for more information about the specific order of recovery.
1. Stop the failing table spaces.
2. Determine the name of the data set that failed. There are two ways to do this:
v Check prefix.SDSNSAMP (DSNTIJIN), which contains the JCL for installing DB2. Find the fully qualified name of the data set that failed by searching for the name of the table space that failed (the one identified in the message as SPACE = dddddddd).
v Construct the data set name. If the table space is in the DB2 catalog, the data set name format is:
DSNC810.DSNDBC.DSNDB06.dddddddd.I0001.A001
where dddddddd is the name of the table space that failed. If the table space is in the DB2 directory, the data set name format is:
DSNC810.DSNDBC.DSNDB01.dddddddd.I0001.A001
where dddddddd is the name of the table space that failed. If you do not use the default (IBM-supplied) formats, the formats for data set names can be different.
3. Use access method services DELETE to delete the data set, specifying the fully qualified data set name.
4. After the data set has been deleted, use access method services DEFINE to redefine the same data set, again specifying the same fully qualified data set name. Use the JCL for installing DB2 to determine the appropriate parameters. Important: The REUSE parameter must be coded in the DEFINE statements. (A sketch appears after this procedure.)
5. Issue the command START DATABASE ACCESS(UT), naming the table space involved.
6. Use the RECOVER utility to recover the table space that failed.
7. Issue the command START DATABASE, specifying the table space name and RO or RW access, whichever is appropriate.
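Steps 3 and 4 might look like the following sketch; take the actual DEFINE attributes from job DSNTIJIN, because the volume and space values shown here are placeholders:

//DELDEF   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DSNC810.DSNDBC.DSNDB06.SYSCOPY.I0001.A001 CLUSTER
  /* Redefine with the attributes from DSNTIJIN; REUSE is required */
  DEFINE CLUSTER (NAME(DSNC810.DSNDBC.DSNDB06.SYSCOPY.I0001.A001) -
         LINEAR REUSE VOLUMES(VOL001) CYLINDERS(50 10) -
         SHAREOPTIONS(3 3)) -
         DATA (NAME(DSNC810.DSNDBD.DSNDB06.SYSCOPY.I0001.A001))
/*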
DSNP012I - DSNPSCT0 - ERROR IN VSAM CATALOG LOCATE FUNCTION FOR data_set_name CTLGRC=50 CTLGRSN=zzzzRRRR CONNECTION-ID=xxxxxxxx, CORRELATION-ID=yyyyyyyyyyyy LUW-ID=logical-unit-of-work-id=token
For a detailed explanation of this message, see Part 2 of DB2 Messages. VSAM can also issue the following message:
IDC3009I VSAM CATALOG RETURN CODE IS 50, REASON CODE IS IGGOCLaa - yy
In this VSAM message, yy is 28, 30, or 32 for an out-of-space condition. Any other values for yy indicate a damaged VVDS.
System action: Your program is terminated abnormally, and one or more messages are issued.
System programmer action: None.
Operator action: For information about recovering the VVDS, consult z/OS DFSMS: Managing Catalogs. The procedures in that book describe three basic recovery scenarios. First determine which scenario exists for the specific VVDS in error. Then, before beginning the appropriate procedure, take the following steps:
1. Determine the names of all table spaces residing on the same volume as the VVDS. To determine the table space names, look at the VTOC entries list for that volume, which indicates the names of all the data sets on that volume. For information about how to determine the table space name from the data set name, refer to Part 2, Designing a database: advanced topics, on page 23.
2. Use the DB2 COPY utility to take image copies of all table spaces of the volume. Taking image copies minimizes reliance on the DB2 recovery log and can speed up the processing of the DB2 RECOVER utility (used in a subsequent step). If the COPY utility cannot be used, continue with this procedure. Be aware that processing time increases because more information is obtained from the DB2 recovery log.
3. Use the command STOP DATABASE for all the table spaces that reside on the volume, or use the command STOP DB2 to stop the entire DB2 subsystem if an unusually large number or critical set of table spaces are involved.
4. If possible, use access method services to export all non-DB2 data sets residing on that volume. For more information, see DFSMS/MVS: Access Method Services for the Integrated Catalog and z/OS DFSMS: Managing Catalogs.
5. To recover all non-DB2 data sets on the volume, see DFSMS/MVS: Access Method Services for the Integrated Catalog and z/OS DFSMS: Managing Catalogs.
6. Use access method services DELETE and DEFINE commands to delete and redefine the data sets for all user-defined table spaces and DB2-defined data sets when the physical data set has been destroyed. You do not need to do this for table spaces that are STOGROUP-defined; DB2 deletes and redefines them automatically.
7. Issue the DB2 START DATABASE command to restart all the table spaces stopped in step 3 on page 553. If the entire DB2 subsystem was stopped, issue the START DB2 command.
8. Use the DB2 RECOVER utility to recover any table spaces and indexes. For information about recovering table spaces, refer to Chapter 21, Backing up and recovering databases, on page 475.
2. Look-ahead warning: A look-ahead warning occurs when there is enough space for a few inserts and updates, but the index space or table space is almost full. On an insert or update at the end of a page set, DB2 determines whether the data set has enough available space. DB2 uses the following values in this space calculation:
v The primary space quantity from the integrated catalog facility (ICF) catalog
v The secondary space quantity from the ICF catalog
v The allocation unit size
If enough space does not exist, DB2 tries to extend the data set. If the extend request fails, DB2 issues the following message:
DSNP001I - DSNPmmmm - data-set-name IS WITHIN nK BYTES OF AVAILABLE SPACE. RC=rrrrrrrr CONNECTION-ID=xxxxxxxx, CORRELATION-ID=yyyyyyyyyyyy LUW-ID=logical-unit-of-work-id=token
System action: For a demand request failure during restart, the object supported by the data set (an index space or a table space) is stopped with deferred restart pending. Otherwise, the state of the object remains unchanged. Programs receive a -904 SQL return code (SQLSTATE '57011').
System programmer action: None.
Operator action: The appropriate choice of action depends on the particular circumstances. The following topics are described in this section; the text following the list describes how to choose which action to take:
v Procedure 1. Extend a data set on page 555
v Procedure 2. Enlarge a fully extended data set (user-managed) on page 555
v Procedure 3. Enlarge a fully extended data set (in a DB2 storage group) on page 556
v Procedure 4. Add a data set on page 557
v Procedure 5. Redefine a partition (index-based partitioning) on page 557
v Procedure 6. Redefine a partition (table-based partitioning) on page 557
v Procedure 7. Enlarge a fully extended data set for the work file database on page 557
If the database qualifier of the data set name is DSNDB07, the condition is on your work file database. Use Procedure 7. Enlarge a fully extended data set for the work file database on page 557.
In all other cases, if the data set has not reached its maximum DB2 size, you can enlarge it. (The maximum size is 2 GB for a data set of a simple space, and 1, 2, or 4 GB for a data set containing a partition. Large partitioned table spaces and indexes on large partitioned table spaces have a maximum data set size of 4 GB.)
v If the data set has not reached the maximum number of VSAM extents, use Procedure 1. Extend a data set.
v If the data set has reached the maximum number of VSAM extents, use either Procedure 2. Enlarge a fully extended data set (user-managed) or Procedure 3. Enlarge a fully extended data set (in a DB2 storage group) on page 556, depending on whether the data set is user-managed or DB2-managed. User-managed data sets include essential data sets such as the catalog and the directory.
If the data set has reached its maximum DB2 size, your action depends on the type of object it supports.
v If the object is a simple space, add a data set, using Procedure 4. Add a data set on page 557.
v If the object is partitioned, each partition is restricted to a single data set. You must redefine the partitions; use either Procedure 5. Redefine a partition (index-based partitioning) on page 557 or Procedure 6. Redefine a partition (table-based partitioning) on page 557.
Procedure 1. Extend a data set: If the data set is user-defined, provide more VSAM space. You can add volumes with the access method services command ALTER ADDVOLUMES or make room on the current volume. If the data set is defined in a DB2 storage group, add more volumes to the storage group by using the SQL ALTER STOGROUP statement. For more information about DB2 data set extension, refer to Extending DB2-managed data sets on page 31.
Procedure 2. Enlarge a fully extended data set (user-managed):
1. To allow for recovery in case of failure during this procedure, be sure that you have a recent full image copy (for table spaces, or if you copy your indexes). Use the DSNUM option to identify the data set for table spaces or partitioning indexes.
2. Issue the command STOP DATABASE SPACENAM for the last data set of the object supported.
3. Depending on which version of z/OS you have and how many extents you need, complete one of the following tasks:
You do not have z/OS V1.7 or later: (You can use this option with any version of z/OS, but you are limited to 255 extents.)
Delete the last data set by using access method services. Then redefine it and enlarge it as necessary, by giving new values for PRIQTY and SECQTY. The object must be user-defined and a linear data set, and should not have reached the maximum number of 32 data sets for a nonpartitioned table space (or 254 data sets for LOB table spaces). For a partitioned table space, a partitioning index, or a nonpartitioning index on a partitioned table space, the maximum is 4096 data sets.
You have z/OS V1.7 or later: If the DB2 subsystem is running on z/OS V1.7 or later, and the data set will not be shared with any z/OS systems at an earlier level, convert the data set to SMS-managed with the Extent Constraint Removal option set to YES in the SMS data class. If you do this, the maximum number of extents is 7257. If you have z/OS V1.7 and an older version coexisting in a data-sharing environment, a DB2 data set extended over 255 extents in V1.7 is not accessible by the lower release.
4. Issue the command START DATABASE ACCESS (UT) to start the object for utility-only access.
5. To recover the data set that was redefined, use RECOVER on the table space or index, and identify the data set by the DSNUM option (specify this DSNUM option for table spaces or partitioning indexes only). RECOVER lets you specify a single data set number for a table space. Thus, only the last data set (the one that needs extension) must be redefined and recovered. This can be better than using REORG if the table space is very large and contains multiple data sets, and if the extension must be done quickly. If you do not copy your indexes, use the REBUILD INDEX utility.
6. Issue the command START DATABASE to start the object for either RO or RW access, whichever is appropriate.
Procedure 3. Enlarge a fully extended data set (in a DB2 storage group):
1. Depending on which version of z/OS you have and how many extents you need, complete one of the following tasks:
You do not have z/OS V1.7 or later: (You can use this option with any version of z/OS, but you are limited to 255 extents.) Use ALTER TABLESPACE or ALTER INDEX with a USING clause. (You do not have to stop the table space before you use ALTER TABLESPACE.) You can give new values for PRIQTY and SECQTY in the storage group.
You have z/OS V1.7 or later: If the DB2 subsystem is running on z/OS V1.7 or later, and the data set will not be shared with any z/OS systems at an earlier level, convert the data set to SMS-managed with the Extent Constraint Removal option set to YES in the SMS data class. If you do this, the maximum number of extents is 7257. If you have z/OS V1.7 and an older version coexisting in a data-sharing environment, a DB2 data set extended over 255 extents in V1.7 is not accessible by the lower release.
2. Use one of the following procedures. Keep in mind that no movement of data occurs until this step is completed.
v For indexes: If you have taken full image copies of the index, run the RECOVER INDEX utility. Otherwise, run the REBUILD INDEX utility.
v For table spaces other than LOB table spaces: Run one of the following utilities on the table space: REORG, RECOVER, or LOAD REPLACE.
v For LOB table spaces defined with LOG YES: Run the RECOVER utility on the table space.
v For LOB table spaces defined with LOG NO, follow these steps:
a. Start the table space in read-only (RO) mode to ensure that no updates are made during this process.
b. Make an image copy of the table space.
c. Run the RECOVER utility on the table space.
d. Start the table space in read-write (RW) mode.
Procedure 4. Add a data set: If the object supported is user-defined, use access method services to define another data set. The name of the new data set must continue the sequence begun by the names of the existing data sets that support the object. The last four characters of each name are a relative data set number: if the last name ended with A001, the next must end with A002, and so on. Also, be sure to use the same I or J character that appears in the names of the existing data sets. If the object is defined in a DB2 storage group, DB2 automatically tries to create an additional data set. If that attempt fails, access method services messages are sent to an operator to indicate the cause of the problem. Correcting that problem allows DB2 to get the additional space.
Procedure 5. Redefine a partition (index-controlled partitioning):
1. Use ALTER INDEX ALTER PARTITION to alter the key range values of the partitioning index.
2. Use REORG with inline statistics on the partitions that are affected by the change in key range.
3. Use RUNSTATS on the nonpartitioned indexes.
4. Rebind the dependent packages and plans.
Procedure 6. Redefine a partition (table-controlled partitioning):
1. Use ALTER TABLE ALTER PARTITION to alter the partition boundaries.
2. Use REORG with inline statistics on the partitions that are affected by the change in partition boundaries.
3. Use RUNSTATS on the indexes.
4. Rebind the dependent packages and plans.
Procedure 7. Enlarge a fully extended data set for the work file database: Use one of the following methods to add space for extension to the DB2 storage group:
v Use SQL to create more table spaces in database DSNDB07.
v Execute these steps:
1. Use the command STOP DATABASE(DSNDB07) to ensure that no users are accessing the database.
2. Use SQL to alter the storage group, adding volumes as necessary.
3. Use the command START DATABASE(DSNDB07) to allow access to the database.
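For reference, a minimal sketch of the two Procedure 1 alternatives, using hypothetical data set, storage group, and volume names (substitute your own). For a user-defined data set, an access method services statement such as:

ALTER DSNCAT.DSNDBD.DSN8D81A.DSN8S81E.I0001.A001 -
  ADDVOLUMES(VOL002)

For a DB2-managed data set, an SQL statement such as:

ALTER STOGROUP DSN8G810 ADD VOLUMES ('VOL002');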
System action: None. The table space is still available; however, it is not available to the COPY, REORG, and QUIESCE utilities, or to SQL select, insert, delete, or update operations that involve tables in the table space.
System programmer action: None.
Operator action:
1. Use the START DATABASE ACCESS (UT) command to start the table space for utility-only access.
2. Run the CHECK DATA utility on the table space. Take the following recommendations into consideration:
v If you do not believe that violations exist, specify DELETE NO. If violations indeed do not exist, this resets the check-pending status; however, if violations do exist, the status is not reset.
v If you believe that violations exist, specify the DELETE YES option and an appropriate exception table (see Part 2 of DB2 Utility Guide and Reference for the syntax of this utility). This deletes all rows in violation, copies them to an exception table, and resets the check-pending status.
v If the check-pending status was set during execution of the LOAD utility, specify the SCOPE PENDING option. This checks only those rows added to the table space by LOAD, rather than every row in the table space.
3. Correct the rows in the exception table, if necessary, and use the SQL INSERT statement to insert them into the original table.
4. Issue the command START DATABASE to start the table space for RO or RW access, whichever is appropriate. The table space is no longer in check-pending status and is available for use. If you use the ACCESS (FORCE) option of this command, the check-pending status is reset. However, this is not recommended because it does not correct violations of referential constraints.
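A minimal sketch of the step 2 CHECK DATA invocation, combining the SCOPE PENDING and DELETE YES recommendations, with hypothetical names (here, the DB2 sample table EMP and exception table EEMP):

CHECK DATA TABLESPACE DSN8D81A.DSN8S81E
  SCOPE PENDING
  FOR EXCEPTION IN DSN8810.EMP USE DSN8810.EEMP
  DELETE YES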
v Indefinite wait condition recovery on page 562
v Security failure recovery for database access threads on page 562
Problem 1
A failure occurs during an attempt to access the DB2 CDB (after DDF is started).
Symptom: A DSNL700I message, indicating that a resource unavailable condition exists, is sent to the console. Other messages describing the cause of the failure are also sent to the console.
System action: The distributed data facility (DDF) does not terminate if it has already started and an individual CDB table becomes unavailable. Depending on the severity of the failure, threads either receive a -904 SQL return code (SQLSTATE '57011') with resource type 1004 (CDB) or continue using VTAM defaults. Only threads that access locations that have not had any prior threads receive the -904 SQL return code. DB2 and DDF remain up.
Operator action: Correct the error based on the messages received; then stop and restart DDF.
Problem 2
The DB2 CDB is not defined correctly. This problem occurs when DDF is started and the DB2 catalog is accessed to verify the CDB definitions.
Symptom: A DSNL701I, 702I, 703I, 704I, or 705I message is issued to identify the problem. Other messages describing the cause of the failure are also sent to the console.
System action: DDF fails to start. DB2 continues to run.
Operator action: Correct the error based on the messages received, and restart DDF.
v Errors that occur during commit, rollback, and deallocation within the DDF function do not normally cause DB2 to abend. Conversations are deallocated, and the database access thread is terminated. The allied thread sees conversation failures.
System programmer action: Collect all diagnostic information related to the failure at the serving site. For a DB2 DBAT, a dump is produced at the server.
Operator action: Communicate with the operator at the other site to take the appropriate corrective action, based on the messages appearing on consoles at both the requesting and responding sites. Operators at both sites should gather the appropriate diagnostic information and give it to the programmer for diagnosis.
cannot be negotiated. Message DSNL500I is issued only once for all the SQL conversations that fail because of a remote LU failure. Message DSNL502I is issued for system conversations that are active to the remote LU at the time of the failure. This message contains the VTAM diagnostic information about the cause of the failure.
System action: Any application communicating with a failed LU receives a message indicating a resource unavailable condition. The application programs receive SQL return code -904 (SQLSTATE '57011') for DB2 private protocol access and SQL return code -30080 for DRDA access. Any attempt to establish communication with such an LU fails.
Operator action: Communicate with the other sites involved regarding the unavailable resource condition, and request that appropriate corrective action be taken. If a DSNL502I message is received, the operator should activate the remote LU.
why the user failed, to be returned to the application. If the server is a DB2 database access thread, message DSNL030I is issued to describe what caused the user to be denied access into DB2 through DDF. No message is issued for TCP/IP connections.
System action: If the server is a DB2 subsystem, message DSNL030I is issued. Otherwise, the system programmer needs to refer to the documentation of the server. If the application uses DB2 private protocol access, it receives SQLCODE -904 (SQLSTATE '57011') with reason code 00D3103D, indicating that a resource is unavailable. For DRDA access, SQLCODE -30082 is returned. See DB2 Codes for more information about those messages.
System programmer action: Refer to the description of 00D3103D in Part 3 of DB2 Codes.
Operator action: If the server is a DB2 database access thread, provide the DSNL030I message to the system programmer. If it is not a DB2 server, work with the operator or programmer at the server to get the diagnostic information that the system programmer needs.
Data sharing
1. Remove old information from the coupling facility if you have information in your coupling facility from practice startups. If you do not have old information in the coupling facility, you can omit this step.
a. Enter the following z/OS command to display the structures for this data sharing group:
D XCF,STRUCTURE,STRNAME=grpname*
b. For group buffer pools and the lock structure, enter the following command to force the connections off those structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
Connections for the SCA are not held at termination, so you do not need to force off any SCA connections.
c. Delete all the DB2 coupling facility structures by using the following command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
This step is necessary to remove old information that remains in the coupling facility from your practice startup when you installed the group.
2. If an integrated catalog facility catalog does not already exist, run job DSNTIJCA to create a user catalog.
3. Use the access method services IMPORT command to import the integrated catalog facility catalog.
4. Restore DB2 libraries, such as DB2 reslibs, SMP libraries, user program libraries, user DBRM libraries, CLISTs, SDSNSAMP (or wherever the installation jobs are), JCL for user-defined table spaces, and so on.
5. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects. (Because step 3 imports a user ICF catalog, the catalog reflects data sets that do not exist on disk.) Obtain a copy of installation job DSNTIJIN, which creates DB2 VSAM and non-VSAM data sets. Change the volume serial numbers in the job to volume serial numbers that exist at the recovery site. Comment out the steps that create DB2 non-VSAM data sets if those data sets already exist. Run DSNTIJIN. However, do not run DSNTIJID.
Data sharing
Obtain a copy of the installation job DSNTIJIN for the first data sharing member to be migrated. Run DSNTIJIN on the first data sharing member. For subsequent members of the data sharing group, run the DSNTIJIN that defines the BSDS and logs.
6. Recover the BSDS:
a. Use the access method services REPRO command to restore the contents of one BSDS data set (allocated in the previous step). The most recent BSDS image can be found in the last file (archive log with the highest number) on the latest archive log tape.
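A minimal IDCAMS sketch of step 6a, with hypothetical input (the BSDS image from the latest archive log tape) and output (BSDS) data set names:

REPRO INDATASET(DSNDB0G.DB1G.ARCLG1.B0000007) -
  OUTDATASET(DSNDB0G.DB1G.BSDS01)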
Data sharing
The BSDS data sets on each data sharing member need to be restored.
b. To determine the RBA range for this archive log, use the print log map utility (DSNJU004) to list the current BSDS contents. Find the most recent archive log in the BSDS listing and add 1 to its ENDRBA value. Use this value as the STARTRBA. Find the active log in the BSDS listing that starts with this RBA, and use its ENDRBA as the ENDRBA.
Data sharing
The LRSNs are also required.
c. Delete the oldest archive log from the BSDS.
d. Use the change log inventory utility (DSNJU003) to register this latest archive log tape data set in the archive log inventory of the BSDS that you just restored. This step is necessary because the BSDS image on an archive log tape does not reflect the archive log data set that resides on that tape.
Data sharing
Running DSNJU003 is critical for data sharing groups. Group buffer pool checkpoint information is stored in the BSDS and needs to be included from the most recent archive log.
After these archive logs are registered, use the print log map utility (DSNJU004) with the GROUP option to list the contents of all the BSDSs. You receive output that includes the start and end LRSN and RBA values for the latest active log data sets (shown as NOTREUSABLE). If you did not save the values from the DSNJ003I message, you can get those values by running DSNJU004, which creates output similar to the output that is shown in Figure 58 and Figure 59 on page 566.
Figure 58 shows a partial example of output from the DSNJU004 utility. This output contains the BSDS information for the archive log member DB1G.
ACTIVE LOG COPY 1 DATA SETS
START RBA/LRSN/TIME     END RBA/LRSN/TIME       DATE     LTIME DATA SET INFORMATION
--------------------    --------------------    -------- ----- --------------------
000001C20000            000001C67FFF            1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS03
ADFA0FB26C6D            ADFA208AA36B                           STATUS=TRUNCATED, REUSABLE
1996.361 23:37:48.4     1996.362 00:53:10.1
000001C68000            000001D4FFFF            1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS01
ADFA208AA36C            AE3C45273A77                           STATUS=TRUNCATED, NOTREUSABLE
1996.362 00:53:10.1     1997.048 15:28:23.5
000001D50000            0000020D3FFF            1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS02
AE3C45273A78            ............                           STATUS=NOTREUSABLE
1997.048 15:28:23.5     ........ ..........

Figure 58. Partial output of the print log map utility (DSNJU004) for member DB1G
Figure 59 on page 566 shows a partial example of output from the DSNJU004 utility. This output contains the BSDS information for the other member of the data sharing group.
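A minimal sketch of the step 6d registration, with hypothetical data set name and placeholder volume and RBA values in the DSNJU003 control statement:

NEWLOG DSNAME=DSNDB0G.DB1G.ARCLG1.A0000007,COPY1VOL=volser,
  UNIT=TAPE,STARTRBA=startrba,ENDRBA=endrba

For a data sharing member, also supply the STARTLRSN and ENDLRSN values.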
Data sharing
Do all other preparatory activities as you would for a single system. Do these activities for each member of the data sharing group.
e. Use the change log inventory utility to adjust the active logs:
1) Use the DELETE option of the change log inventory utility (DSNJU003) to delete all active logs in the BSDS. Use the BSDS listing produced in step 6d on page 565 to determine the active log data set names.
2) Use the NEWLOG statement of the change log inventory utility (DSNJU003) to add the active log data sets to the BSDS. Do not specify a STARTRBA or ENDRBA value in the NEWLOG statement; omitting them indicates to DB2 that the new active logs are empty.
f. If you are using the DB2 distributed data facility, run the change log inventory utility with the DDF statement to update the LOCATION and LUNAME values in the BSDS.
g. Use the print log map utility (DSNJU004) to list the new BSDS contents and ensure that the BSDS correctly reflects the active and archive log data set inventories. In particular, ensure that:
v All active logs show a status of NEW and REUSABLE.
v The archive log inventory is complete and correct (for example, the start and end RBAs are correct).
h. If you are using dual BSDSs, make a copy of the newly restored BSDS data set to the second BSDS data set.
7. Optionally, restore archive logs to disk. Archive logs are typically stored on tape, but restoring them to disk can speed later steps. If you choose this option and the archive log data sets are not cataloged in the primary integrated catalog facility catalog, use the change log inventory utility to update the BSDS. If the archive logs are listed as cataloged in the BSDS, DB2 allocates them by using the integrated catalog, not the unit or volume serial number that is specified in the BSDS. If you are using dual BSDSs, remember to update both copies.
8. Use the DSN1LOGP utility to determine which transactions were in process at the end of the last archive log. Use the following job control language, where yyyyyyyyyyyy is the STARTRBA of the last complete checkpoint within the RBA range on the last archive log from the previous print log map:
//SAMP     EXEC PGM=DSN1LOGP
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//ARCHIVE  DD DSN=last-archive,DISP=(OLD,KEEP),UNIT=TAPE,
//            LABEL=(2,SL),VOL=SER=volser
//SYSIN    DD *
 STARTRBA(yyyyyyyyyyyy) SUMMARY(ONLY)
/*
DSN1LOGP produces a report. For sample output and information about how to read it, see Part 3 of DB2 Utility Guide and Reference. Note whether any utilities were executing at the end of the last archive log; you must determine the appropriate recovery action to take on each table space involved in a utility job. If DSN1LOGP shows that utilities are inflight (PLAN=DSNUTIL), you need SYSUTILX to identify the utility status and determine the recovery approach. See What to do about utilities in progress on page 571.
9. Modify DSNZPxxx parameters:
a. Run the DSNTINST CLIST in UPDATE mode. See Part 2 of DB2 Installation Guide.
b. To defer processing of all databases, select Databases to Start Automatically from panel DSNTIPB. You are presented with panel DSNTIPS. Type DEFER in the first field and ALL in the second field, and press Enter. You are returned to DSNTIPB.
c. To specify where you are recovering, select Operator Functions from panel DSNTIPB. You are presented with panel DSNTIPO. Type RECOVERYSITE in the SITE TYPE field. Press Enter to continue.
d. Optionally, to specify which archive log to use, select Operator Functions from panel DSNTIPB. You are presented with panel DSNTIPO. Type YES in the READ ARCHIVE COPY2 field if you are using dual archive logging and want to use the second copy of the archive logs. Press Enter to continue.
e. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST started in the first step).
At this point, you have the log, but the table spaces have not been recovered. With DEFER ALL, DB2 assumes that the table spaces are unavailable but does the necessary processing to the log. This step also handles the units of recovery in process.
10. Use the change log inventory utility to create a conditional restart control record. In most cases, you can use this form of the CRESTART statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=YES, BACKOUT=YES
where nnnnnnnnn000 equals a value one more than the ENDRBA of the latest archive log.
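For example, if (hypothetically) the latest archive log ends at RBA 000001D4FFFF, you would specify:

CRESTART CREATE,ENDRBA=000001D50000,FORWARD=YES,BACKOUT=YES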
Data sharing
If you are recovering a data sharing group and your logs are not at a single point of consistency, use this form of the CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
where nnnnnnnnnnnn is the LRSN of the last log record to be used during restart. Use the same LRSN for all members of a data sharing group. Determine the ENDLRSN value by using one of the following methods:
v Use the DSN1LOGP summary utility to obtain the ENDLRSN value. In the Summary of Completed Events section, find the lowest LRSN value that is listed in the DSN1213I message for the data sharing group. Use this value for the ENDLRSN in the CRESTART statement.
v Use the print log map utility (DSNJU004) to list the BSDS contents. Find the ENDLRSN of the last log record available for each active member of the data sharing group. Subtract 1 from the lowest ENDLRSN in the data sharing group, and use this value for the ENDLRSN in the CRESTART statement. (In the example in Figure 58 on page 565, that is AE3C45273A77 - 1, which is AE3C45273A76.)
v If only the console logs are available, use the archive offload message, DSNJ003I, to obtain the ENDLRSN. Compare the ending LRSN values for all members' archive logs. Subtract 1 from the lowest LRSN in the data sharing group, and use this value for the ENDLRSN in the CRESTART statement. (In the example in Figure 58 on page 565, that is AE3C45273A77 - 1, which is AE3C45273A76.)
DB2 discards any log information in the bootstrap data set and the active logs with an RBA greater than or equal to nnnnnnnnn000, or an LRSN greater than nnnnnnnnnnnn, as listed in the preceding CRESTART statements.
Use the print log map utility to verify that the conditional restart control record that you created in the previous step is active.
11. Enter the command START DB2 ACCESS(MAINT). You must enter this command if real-time statistics are active and enabled; otherwise, errors or abends can occur during DB2 restart processing and recovery processing (for example, GRECP recovery, LPL recovery, or the RECOVER utility).
Data sharing
If a discrepancy exists among the print log map reports as to the number of members in the group, record the report that shows the highest number of members. (This is an unlikely occurrence.) Start that DB2 subsystem first, using ACCESS(MAINT). DB2 prompts you to start each additional DB2 subsystem in the group. After all additional members are successfully restarted, and if you are going to run single-system data sharing at the recovery site, stop all DB2 subsystems but one by using the STOP DB2 command with MODE(QUIESCE).
If you planned to use the light mode when starting the DB2 group, add the LIGHT parameter to the START command. Start the members that run in LIGHT(NO) mode first, followed by the light mode members. See Preparing for disaster recovery on page 487 for details on using restart light at a recovery site.
Even though DB2 marks all table spaces for deferred restart, log records are written so that in-abort and inflight units of recovery are backed out. In-commit units of recovery are completed, but no additional log records are written at restart to cause this; that happens when the original redo log records are applied by the RECOVER utility.
At the primary site, DB2 probably committed or aborted the inflight units of recovery, but you have no way of knowing.
During restart, DB2 accesses two table spaces that result in DSNT501I, DSNT500I, and DSNL700I resource unavailable messages, regardless of DEFER status. These messages are normal and expected, and you can ignore them. The following return codes can accompany the messages. Other codes are also possible.
00C90081 This return code occurs if there is activity against the object during restart as a result of a unit of recovery or pending writes. In this case, the status shown as a result of DISPLAY is STOP,DEFER.
00C90094 Because the table space is currently only a defined VSAM data set, it is in an unexpected state to DB2.
00C900A9 This code indicates that an attempt was made to allocate a deferred resource.
12. Resolve the indoubt units of recovery. The RECOVER utility, which you will soon invoke, fails on any table space that has indoubt units of recovery; you must resolve them first. Determine the proper action to take (commit or abort) for each unit of recovery. To resolve indoubt units of recovery, see Resolving indoubt units of recovery on page 463. From an install SYSADM authorization ID, enter the RECOVER INDOUBT command for all affected transactions.
13. Recover the catalog and directory. The RECOVER function includes RECOVER TABLESPACE, RECOVER INDEX, or REBUILD INDEX. If you have an image copy of an index, use
RECOVER INDEX. If you do not have an image copy of an index, use REBUILD INDEX to reconstruct the index from the recovered table space.
a. Recover DSNDB01.SYSUTILX. This must be a separate job step.
b. Recover all indexes on SYSUTILX. This must be a separate job step.
c. Your recovery strategy for an object depends on whether a utility was running against it at the time the latest archive log was created. To identify the utilities that were running, you must recover SYSUTILX. You cannot restart a utility at the recovery site that was interrupted at the disaster site; you must use the TERM command to terminate it. The TERM UTILITY command can be used on any object except DSNDB01.SYSUTILX. Determine which utilities were executing and the table spaces involved by performing the following procedure:
1) Enter the DISPLAY UTILITY(*) command, and record the utility and the current phase.
2) Run the DIAGNOSE utility with the DISPLAY SYSUTIL statement. The output consists of information about each active utility, including the table space name (in most instances). It is the only way to correlate the object name with the utility. Message DSNU866I gives information about the utility; DSNU867I gives the database and table space name in USUDBNAM and USUSPNAM respectively.
d. Use the command TERM UTILITY to terminate any utilities in progress on catalog or directory table spaces. See What to do about utilities in progress on page 571 for information about how to recover catalog and directory table spaces on which utilities were running.
e. Recover the rest of the catalog and directory objects, starting with DBD01, in the order shown in the description of the RECOVER utility in Part 2 of DB2 Utility Guide and Reference.
14. Use any method desired to verify the integrity of the DB2 catalog and directory. Migration step 1 in Chapter 1 of DB2 Installation Guide lists one option for verification. The catalog queries in member DSNTESQ of data set DSN810.SDSNSAMP can be used after the work file database is defined and initialized.
15. Define and initialize the work file database:
a. Define temporary work files. Use installation job DSNTIJTM as a model.
b. Issue the command START DATABASE(work-file-database) to start the work file database.
16. If you use data definition control support, recover the objects in the data definition control support database.
17. If you use the resource limit facility, recover the objects in the resource limit control facility database.
18. Modify DSNZPxxx to restart all databases:
a. Run the DSNTINST CLIST in UPDATE mode. See Part 2 of DB2 Installation Guide.
b. From panel DSNTIPB, select Databases to Start Automatically. You are presented with panel DSNTIPS. Type RESTART in the first field and ALL in the second field, and press Enter. You are returned to DSNTIPB.
c. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST started in the first step).
19. Stop and start DB2.
20. Make a full image copy of the catalog and directory.
21. Recover user table spaces and index spaces. See What to do about utilities in progress for information about how to recover table spaces or index spaces on which utilities were running. You cannot restart at the recovery site a utility that was interrupted at the disaster site; use the TERM command to terminate any utilities that are running against user table spaces or index spaces.
a. To determine which, if any, of your table spaces or index spaces are user-managed, perform the following queries.
Table spaces:
SELECT * FROM SYSIBM.SYSTABLEPART WHERE STORTYPE='E';
Index spaces:
SELECT * FROM SYSIBM.SYSINDEXPART WHERE STORTYPE='E';
To allocate user-managed table spaces or index spaces, use the access method services DEFINE CLUSTER command. To find the correct IPREFIX for the DEFINE CLUSTER command, perform the following queries for table spaces and index spaces. Table spaces:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX FROM SYSIBM.SYSTABLEPART WHERE DBNAME=dbname AND TSNAME=tsname ORDER BY PARTITION;
Index spaces:
SELECT IXNAME, PARTITION, IPREFIX FROM SYSIBM.SYSINDEXPART WHERE IXCREATOR=ixcreator AND IXNAME=ixname ORDER BY PARTITION;
Now you can perform the DEFINE CLUSTER command with the correct IPREFIX (I or J) in the data set name:
catname.DSNDBx.dbname.spname.y0001.A00n
where x is C (for VSAM clusters) or D (for VSAM data components), y is the IPREFIX value (either I or J), n is the data set number, and spname is either the table space or index space name. Access method services commands are described in detail in z/OS DFSMS Access Method Services for Catalogs.
b. If your user table spaces or index spaces are STOGROUP-defined, and if the volume serial numbers at the recovery site are different from those at the local site, use ALTER STOGROUP to change them in the DB2 catalog.
c. Recover all user table spaces and index spaces from the appropriate image copies. If you do not copy your indexes, use the REBUILD INDEX utility to reconstruct the indexes.
d. Start all user table spaces and index spaces for read or write processing by issuing the command START DATABASE with the ACCESS(RW) option.
e. Resolve any remaining check-pending states that would prevent COPY execution.
f. Run select queries with known results.
22. Make full image copies of all table spaces and indexes with the COPY YES attribute.
23. Finally, compensate for work lost since the last archive was created by rerunning online transactions and batch jobs.
What to do about utilities in progress: If any utility jobs were running after the last time that the log was offloaded before the disaster, you might need to take
some additional steps. After restarting DB2, the following utilities need only be terminated with the TERM UTILITY command:
v CHECK INDEX
v MERGECOPY
v MODIFY
v QUIESCE
v RECOVER
v RUNSTATS
v STOSPACE
It is preferable to allow the RECOVER utility to reset pending states; however, it is occasionally necessary to use the REPAIR utility to reset them. Do not start the table space with ACCESS(FORCE), because FORCE resets any page set exception conditions described in Database page set control records on page 1120.
For the following utility jobs, perform the actions indicated:
CHECK DATA
Terminate the utility and run it again after recovery is complete.
COPY
After you enter the TERM command, DB2 places a record in the SYSCOPY catalog table indicating that the COPY utility was terminated. This makes it necessary for you to make a full image copy. When you copy your environment at the completion of the disaster recovery scenario, you fulfill that requirement.
LOAD
Find the options you specified in Table 100, and perform the specified actions.
Table 100. Actions when LOAD is interrupted

LOG YES
  If the RELOAD phase completed, recover to the current time. Recover the indexes.
  If the RELOAD phase did not complete, recover to a prior point in time. The SYSCOPY record inserted at the beginning of the RELOAD phase contains the RBA or LRSN.

LOG NO and copy-spec
  If the RELOAD phase completed, the table space is complete after you recover it to the current time. Recover the indexes.
  If the RELOAD phase did not complete, recover the table space to a prior point in time. Recover the indexes.

LOG NO, copy-spec, and SORTKEYS integer (see note 1)
  If the BUILD or SORTBLD phase completed, recover to the current time, and recover the indexes.
  If the BUILD or SORTBLD phase did not complete, recover to a prior point in time. Recover the indexes.

LOG NO
  Recover the table space to a prior point in time. You can use TOCOPY to do this.
Note 1: You must specify a value that is greater than zero for integer. If you specify zero for integer, the SORTKEYS option does not apply.
To avoid extra loss of data in a future disaster situation, run QUIESCE on table spaces before invoking LOAD. This enables you to recover a table space by using TOLOGPOINT instead of TOCOPY.
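A minimal sketch of such a QUIESCE statement, with a hypothetical table space name:

QUIESCE TABLESPACE DSN8D81A.DSN8S81E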
REORG
For a user table space, find the options you specified in Table 101, and perform the specified actions.
Table 101. Actions when REORG is interrupted

LOG YES
  If the RELOAD phase completed, recover to the current time. Recover the indexes.
  If the RELOAD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

LOG NO
  If the BUILD or SORTBLD phase completed, recover to the current time, and recover the indexes.
  If the BUILD or SORTBLD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

SHRLEVEL CHANGE
  If the SWITCH phase completed, terminate the utility. Recover the table space to the current time. Recover the indexes.
  If the SWITCH phase did not complete, recover the table space to the current time. Recover the indexes.

SHRLEVEL REFERENCE
  Same as for SHRLEVEL CHANGE.
For a catalog or directory table space, follow these instructions:
For those table spaces that were using online REORG, find the options you specified in Table 101, and perform the specified actions.
If you have no image copies from immediately before REORG failed, use this procedure:
1. From your DISPLAY UTILITY and DIAGNOSE output, determine what phase REORG was in and which table space it was reorganizing when the disaster occurred.
2. Run RECOVER on the catalog and directory in the order shown in Part 2 of DB2 Utility Guide and Reference. Recover all table spaces to the current time, except the table space that was being reorganized.
If the RELOAD phase of the REORG on that table space had not completed when the disaster occurred, recover the table space to the current time. Because REORG does not generate any log records prior to the RELOAD phase for catalog and directory objects, the RECOVER to current restores the data to the state it was in before the REORG.
If the RELOAD phase completed, perform the following actions:
a. Run DSN1LOGP against the archive log data sets from the disaster site.
b. Find the begin-UR log record for the REORG that failed in the DSN1LOGP output.
c. Run RECOVER with the TOLOGPOINT option on the table space that was being reorganized. Use the URID of the begin-UR record as the TOLOGPOINT value.
3. Recover or rebuild all indexes.
If you have image copies from immediately before REORG failed, run RECOVER with the TOCOPY option to recover the catalog and directory, in the order shown in Part 2 of DB2 Utility Guide and Reference.
Recommendation: Make full image copies of the catalog and directory before you run REORG on them.
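A minimal sketch of a TOCOPY recovery, assuming hypothetical table space and image copy data set names:

RECOVER TABLESPACE DSN8D81A.DSN8S81E
  TOCOPY IMAGCOPY.DSN8D81A.DSN8S81E.D2009001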
v The START DATABASE command is not allowed when LPL or GRECP status exists for the object of the command. It is not necessary to use START DATABASE to clear LPL or GRECP conditions, because you will be running recovery jobs that clear the conditions.
v The START DATABASE command with ACCESS(FORCE) is not allowed.
v Down-level detection is disabled.
v Log archiving is disabled.
v Real-time statistics are disabled.
1. While your primary site continues its usual workload, send a copy of the primary site's active log, archive logs, and BSDS to the tracker site. Send full image copies for the following objects:
v Table spaces or partitions that are reorganized, loaded, or repaired with the LOG NO option after the latest recovery cycle.
v Objects that, after the latest recovery cycle, have been recovered to a point in time.
Recommendation: If you are taking incremental image copies, run the MERGECOPY utility at the primary site before sending the copy to the tracker site.
2. At the tracker site, restore the BSDS that was received from the primary site:
v Locate the BSDS in the latest archive log that is now at the tracker site.
v Use the change log inventory utility (DSNJU003) to register this archive log in the archive log inventory of the new BSDS.
v Use the change log inventory utility (DSNJU003) to register the primary site's active log in the new BSDS.
For more details about restoring the BSDS, see step 6 of Remote site recovery from a disaster at the local site on page 563.
3. Use the change log inventory utility (DSNJU003) with the following CRESTART control statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnn000 equals the RBA at which the latest archive log record ends, plus one. You must not specify the RBA at which an archive log begins, because you cannot cold start or skip logs in tracker mode.
Data sharing
If you are recovering a data sharing group, you must use the following CRESTART control statement on all members of the data sharing group. The ENDLRSN value must be the same for all members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnnnnn is the lowest LRSN of all the members, to be read during restart. You must specify one of the following values for the ENDLRSN:
v If you receive the ENDLRSN from the output of the print log map utility (DSNJU004) or from the console logs by using message DSNJ003I, you must use ENDLRSN-1 as the input to the conditional restart.
v If you receive the ENDLRSN from the output of the DSN1LOGP utility (DSN1213I message), you can use the value that is displayed.
The ENDLRSN or ENDRBA value indicates the end log point for data recovery and for truncating the archive log. With ENDLRSN, the missing log records between the lowest and highest ENDLRSN values for all the members are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all DB2 coupling facility structures before restarting the tracker members.
5. If you used DSN1COPY to create a copy of SYSUTILX during the last tracker cycle, restore this copy with DSN1COPY.
6. At the tracker site, restart DB2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
7. At the tracker site, run the RESTORE SYSTEM utility with the LOGONLY option to apply the logs (both archive and active) to the data at the tracker site. See Media failures during LOGONLY recovery on page 580 for information about what to do if a media failure occurs during LOGONLY recovery.
8. If the RESTORE SYSTEM utility issues a return code of 4, use DSN1COPY to make a copy of SYSUTILX and the indexes that are associated with SYSUTILX before you recover or rebuild those objects. The RESTORE SYSTEM utility issues a return code of 4 if applying the log marks one or more DB2 objects as RECP or RBDP.
9. Restart DB2 at the tracker site.
10. Issue the DISPLAY DATABASE RESTRICT command to display objects that are marked RECP, RBDP, or LPL, and identify which objects are in a utility progress state (such as UTUT or UTRO). Run the RECOVER or REBUILD INDEX utility on these objects, or record which objects are in an exception state so that you can recover them at a later time. The exception states of these objects will be lost in the next recovery cycle.
11. After all recovery has completed at the tracker site, shut down the tracker site DB2. This is the end of the tracker site recovery cycle. If you choose to, you can stop and start the tracker DB2 several times before completing a recovery cycle.
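The step 7 invocation needs no object list; because RESTORE SYSTEM operates at the system level, its control statement is simply:

RESTORE SYSTEM LOGONLY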
v Locate the BSDS in the latest archive log that is now at the tracker site.
v Use the change log inventory utility (DSNJU003) to register this archive log in the archive log inventory of the new BSDS.
v Use the change log inventory utility (DSNJU003) to register the primary site's active log in the new BSDS.
For more details about restoring the BSDS, see step 6 of Remote site recovery from a disaster at the local site on page 563.
3. Use the change log inventory utility (DSNJU003) with the following CRESTART control statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnn000 equals the ENDRBA of the latest archive log, plus 1. You must not specify STARTRBA, because you cannot cold start or skip logs in a tracker system.
Data sharing
If you are recovering a data sharing group, you must use the following CRESTART control statement on all members of the data sharing group. The ENDLRSN value must be the same for all members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnnnnn is the lowest ENDLRSN of all the members, to be read during restart. You must specify one of the following values for the ENDLRSN:
v If you receive the ENDLRSN from the output of the print log map utility (DSNJU004) or from the console logs by using message DSNJ003I, you must use ENDLRSN-1 as the input to the conditional restart.
v If you receive the ENDLRSN from the output of the DSN1LOGP utility (DSN1213I message), you can use the value that is displayed.
The ENDLRSN or ENDRBA value indicates the end log point for data recovery and for truncating the archive log. With ENDLRSN, the missing log records between the lowest and highest ENDLRSN values for all the members are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all DB2 coupling facility structures before restarting the tracker members.
5. At the tracker site, restart DB2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
6. At the tracker site, submit RECOVER jobs to recover database objects. Run the RECOVER utility with the LOGONLY option on all database objects that do not require recovery from an image copy. See Media failures during LOGONLY recovery on page 580 for information about what to do if a media failure occurs during LOGONLY recovery.
You must recover database objects as the following procedure specifies:
a. Restore the full image copy or DSN1COPY of SYSUTILX.
Recovering SYSUTILX: If you are doing a LOGONLY recovery on SYSUTILX from a previous DSN1COPY backup, make another DSN1COPY
copy of that table space after the LOGONLY recovery is complete and before you recover any other catalog or directory objects.
After you recover SYSUTILX and either recover or rebuild its indexes, and before you recover other system and user table spaces, find out what utilities were running at the primary site.
b. Recover the catalog and directory. See DB2 Utility Guide and Reference for information about the order of recovery for the catalog and directory objects.
User-defined catalog indexes: Unless you require them for catalog query performance, it is not necessary to rebuild user-defined catalog indexes until the tracker DB2 becomes the takeover DB2. However, if you are recovering user-defined catalog indexes, do the recovery in this step.
c. If needed, recover other system data, such as the data definition control support table spaces and the resource limit facility table spaces.
d. Recover user data and, optionally, rebuild your indexes. It is not necessary to rebuild indexes unless you intend to run dynamic queries on the data at the tracker site.
Because this is a tracker site, DB2 stores the conditional restart ENDRBA or ENDLRSN in the page set after each recovery completes successfully. By storing the log truncation value in the page set, DB2 ensures that it does not skip any log records between recovery cycles.
7. Enter DISPLAY UTIL(*) for a list of currently running utilities.
8. Run the DIAGNOSE utility with the DISPLAY SYSUTIL statement to find out the names of the objects on which the utilities are running. Installation SYSOPR authority is required.
9. Perform the following actions for objects at the tracker site on which utilities are pending. Restrictions apply to these objects because DB2 prevents you from using the TERM UTILITY command to remove pending statuses at a tracker site.
v If a LOAD, REORG, REPAIR, or COPY utility is in progress on any catalog or directory object at the primary site, shut down DB2. You cannot continue recovering through the list of catalog and directory objects; therefore, you cannot recover any user data. At the next recovery cycle, send a full image copy of the object from the primary site. At the tracker site, use the RECOVER utility to restore the object.
v If a LOAD, REORG, REPAIR, or COPY utility is in progress on any user data, at the next recovery cycle, send a full image copy of the object from the primary site. At the tracker site, use the RECOVER utility to restore the object.
v If an object is in the restart pending state, use LOGONLY recovery to recover the object when that object is no longer in restart pending state (see the sketch after this list).
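A minimal sketch of such a LOGONLY recovery, with a hypothetical table space name:

RECOVER TABLESPACE DSN8D81A.DSN8S81E LOGONLY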
Data sharing
If read/write shared data (GBP-dependent data) is in the advisory recovery pending state, the tracker DB2 performs recovery processing. Because the tracker DB2 always performs a conditional restart, the postponed indoubt units of recovery are not recognized after the tracker DB2 restarts.
10. After all recovery has completed at the tracker site, shut down the tracker site DB2. This is the end of the tracker site recovery cycle. If you choose to, you can stop and start the tracker DB2 several times before completing a recovery cycle.
Data sharing group restarts
During recovery cycles, the first member that comes up puts the ENDLRSN value in the shared communications area (SCA) of the coupling facility. If an SCA failure occurs during a recovery cycle, you must go through the recovery cycle again, using the same ENDLRSN value for your conditional restart.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
4. Start DB2 at the same RBA or ENDLRSN that you used in the most recent tracker site recovery cycle. Specify FORWARD=YES and BACKOUT=YES in the CRESTART statement; this takes care of uncommitted work.
5. Restart the objects that are in GRECP or LPL status by using the following START DATABASE command:
START DATABASE(*) SPACENAM(*)
6. If you used DSN1COPY to create a copy of SYSUTILX in the last recovery cycle, use DSN1COPY to restore that copy.
7. Terminate any in-progress utilities by using the following procedure:
a. Enter the command DISPLAY UTIL(*).
b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of objects on which utilities are being run.
c. Terminate in-progress utilities by using the command TERM UTIL(*). See What to do about utilities in progress on page 571 for more information about how to terminate in-progress utilities and how to recover an object on which a utility was running.
8. Rebuild indexes, including IBM and user-defined indexes on the DB2 catalog and user-defined indexes on table spaces.
Recovering at a tracker site that uses the RECOVER utility: If you use the RECOVER utility in the recovery cycles at your tracker site, use the following procedure after a disaster to make the tracker site the takeover site:
1. Restore the BSDS, and register the archive log from the last archive you received from the primary site.
2. For scenarios other than data sharing, continue with the next step.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
3. Ensure that the DEFER ALL and TRKSITE NO subsystem parameters are specified.
4. If this is a non-data-sharing DB2, the log truncation point varies depending on whether you have received more logs from the primary site since the last recovery cycle:
v If you received no more logs from the primary site:
Start DB2 using the same ENDRBA that you used on the last tracker cycle. Specify FORWARD=YES and BACKOUT=YES; this takes care of uncommitted work. If you fully recovered the objects during the previous cycle, they are current except for any objects that had outstanding units of recovery during restart. Because the previous cycle specified NO for FORWARD and BACKOUT and you have now specified YES, affected data sets are placed in LPL. Restart the objects that are in LPL status by using the following START DATABASE command:
START DATABASE(*) SPACENAM(*)
After you issue the command, all table spaces and indexes that were previously recovered are now current. Remember to rebuild any indexes that were not recovered during the previous tracker cycle, including user-defined indexes on the DB2 catalog.
v If you received more logs from the primary site:
Start DB2 using the truncated RBA nnnnnnnnn000, which is the ENDRBA + 1 of the latest archive log. Specify FORWARD=YES and BACKOUT=YES. Run your recoveries as you did during recovery cycles.
Data sharing
You must restart every member of the data sharing group by using the following CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
In this statement, nnnnnnnnnnnn is the LRSN of the last log record to be used during restart. See step 3 of Using the RECOVER utility to establish a recovery cycle on page 577 for more information about determining this value. The takeover DB2 subsystems must specify conditional restart with a common ENDLRSN value so that all remote members logically truncate the logs at a consistent point.
5. As described for a tracker recovery cycle, recover SYSUTILX from an image copy from the primary site or from a previous DSN1COPY copy that was taken at the tracker site.
6. Terminate any in-progress utilities by using the following procedure:
a. Enter the command DISPLAY UTIL(*).
b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of objects on which utilities are being run.
c. Terminate in-progress utilities by using the command TERM UTIL(*). See What to do about utilities in progress on page 571 for more information about how to terminate in-progress utilities and how to recover an object on which a utility was running.
7. Continue with your recoveries, either with the LOGONLY option or with image copies. Remember to rebuild indexes, including IBM and user-defined indexes on the DB2 catalog and user-defined indexes on table spaces; a minimal REBUILD INDEX sketch follows.
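A minimal sketch of a REBUILD INDEX statement that rebuilds all indexes on one table space, with a hypothetical name:

REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E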
To use data mirroring for disaster recovery, you must mirror data from your local site with a method that does not reproduce a rolling disaster at your recovery site. To recover DB2 with data integrity, you must use volumes that end at a consistent point in time for each DB2 subsystem or data sharing group. Mirroring a rolling disaster causes volumes at your recovery site to end over a span of time rather than at one single point. Figure 60 shows how a rolling disaster can cause data to become inconsistent between two subsystems.
Figure 60. A rolling disaster. The figure shows a primary site and a secondary site, each with a log device and a database device. Partway through the mirrored sequence (1. 12:00 log update; 2. 12:01 update database; 3. 12:02 mark log complete), the connection between the sites is severed.
Example: In a rolling disaster, the following events at the primary site cause data inconsistency at your recovery site. This data inconsistency example follows the same scenario that Figure 60 depicts.
1. A table space is updated in the buffer pool (11:58).
2. The log record is written to disk on logical storage subsystem 1 (12:00).
3. Logical storage subsystem 2 fails (12:01).
4. The update to the table space is externalized to logical storage subsystem 2 but is not written, because subsystem 2 failed (12:02).
5. The log record that marks the table space update as complete is written to disk on logical storage subsystem 1 (12:03).
6. Logical storage subsystem 1 fails (12:04).
Because the logical storage subsystems do not fail at the same point in time, they contain inconsistent data. In this scenario, the log indicates that the update is applied to the table space, but the update is not applied to the data volume that holds this table space.
Attention: Any disaster recovery solution that uses data mirroring must guarantee that all volumes at the recovery site contain data for the same point in time.
Consistency groups
Generally, a consistency group is a collection of volumes that contain consistent, related data. This data can span logical storage subsystems and disk subsystems. For DB2 specifically, a consistency group contains an entire DB2 subsystem or an entire DB2 data sharing group. The following DB2 elements comprise a consistency group:
v Catalog tables
v Directory tables
v BSDS
v Logs
v All user data
v ICF catalogs
Additionally, all objects within a consistency group must represent the same point in time in at least one of the following situations:
v At the time of a backup
v After a normal DB2 restart
You can use various methods to create consistency groups. The following DB2 services enable you to create consistency groups:
v XRC I/O timestamping and system data mover
v FlashCopy consistency groups
v GDPS freeze policies
v The DB2 SET LOG SUSPEND command
v The DB2 BACKUP SYSTEM utility
When a rolling disaster strikes your primary site, consistency groups guarantee that all volumes at the recovery site contain data for the same point in time. In a data mirroring environment, you must perform both of the following actions for each consistency group that you maintain:
v Mirror data to the secondary volumes in the same sequence that DB2 writes data to the primary volumes. In many processing situations, DB2 must complete one write operation before it begins another write operation on a different disk group or a different storage server. A write operation that depends on a previous write operation is called a dependent write. Do not mirror a dependent write if you have not mirrored the write operation on which the dependent write depends. If you mirror data out of sequence, your recovery site will contain inconsistent data that you cannot use for disaster recovery.
v Temporarily suspend and queue write operations to create a group point of consistency when an error occurs between any pair of primary and secondary volumes. When an error occurs that prevents the update of a secondary volume in a single-volume pair, this error might mark the beginning of a rolling disaster. To prevent your secondary site from mirroring a rolling disaster, you must suspend and queue data mirroring with the following steps after a write error between any pair:
1. Suspend and queue all write operations in the volume pair that experiences a write error.
2. Invoke automation that temporarily suspends and queues data mirroring to all your secondary volumes.
3. Save data at the secondary site at a point of consistency.
4. If a rolling disaster does not strike your primary site, resume normal data mirroring after some amount of time that you have defined. If a rolling
disaster does strike your primary site, follow the recovery procedure in Recovering in a data mirroring environment.
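Of the methods listed above, the SET LOG command pair is the simplest to sketch: suspend logging and update activity, take the point-in-time copy or freeze the mirror, and then resume:

-SET LOG SUSPEND
-SET LOG RESUME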
Data sharing
For data sharing groups, you must remove old information from the coupling facility.
a. Enter the following z/OS command to display the structures for this data sharing group:
D XCF,STRUCTURE,STRNAME=grpname*
b. For group buffer pools and the lock structure, enter the following command to force off the connections in those structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
c. Delete all the DB2 coupling facility structures by using the following command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
3. If you are using the distributed data facility, set LOCATION and LUNAME in the BSDS to values that are specific to your new primary site. To set LOCATION and LUNAME, run the stand-alone utility DSNJU003 (change log inventory) with the following control statement:
DDF LOCATION=locname, LUNAME=luname
4. Start all DB2 members using local DSNZPARM data sets and perform a normal restart.
Data sharing
For data sharing groups, DB2 performs group restart. Shared data sets are set to GRECP (group buffer pool RECOVER-pending) status, and pages are added to the LPL (logical page list).
5. For scenarios other than data sharing, continue to the next step.
Data sharing
For data sharing groups, perform the following procedure:
a. Display all data sets with GRECP or LPL (logical page list) status by using the following DB2 command:
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT(GRECP, LPL) LIMIT(*)
Record the output that this command generates.
b. Start the DB2 directory with the following DB2 command:
-START DATABASE(DSNDB01) SPACENAM(*)
6. Use the following DB2 command to display all utilities that the failure interrupted:
-DISPLAY UTILITY(*)
If utilities are pending, record the output that this command generates, and continue to step 7. You cannot restart utilities at a recovery site; you will terminate these utilities in step 8. If no utilities are pending, continue to step 9.
7. Use the DIAGNOSE utility to access the SYSUTILX directory table. This directory table is not listed in DB2 SQL Reference, because you can access it only when you use the DIAGNOSE utility. (The DIAGNOSE utility is normally intended to be used under the direction of IBM Software Support.) Use the following control statement when you run the DIAGNOSE utility:
DIAGNOSE DISPLAY SYSUTILX END DIAGNOSE
Record the phase in which each pending utility was interrupted, and record the object on which each utility was operating.
8. Terminate all pending utilities with the following command:
-TERM UTILITY(*)
9. For scenarios other than data sharing, continue to the next step.
Data sharing: For data sharing groups, use the following START DATABASE command on each database that contains objects in GRECP or LPL status:
-START DATABASE(database) SPACENAM(*)
When you use the START DATABASE command to recover objects, you do not need to provide DB2 with image copies.
Tip: Use up to 10 START DATABASE commands for each DB2 subsystem to increase the speed at which DB2 completes this operation. Multiple commands that run in parallel complete faster than a single command that specifies the same databases.
10. Start all remaining database objects with the following START DATABASE command:
-START DATABASE(*) SPACENAM(*)
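For example, following the preceding tip, you might divide the remaining work among several commands that are entered in quick succession, each naming a different set of databases (the database names here are hypothetical):

-START DATABASE(DBPAYRL) SPACENAM(*)
-START DATABASE(DBACCTG) SPACENAM(*)
-START DATABASE(DBINVTY) SPACENAM(*)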
11. For each object that the LOAD utility has placed in a restrictive status, take one of the following actions:
v If the object was a target of a LOAD utility control statement that specified SHRLEVEL CHANGE, restart the LOAD utility on this object at your convenience. This object contains valid data.
v If the object was a target of a LOAD utility control statement that specified SHRLEVEL REFERENCE and the LOAD job was interrupted before the RELOAD phase, rebuild the indexes on this object.
v If the object was a target of a LOAD utility control statement that specified SHRLEVEL REFERENCE and the LOAD job was interrupted during or after the RELOAD phase, you must recover this object to a point in time before this utility was run.
v Otherwise, recover the object to a point in time before the LOAD job was run.
12. For each object that the REORG utility has placed in a restrictive status, take one of the following actions:
v When the object was a target of a REORG utility control statement that specified SHRLEVEL NONE:
  If the REORG job was interrupted before the RELOAD phase, no further action is required. This object contains valid data, and the indexes on this object are valid.
  If the REORG job was interrupted during the RELOAD phase, you must recover this object to a point in time before this utility was run.
  If the REORG job was interrupted after the RELOAD phase, rebuild the indexes on the object.
v When the object was a target of a REORG utility control statement that did not specify SHRLEVEL NONE:
  If the REORG job was interrupted before the SWITCH phase, no further action is required. This object contains valid data, and the indexes on this object are valid.
  If the REORG job was interrupted during the SWITCH phase, no further action is required. This object contains valid data, and the indexes on this object are valid.
  If the REORG job was interrupted after the SWITCH phase, you might need to rebuild non-partitioned secondary indexes.
Applications
The following IMS and TSO applications are running at Seattle and accessing both local and remote data:
v IMS application, IMSAPP01, at Seattle, accessing local data and remote data by DRDA access at San Jose, which is accessing remote data on behalf of Seattle by DB2 private protocol access at Los Angeles.
v TSO application, TSOAPP01, at Seattle, accessing data by DRDA access at San Jose and at Los Angeles.
Threads
The following threads are described and keyed to Figure 61 on page 589. Database access threads (DBATs) access data on behalf of a thread (either allied or DBAT) at a remote requester.
v Allied IMS thread A at Seattle accessing data at San Jose by DRDA access. DBAT at San Jose accessing data for Seattle by DRDA access 1 and requesting data at Los Angeles by DB2 private protocol access 2. DBAT at Los Angeles accessing data for San Jose by DB2 private protocol access 2.
v Allied TSO thread B at Seattle accessing local data and remote data at San Jose and Los Angeles, by DRDA access. DBAT at San Jose accessing data for Seattle by DRDA access 3.
Figure 61. Resolving indoubt threads. Results of issuing DIS THD TYPE(ACTIVE) at each DB2 system.
The results of issuing the DISPLAY THREAD TYPE(ACTIVE) command to display the status of threads at all DB2 locations are summarized in the boxes of Figure 61. The logical unit of work IDs (LUWIDs) have been shortened for readability:
v LUWID=15 would be IBM.SEADB21.15A86A876789.0010
v LUWID=16 would be IBM.SEADB21.16B57B954427.0003
For the purposes of this section, assume that both applications have updated data at all DB2 locations. In the following problem scenarios, the error occurs after the coordinator has recorded the commit decision, but before the affected participants have recorded the commit decision. These participants are therefore indoubt.
System action: At SEA, an IFCID 209 trace record is written. After the alert has been generated, and after the message has been displayed, the thread completes the commit, which includes the DBAT at SJ 3. Concurrently, the thread is added to the list of threads for which the SEA DB2 has an indoubt resolution responsibility. The thread appears in a display thread report for indoubt threads. The thread also appears in a display thread report for active threads until the application terminates. The TSO application is told that the commit succeeded. If the application continues and processes another SQL request, the request is rejected with an SQL code indicating that the application must roll back before any more SQL requests can be processed. This is to ensure that the application does not proceed with an assumption based upon data retrieved from LA, or with the expectation that cursor positioning at LA is still intact. At LA, an IFCID 209 trace record is written. After the alert is generated and the message displayed, the DBAT 4 is placed into the indoubt state. All locks remain held until resolution occurs. The thread appears in a display thread report for indoubt threads. The DB2 systems at both SEA and LA periodically attempt to reconnect and automatically resolve the indoubt thread. If the communication failure affects only the session being used by the TSO application, and other sessions are available, automatic resolution occurs in a relatively short time. At this time, message DSNL407 is displayed by both DB2 subsystems.
Operator action: If message DSNL407 or DSNL415 for the thread identified in message DSNL405 does not appear in a reasonable period of time, call the system programmer. A communication failure is making database resources unavailable.
System programmer action: Determine and correct the cause of the communication failure. When it is corrected, automatic resolution of the indoubt thread occurs within a short time. If the failure cannot be corrected for a long time, call the database administrator. The database administrator might want to make a heuristic decision to release the database resources held for the indoubt thread. See Making a heuristic decision.
the DB2 indoubt thread display report at LA. Then, have an authorized person at SEA perform one of the following actions:
v If the coordinator DB2 subsystem is active, or can be started, request a display thread report for indoubt threads, specifying the LUWID of the thread. (Remember that the token used at LA is different from the token used at SEA.) If there is no report entry for the LUWID, the proper action is to abort. If there is an entry for the LUWID, it shows the proper action to take.
v If the coordinator DB2 subsystem is not active and cannot be started, and if statistics class 4 was active when DB2 was active, search the SEA SMF data for an IFCID 209 event entry that contains the indoubt LUWID. This entry indicates whether the commit decision was commit or abort.
v If statistics class 4 is not available, run the DSN1LOGP utility at SEA, requesting a summary report. The volume of log data to be searched can be restricted if you can determine the approximate SEA log RBA value in effect at the time of the communication failure. A DSN1LOGP entry in the summary report for the indoubt LUWID indicates whether the decision was commit or abort.
After determining the correct action to take, issue the RECOVER INDOUBT command at the LA DB2 subsystem, specifying the LUWID and the correct action.
System action: Issuing the RECOVER INDOUBT command at LA results in committing or aborting the indoubt thread. Locks are released. The thread does not disappear from the indoubt thread display until resolution with SEA is completed. The RECOVER INDOUBT report shows that the thread is either committed or aborted by a heuristic decision. An IFCID 203 trace record is written, recording the heuristic action.
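For example, if the DSN1LOGP summary report showed that the SEA decision for this thread was commit, the command entered at the LA DB2 might look like the following sketch (the LUWID is the full form of the shortened LUWID=16 that is used in this section):

-RECOVER INDOUBT ACTION(COMMIT) LUWID(IBM.SEADB21.16B57B954427.0003)

Specify ACTION(ABORT) instead if the coordinator's log shows that the decision was abort.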
Recovery from a DB2 outage at a requester that results in a DB2 cold start
Problem: The abnormal termination of the SEA DB2 has left the two DBATs at SJ 1, 3 and the LUWID=16 DBAT at LA 4 indoubt. The LUWID=15 DBAT at LA 2, connected to SJ, is waiting for the SJ DB2 to communicate the final decision. The IMS subsystem at SEA is operational and has the responsibility of resolving indoubt units with the SEA DB2.
Symptom: The DB2 subsystem at SEA is started with a conditional restart record in the BSDS indicating a cold start:
v When the IMS subsystem reconnects, it attempts to resolve the indoubt thread identified in IMS as NID=A5. IMS has a resource recovery element (RRE) for this thread. The SEA DB2 informs IMS that it has no knowledge of this thread. IMS does not delete the RRE, and it can be displayed by using the IMS DISPLAY OASN command. The SEA DB2 also:
  Generates message DSN3005 for each IMS RRE for which DB2 has no knowledge.
  Generates an IFCID 234 trace event.
v When the DB2 subsystems at SJ and LA reconnect with SEA, each detects that the SEA DB2 has cold started. Both the SJ DB2 and the LA DB2:
  Display message DSNL411.
  Generate alert A001.
  Generate an IFCID 204 trace event.
v A display thread report of indoubt threads at both the SJ and LA DB2 subsystems shows the indoubt threads and indicates that the coordinator has cold started.
System action: The DB2 subsystems at both SJ and LA accept the cold start connection from SEA. Processing continues, waiting for a heuristic decision to resolve the indoubt threads.
System programmer action: Call the database administrator.
Operator action: Call the database administrator.
Database administrator action: At this point, neither the SJ nor the LA administrator knows whether the SEA coordinator was a participant of another coordinator. In this scenario, the SEA DB2 subsystem originated LUWID=16. However, it was a participant for LUWID=15, which was being coordinated by IMS. Also not known to the administrator at LA is the fact that SEA distributed the LUWID=16 thread to SJ, where it is also indoubt. Likewise, the administrator at SJ does not know that LA has an indoubt thread for the LUWID=16 thread. It is important that both SJ and LA make the same heuristic decision. It is also important that the administrators at SJ and LA determine the originator of the two-phase commit. The recovery log of the originator indicates whether the decision was commit or abort. The originator might have more accessible functions to determine the decision. Even though the SEA DB2 cold started, you might be able to determine the decision from its recovery log. Or, if the failure occurred before the decision was recorded, you might be able to determine the name of the coordinator, if the SEA DB2 was a participant.
A summary report of the SEA DB2 recovery log can be provided by execution of the DSN1LOGP utility. The LUWID contains the name of the logical unit (LU) where the distributed logical unit of work originated. This logical unit is most likely in the system that originated the two-phase commit. If an application is distributed, any distributed piece of the application can initiate the two-phase commit. In this type of application, the originator of two-phase commit can be at a different system than the one identified by the LUWID. With DB2 private protocol access, the two-phase commit can flow only from the system containing the application that initiates distributed SQL processing. In most cases, this is where the application originates. The administrator must determine whether the LU name contained in the LUWID is the same as the LU name of the SEA DB2 subsystem. If it is not (in this example, it is), then the SEA DB2 is a participant in the logical unit of work and is being coordinated by a remote system. You must communicate with that system and request that facilities of that system be used to determine whether the logical unit of work is to be committed or aborted. If the LUWID contains the LU name of the SEA DB2 subsystem, then the logical unit of work originated at SEA and is either an IMS, CICS, TSO, or BATCH allied thread of the SEA DB2. The display thread report for indoubt threads at a DB2 participant includes message DSNV458 if the coordinator is remote. This line provides external information provided by the coordinator to assist in identifying the thread. A DB2 coordinator provides the following identifier:
connection-name.correlation-id
where connection-name is:
v SERVER: the thread represents a remote application to the DB2 coordinator and uses DRDA access.
v BATCH: the thread represents a local batch application to the DB2 coordinator.
Anything else represents an IMS or CICS connection name. The thread represents a local application, and the commit coordinator is the IMS or CICS system that uses this connection name.
In our example, the administrator at SJ sees that both indoubt threads have a LUWID with the same LU name as the SEA DB2 LU name and, furthermore, that one thread (LUWID=15) is an IMS thread and the other thread (LUWID=16) is a batch thread. The LA administrator sees that the LA indoubt thread (LUWID=16) originates at SEA DB2 and is a batch thread. The originator of a DB2 batch thread is DB2. To determine the commit or abort decision for the LUWID=16 indoubt threads, the SEA DB2 recovery log must be analyzed, if possible. Run the DSN1LOGP utility against the SEA DB2 recovery log, looking for the LUWID=16 entry. There are three possibilities:
1. No entry is found. That portion of the DB2 recovery log was not available.
2. An entry is found but is incomplete.
3. An entry is found, and the status is committed or aborted.
In the third case, the heuristic decision at SJ and LA for indoubt thread LUWID=16 is determined by the status indicated in the SEA DB2 recovery log. In the other two cases, the recovery procedure that was used when cold starting DB2 is important.
If recovery was to a previous point in time, the correct action is to abort. If recovery included repairing the SEA DB2 database, the SEA administrator might know what decision to make. The recovery logs at SJ and LA can help determine what activity took place. If it can be determined that updates were performed at either SJ, LA, or both (but not SEA), then if both SJ and LA take the same heuristic action, there should be no data inconsistency. If updates were also performed at SEA, then looking at the SEA data might help determine what action to take. In any case, both SJ and LA should make the same decision.
For the indoubt thread with LUWID=15 (the IMS coordinator), there are several alternative paths to recovery. The SEA DB2 has been restarted. When it reconnects with IMS, message DSN3005 is issued for each thread that IMS is trying to resolve with DB2. The message indicates that DB2 has no knowledge of the thread that is identified by the IMS-assigned NID. The outcome for the thread, commit or abort, is included in the message. Trace event IFCID=234 is also written to statistics class 4, containing the same information. If there is only one such message, or one such entry in statistics class 4, then the decision for indoubt thread LUWID=15 is known and can be communicated to the administrator at SJ. If there are multiple such messages, or multiple such trace events, you must match the IMS NID with the network LUWID. Again, DSN1LOGP should be used to analyze the SEA DB2 recovery log, if possible. There are now four possibilities:
1. No entry is found. That portion of the DB2 recovery log was not available.
2. An entry is found but is incomplete because of lost recovery log.
3. An entry is found, and the status is indoubt.
4. An entry is found, and the status is committed or aborted.
In the fourth case, the heuristic decision at SJ for the indoubt thread LUWID=15 is determined by the status indicated in the SEA DB2 recovery log. If an entry is found whose status is indoubt, DSN1LOGP also reports the IMS NID value. The NID is the unique identifier for the logical unit of work in IMS and CICS. Knowing the NID allows correlation to the DSN3005 message, or to the IFCID 234 trace event, which provides the correct decision. If an incomplete entry is found, the NID might or might not have been reported by DSN1LOGP. If it was, use it as previously discussed. If no NID is found, or the SEA DB2 has not been started, or reconnecting to IMS has not occurred, then the correlation-id that IMS uses to correlate the IMS logical unit of work to the DB2 thread must be used in a search of the IMS recovery log. The SEA DB2 provided this value to the SJ DB2 when distributing the thread to SJ. The SJ DB2 displays this value in the report generated by DISPLAY THREAD TYPE(INDOUBT). For IMS, the correlation-id is:
pst#.psbname
Recovery from a DB2 outage at a server that results in a DB2 cold start
Problem: This problem is similar to Recovery from a DB2 outage at a requester that results in a DB2 cold start on page 592. If the DB2 subsystem at SJ is cold started instead of the DB2 at SEA, then the LA DB2 has the LUWID=15 2 thread indoubt. The administrator would see that this thread did not originate at SJ, but did originate at SEA. To determine the commit or abort action, the LA administrator would request that DISPLAY THREAD TYPE(INDOUBT) be issued at the SEA DB2, specifying LUWID=15. IMS would not have any indoubt status for this thread, because it would complete the two-phase commit process with the SEA DB2. As described in Communication failure recovery on page 589, the DB2 at SEA tells the application that the commit succeeded.
When a participant cold starts, a DB2 coordinator continues to include in the display of indoubt threads all committed threads where the cold-starting participant was believed to be indoubt. These entries must be explicitly purged by issuing the RESET INDOUBT command. If a participant has an indoubt thread that cannot be resolved because of a coordinator cold start, it can request a display of indoubt threads at the DB2 coordinator to determine the correct action.
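For example, to purge the entry for the LUWID=16 thread at the coordinator after the participant has cold started, an authorized operator might enter a command like the following (the LUWID shown is the full form used earlier in this section):

-RESET INDOUBT LUWID(IBM.SEADB21.16B57B954427.0003)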
Figure 62. The damaged portion of the log. A log error occurs in the log RBA range X to Y, before the end of the log.
2. DB2 cannot skip over the damaged portion of the log and continue restart processing. Instead, you restrict processing to only a part of the log that is error free. For example, the damage shown in Figure 62 occurs in the log RBA range from X to Y. You can restrict restart to all of the log before X; then changes later than X are not made. Or you can restrict restart to all of the log after Y; then changes between X and Y are not made. In either case, some amount of data is inconsistent.
3. You identify the data that is made inconsistent by your restart decision. With the SUMMARY option, the DSN1LOGP utility scans the accessible portion of the log and identifies work that must be done at restart, namely, the units of recovery to be completed and the page sets that they modified. (For instructions on using DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.) Because a portion of the log is inaccessible, the summary information might not be complete. In some circumstances, your knowledge of work in progress is needed to identify potential inconsistencies.
4. You use the CHANGE LOG INVENTORY utility to identify the portion of the log to be used at restart, and to tell whether to bypass any phase of recovery. You can choose to do a cold start and bypass the entire log.
5. You restart DB2. Data that is unaffected by omitted portions of the log is available for immediate access.
6. Before you allow access to any data that is affected by the log damage, you resolve all data inconsistencies. That process is described under Resolving inconsistencies resulting from a conditional restart on page 622.
Where to start: The specific procedure depends on the phase of restart that was in control when the log problem was detected. On completion, each phase of restart writes a message to the console. You must find the last of those messages in the console log. The next phase after the one identified is the one that was in control when the log problem was detected. Accordingly, start at one of the following sections:
v Log initialization or current status rebuild failure recovery on page 599
v Failure during forward log recovery on page 608
v Failure during backward log recovery on page 613
As an alternative, determine which, if any, of the following messages was last received and follow the procedure for that message. Other DSN messages can be issued as well.
Message ID   Procedure to use
DSNJ001I     Log initialization or current status rebuild failure recovery on page 599
DSNJ100I     Unresolvable BSDS or log data set problem during restart on page 616
DSNJ107I     Unresolvable BSDS or log data set problem during restart on page 616
DSNJ119I     Unresolvable BSDS or log data set problem during restart on page 616
DSNR002I     None. Normal restart processing can be expected.
DSNR004I     Failure during forward log recovery on page 608
DSNR005I     Failure during backward log recovery on page 613
DSNR006I     None. Normal restart processing can be expected.

Log initialization or current status rebuild failure recovery
Another scenario (Failure resulting from total or excessive loss of log data on page 619) provides information to use if you determine (by using Log initialization or current status rebuild failure recovery) that an excessive amount (or all) of DB2 log information (BSDS, active, and archive logs) has been lost. The last scenario in this chapter (Resolving inconsistencies resulting from a conditional restart on page 622) can be used to resolve inconsistencies introduced while using one of the restart scenarios in this chapter. If you decide to use Unresolvable BSDS or log data set problem during restart on page 616, it is not necessary to use Resolving inconsistencies resulting from a conditional restart on page 622. Because of the severity of the situations described, the scenarios identify Operations management action, rather than Operator action. Operations management might not perform all the steps in the procedures, but they must be involved in making the decisions about the steps to be performed.
the messages and codes received. The explanation will identify the corrective action that can be taken to resolve the problem. In this case, it is not necessary to read the scenarios in this chapter.
v Restore the DB2 log and all data to a prior consistent point and start DB2. This procedure is described in Unresolvable BSDS or log data set problem during restart on page 616.
v Start DB2 without completing some database changes. Using a combination of DB2 services and your own knowledge, determine what work will be lost by truncating the log. The procedure for determining the page sets that contain incomplete changes is described in Restart by truncating the log on page 601.
In order to obtain a better idea of what the problem is, read one of the following sections, depending on when the failure occurred.
The portion of the log between log RBAs X and Y is inaccessible. For failures that occur during the log initialization phase, the following activities occur:
1. DB2 allocates and opens each active log data set that is not in a stopped state.
2. DB2 reads the log until the last log record is located.
3. During this process, a problem with the log is encountered, preventing DB2 from locating the end of the log. DB2 terminates and issues one of the abend reason codes listed in Table 102 on page 602.
During its operations, DB2 periodically records in the BSDS the RBA of the last log record written. This value is displayed in the print log map report as follows:
HIGHEST RBA WRITTEN: 00000742989E
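The report that contains this field is produced by the print log map utility (DSNJU004). A minimal job sketch to produce the report follows; the library and BSDS data set names are illustrative and must match your site's names:

//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*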
Because this field is updated frequently in the BSDS, the highest RBA written can be interpreted as an approximation of the end of the log. The field is updated in the BSDS when any one of a variety of internal events occurs. In the absence of these internal events, the field is updated each time a complete cycle of log buffers is written. A complete cycle of log buffers occurs when the number of log buffers written equals the value of the OUTPUT BUFFER field of installation panel DSNTIPL. The value in the BSDS is, therefore, relatively close to the end of the log. To find the actual end of the log at restart, DB2 reads the log forward sequentially, starting at the log RBA that approximates the end of the log and continuing until the actual end of the log is located. Because the end of the log is inaccessible in this case, some information has been lost. Units of recovery might have successfully committed or modified additional page sets past point X. Additional data might have been written, including those that are identified with writes pending in the accessible portion of the log.
New units of recovery might have been created, and these might have modified data. Because of the log error, DB2 cannot perceive these events. How to restart DB2 is described under Restart by truncating the log.
The portion of the log between log RBAs X and Y is inaccessible. For failures that occur during the current status rebuild phase, the following activities occur:
1. Log initialization completes successfully.
2. DB2 locates the last checkpoint. (The BSDS contains a record of its location on the log.)
3. DB2 reads the log, beginning at the checkpoint and continuing to the end of the log.
4. DB2 reconstructs the subsystem's state as it existed at the prior termination of DB2.
5. During this process, a problem with the log is encountered, preventing DB2 from reading all required log information. DB2 terminates with one of the abend reason codes listed in Table 102 on page 602.
Because the end of the log is inaccessible in this case, some information has been lost. Units of recovery might have successfully committed or modified additional page sets past point X. Additional data might have been written, including those that are identified with writes pending in the accessible portion of the log. New units of recovery might have been created, and these might have modified data. Because of the log error, DB2 cannot perceive these events. How to restart DB2 is described under Restart by truncating the log.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is illustrated in Figure 63 on page 600 and in Figure 64. The range of the log between RBAs X and Y is inaccessible to all DB2 processes. Use the abend reason code accompanying the X'04E' abend and the message on the title of the accompanying dump at the operator's console to find the name and location of a procedure in Table 102 on page 602. Use that procedure to find X and Y.
Table 102. Abend reason codes and messages

Abend reason code            Message    Procedure            General error description
00D10261 through 00D10268    DSNJ012I   RBA 1                Log record is logically damaged
00D10329                     DSNJ106I   RBA 2                I/O error occurred while log record was being read
00D1032A                     DSNJ113E   RBA 3 on page 603    Log RBA could not be found in BSDS
00D1032B                     DSNJ103I   RBA 4 on page 603    Allocation error occurred for an archive log data set
00D1032B                     DSNJ007I   RBA 5 on page 604    The operator canceled a request for archive mount
00D1032C                     DSNJ104I   RBA 4 on page 603    Open error occurred for an archive and active log data set
00E80084                     DSNJ103I   RBA 4 on page 603    Active log data set named in the BSDS could not be allocated during log initialization
Procedure RBA 1: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates a logical error in the log record at log RBA X'7429ABA'.
DSNJ012I ERROR D10265 READING RBA 000007429ABA IN DATA SET DSNCAT.LOGCOPY2.DS01 CONNECTION-ID=DSN, CORRELATION-ID=DSN
Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the log control interval definition (LCID). DB2 stores logical records in blocks of physical records to improve efficiency. When this type of an error on the log occurs during log initialization or current status rebuild, all log records within the physical log record are inaccessible. Therefore, the value of X is the log RBA that was reported in the message rounded down to a 4 KB boundary (X'7429000'). Continue with Step 2: Identify lost work and inconsistent data on page 604.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates an I/O error in the log at RBA X'7429ABA'.
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01, LOGRBA=000007429ABA,ERROR STATUS=0108320C
Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of an error on the log occurs during log initialization or current status rebuild, all log records within the physical log record and beyond it to the end of the log data set are inaccessible to the log initialization or current status rebuild phase of restart.
Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4 KB boundary (X'7429000'). Continue with Step 2: Identify lost work and inconsistent data on page 604.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of the inaccessible log record. This log RBA is not registered in the BSDS. For example, the following message indicates that the log RBA X'7429ABA' is not registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
The print log map utility can be used to list the contents of the BSDS. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of an error on the log occurs during log initialization or current status rebuild, all log records within the physical log record are inaccessible. Using the print log map output, locate the RBA closest to, but less than, X'7429ABA' for the value of X. If there is not an RBA that is less than X'7429ABA', a considerable amount of log information has been lost. If this is the case, continue with Failure resulting from total or excessive loss of log data on page 619. If X has a value, continue with Step 2: Identify lost work and inconsistent data on page 604.
Procedure RBA 4: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible, and the STATUS field identifies the code that is associated with the reason for the data set being inaccessible. For an explanation of the STATUS codes, see the explanation for the message in Part 2 of DB2 Messages.
DSNJ103I - csect-name LOG ALLOCATION ERROR DSNAME=DSNCAT.ARCHLOG1.A0000009,ERROR STATUS=04980004 SMS REASON CODE=00000000
To determine the value of X, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y.
Verify the accuracy of the information in the print log map utility output for the active log data set with the lowest RBA range. For this active log data set only, the information in the BSDS is potentially inaccurate for the following reasons:
v When an active log data set is full, archiving is started. DB2 then selects another active log data set, usually the data set with the lowest RBA. This selection is made so that units of recovery do not have to wait for the archive operation to complete before logging can continue. However, if a data set has not been archived, nothing beyond it has been archived, and the procedure is ended.
v When logging has begun on a reusable data set, DB2 updates the BSDS with the new log RBA range for the active log data set and marks it as Not Reusable.
The process of writing the new information to the BSDS can be delayed by other processing. It is therefore possible for a failure to occur between the time that logging to a new active log data set begins and the time that the BSDS is updated. In this case, the BSDS information is not correct. The log RBA that appears for the active log data set with the lowest RBA range in the print log map utility output is valid, provided that the data set is marked Not Reusable. If the data set is marked Reusable, it can be assumed for the purposes of this restart that the starting log RBA (X) for this data set is one greater than the highest log RBA listed in the BSDS for all other active log data sets.
Continue with Step 2: Identify lost work and inconsistent data.
Procedure RBA 5: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the value of X, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Continue with Step 2: Identify lost work and inconsistent data.
If the checkpoint is found, look at the date and time that it was created. If the checkpoint is several days old (and DB2 was operational during the interim), either follow the procedure under Failure resulting from total or excessive loss of log data on page 619 or the procedure under Unresolvable BSDS or log data set problem during restart on page 616. Otherwise, continue with the next step.
2. Determine what work is lost and what data is inconsistent. The portion of the log representing activity that occurred before the failure provides information about work that was in progress at that point. From this information, it might be possible to deduce the work that was in progress within the inaccessible portion of the log. If use of DB2 was limited at the time, or if DB2 was dedicated to a small number of activities (such as batch jobs performing database loads or image copies), it might be possible to accurately identify the page sets that were made inconsistent. To make the identification, extract a summary of the log activity up to the point of damage in the log by using the DSN1LOGP utility, described in Part 3 of DB2 Utility Guide and Reference. When you run the DSN1LOGP utility, specify as the RBASTART value the BEGIN CHECKPOINT RBA, determined in the previous step, of the last complete checkpoint prior to the point of failure. End the DSN1LOGP scan prior to the point of failure on the log (X - 1) by using the RBAEND specification. Specifying the last complete checkpoint is very important for ensuring that complete information is obtained from DSN1LOGP. Specify the SUMMARY(ONLY) option to produce a summary report. Figure 65 is an example of a DSN1LOGP job to obtain summary information for the checkpoint discussed previously.
//ONE      EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSABEND DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSSUMRY DD SYSOUT=A
//BSDS     DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSIN    DD *
 RBASTART (7425468)
 RBAEND (7428FFF)
 SUMMARY (ONLY)
/*
Figure 65. Sample JCL for obtaining DSN1LOGP summary output for restart
3. Analyze the DSN1LOGP utility output. The summary report that is placed in the SYSSUMRY file includes two sections of information: a summary of completed events (not shown here) and a restart summary shown in Figure 66 on page 606. A description of the sample output appears after the figure.
DSN1157I RESTART SUMMARY
DSN1153I DSN1LSIT CHECKPOINT STARTRBA=000007425468 ENDRBA=000007426C6C
         STARTLRSN=AA527AA809DF ENDLRSN=AA527AA829F4
         DATE=92.284 TIME=14:49:25
DSN1162I DSN1LPRT UR CONNID=BATCH CORRID=PROGRAM2 AUTHID=ADMF001 PLAN=TCEU02
         START DATE=92.284 TIME=11:12:01 DISP=INFLIGHT INFO=COMPLETE
         STARTRBA=0000063DA17B STARTLRSN=A974FAFF27FF NID=*
         LUWID=DB2NET.LUND0.A974FAFE6E77.0001 COORDINATOR=*
         PARTICIPANTS=*
         DATA MODIFIED:
         DATABASE=0101=STVDB02 PAGESET=0002=STVTS02
DSN1162I DSN1LPRT UR CONNID=BATCH CORRID=PROGRAM5 AUTHID=ADMF001 PLAN=TCEU02
         START DATE=92.284 TIME=11:21:02 DISP=INFLIGHT INFO=COMPLETE
         STARTRBA=000006A57C57 STARTLRSN=A974FAFF2801 NID=*
         LUWID=DB2NET.LUND0.A974FAFE6FFF.0003 COORDINATOR=*
         PARTICIPANTS=*
         DATA MODIFIED:
         DATABASE=0104=STVDB05 PAGESET=0002=STVTS05
DSN1162I DSN1LPRT UR CONNID=TEST0001 CORRID=CTHDCORID001 AUTHID=MULT002 PLAN=DONSQL1
         START DATE=92.278 TIME=06:49:33 DISP=INDOUBT INFO=PARTIAL
         STARTRBA=000005FBCC4F STARTLRSN=A974FBAF2302 NID=*
         LUWID=DB2NET.LUND0.B978FAFEFAB1.0000 COORDINATOR=*
         PARTICIPANTS=*
         NO DATA MODIFIED (BASED ON INCOMPLETE LOG INFORMATION)
DSN1162I UR CONNID=BATCH CORRID=PROGRAM2 AUTHID=ADMF001 PLAN=TCEU02
         START DATE=92.284 TIME=11:12:01 DISP=INFLIGHT INFO=COMPLETE
         START=0000063DA17B
DSN1160I DATABASE WRITES PENDING:
         DATABASE=0001=DSNDB01 PAGESET=004F=SYSUTIL START=000007425468
         DATABASE=0102 PAGESET=0015 START=000007425468
The following heading message is followed by messages that identify the units of recovery that have not yet completed and the page sets that they modified:
DSN1157I RESTART SUMMARY
Following the summary of outstanding units of recovery is a summary of page sets with database writes pending. In each case (units of recovery or databases with pending writes), the earliest required log record is identified by the START information. In this context, START information is the log RBA of the earliest log record required in order to complete outstanding writes for this page set. Those units of recovery with a START log RBA equal to, or prior to, the point Y cannot be completed at restart. All page sets modified by such units of recovery are inconsistent after completion of restart using this procedure. All page sets identified in message DSN1160I with a START log RBA value equal to, or prior to, the point Y have database changes that cannot be written to disk. As in the case previously described, all such page sets are inconsistent after completion of restart using this procedure. At this point, it is only necessary to identify the page sets in preparation for restart. After restart, the problems in the page sets that are inconsistent must be resolved. Because the end of the log is inaccessible, some information has been lost; therefore, the information is inaccurate. Some of the units of recovery that appear to be inflight might have successfully committed, or they could have modified additional page sets beyond point X.
Additional data could have been written, including those page sets that are identified as having writes pending in the accessible portion of the log. New units of recovery could have been created, and these might have modified data. DB2 cannot detect that these events occurred. From this and other information (such as system accounting information and console messages), it might be possible to determine what work was actually outstanding and which page sets will be inconsistent after starting DB2, because the record of each event contains the date and time to help you determine how recent the information is. In addition, the information is displayed in chronological sequence.
Recovering and backing out units of recovery with lost information can introduce more inconsistencies than the incomplete units of recovery themselves. When you start DB2 after you run DSNJU003 with the FORWARD=NO and BACKOUT=NO change log inventory options, DB2 performs the following actions (in Step 5: Start DB2 on page 608):
1. Discards from the checkpoint queue any entries with RBAs beyond the ENDRBA value in the CRCR (which is X'7429000' in the previous example).
2. Reconstructs the system status up to the point of log truncation.
3. Performs pending database writes that the truncated log specifies and that have not already been applied to the data. You can use the DSN1LOGP utility to identify these writes. No forward recovery processing occurs for units of work in a FORWARD=NO conditional restart.
All pending writes for in-commit and indoubt units of recovery are applied to the data. The processing for the different unit-of-work states that is described in Phase 3: Forward log recovery on page 462 does not occur.
4. Marks all units of recovery that have committed or are indoubt as complete on the log.
5. Leaves inflight and in-abort units of recovery incomplete. Inconsistent data is left in tables that are modified by inflight or indoubt URs. When you specify a BACKOUT=NO conditional restart, inflight and in-abort units of recovery are not backed out.
In a conditional restart that truncates the log, BACKOUT=NO minimizes DB2 errors for the following reasons:
v Inflight units of recovery might have been committed in the portion of the log that the conditional restart discarded. If these units of recovery are backed out (as Phase 4: Backward log recovery on page 463 describes), DB2 can back out database changes incompletely, which introduces additional errors.
v Data modified by in-abort units of recovery could have been modified again after the point of damage on the log. For in-abort units of recovery, DB2 could have written backout processing to disk after the point of log truncation. If these units of recovery are backed out (as Phase 4: Backward log recovery on page 463 describes), DB2 can introduce additional data inconsistencies by backing out units of recovery that are already partially or fully backed out.
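For reference, a conditional restart of the kind just described might be created with a change log inventory control statement along the following lines; the ENDRBA value shown is the X'7429000' truncation point from the previous example, and you must substitute the value for your own situation:

CRESTART CREATE,ENDRBA=7429000,FORWARD=NO,BACKOUT=NO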
v Restore the DB2 log and all data to a prior consistent point and start DB2. This procedure is described in Unresolvable BSDS or log data set problem during restart on page 616.
v Start DB2 without completing some database changes. The exact changes cannot be identified; all that can be determined is which page sets might have incomplete changes. The procedure for determining which page sets contain incomplete changes is described in Starting DB2 by limiting restart processing.
Continue reading this chapter to obtain a better idea of what the problem is.
Figure 67. Log error during the forward log recovery phase. The portion of the log between log RBA X and Y is inaccessible.
The portion of the log between log RBA X and Y is inaccessible. The log initialization and current status rebuild phases of restart completed successfully. Restart processing was reading the log in a forward direction, beginning at some point prior to X and continuing to the end of the log. Because of the inaccessibility of log data (between points X and Y), restart processing cannot guarantee the completion of any work that was outstanding at restart prior to point Y. For purposes of discussion, assume that the following work was outstanding at restart:
v The unit of recovery identified as URID1 was in-commit.
v The unit of recovery identified as URID2 was inflight.
v The unit of recovery identified as URID3 was in-commit.
v The unit of recovery identified as URID4 was inflight.
v Page set A had writes pending prior to the error on the log, continuing to the end of the log.
v Page set B had writes pending after the error on the log, continuing to the end of the log.
The earliest log record for each unit of recovery is identified on the log line in Figure 67. In order for DB2 to complete each unit of recovery, DB2 requires access to all log records from the beginning point for each unit of recovery to the end of the log. The error on the log prevents DB2 from guaranteeing the completion of any outstanding work that began prior to point Y on the log. Consequently, database changes made by URID1 and URID2 might not be fully committed or backed out. Writes pending for page set A (from points in the log prior to Y) will be lost.
out) and the page sets that these units of recovery changed. You must determine which page sets are involved because after this procedure is used, the page sets will contain inconsistencies that must be resolved. In addition, using this procedure results in the completion of all database writes that are pending. For a description of this process of writing database pages to disk, see Tuning database buffer pools on page 671.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is shown in Figure 67 on page 609. The range of the log between RBA X and RBA Y is inaccessible to all DB2 processes. Use the abend reason code accompanying the X'04E' abend, and the message on the title of the accompanying dump at the operator's console, to find the name and the location of a procedure in Table 103. Use that procedure to find X and Y.
Table 103. Abend reason codes and messages

Abend reason code            Message    Procedure            General error description
00D10261 through 00D10268    DSNJ012I   RBA 1                Log record is logically damaged
00D10329                     DSNJ106I   RBA 2 on page 611    I/O error occurred while log record was being read
00D1032A                     DSNJ113E   RBA 3 on page 611    Log RBA could not be found in BSDS
00D1032B                     DSNJ103I   RBA 4 on page 612    Allocation error occurred for an archive log data set
00D1032B                     DSNJ007I   RBA 5 on page 612    The operator canceled a request for archive mount
00D1032C                     DSNJ104I   RBA 4 on page 612    Open error occurred for an archive log data set
00E80084                     DSNJ103I   RBA 4 on page 612    Active log data set named in the BSDS could not be allocated during log initialization
Procedure RBA 1: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates a logical error in the log record at log RBA X'7429ABA':
DSNJ012I ERROR D10265 READING RBA 000007429ABA IN DATA SET DSNCAT.LOGCOPY2.DS01 CONNECTION-ID=DSN CORRELATION-ID=DSN
Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the log control interval definition (LCID). When this type of an error on the log occurs during forward log recovery, all log records within the physical log record, as described, are inaccessible. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4K boundary (that is, X'7429000').
For purposes of following the steps in this procedure, assume that the extent of damage is limited to the single physical log record. Therefore, calculate the value of Y as the log RBA that was reported in the message, rounded up to the end of the 4K boundary (that is, X'7429FFF'). Continue with Step 2: Identify incomplete units of recovery and inconsistent page sets on page 612.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates an I/O error in the log at RBA X'7429ABA':
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01, LOGRBA=000007429ABA, ERROR STATUS=0108320C
Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of an error on the log occurs during forward log recovery, all log records within the physical log record and beyond it to the end of the log data set are inaccessible to the forward recovery phase of restart. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4K boundary (that is, X'7429000'). To determine the value of Y, run the print log map utility to list the log inventory information. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Locate the data set name and its associated log RBA range. The RBA of the end of the range is the value Y. Continue with Step 2: Identify incomplete units of recovery and inconsistent page sets on page 612.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of the inaccessible log record. This log RBA is not registered in the BSDS. For example, the following message indicates that the log RBA X'7429ABA' is not registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
Use the print log map utility to list the contents of the BSDS. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Figure 156 on page 1121 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of error on the log occurs during forward log recovery, all log records within the physical log record are inaccessible. Using the print log map output, locate the RBA closest to, but less than, X'7429ABA'. This is the value of X. If an RBA less than X'7429ABA' cannot be found, the value of X is zero. Locate the RBA closest to, but greater than, X'7429ABA'. This is the value of Y. Continue with Step 2: Identify incomplete units of recovery and inconsistent page sets on page 612.
Procedure RBA 4: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The STATUS field identifies the code that is associated with the reason for the data set being inaccessible. For an explanation of the STATUS codes, see the explanation for the message in DB2 Messages.
DSNJ103I LOG ALLOCATION ERROR DSNAME=DSNCAT.ARCHLOG1.A0000009, ERROR STATUS=04980004 SMS REASON CODE=00000000
To determine the values of X and Y, run the print log map utility to list the log inventory information. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Continue with Step 2: Identify incomplete units of recovery and inconsistent page sets.
Procedure RBA 5: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the values of X and Y, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Continue with Step 2: Identify incomplete units of recovery and inconsistent page sets.
v The print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT RBA.
2. Run the DSN1LOGP utility to obtain a report of the outstanding work that is to be completed at the next restart of DB2. When you run the DSN1LOGP utility, specify the checkpoint RBA as the RBASTART value and specify the SUMMARY(ONLY) option. Include the last complete checkpoint in the DSN1LOGP scan in order to obtain complete information. Figure 65 on page 605 shows an example of the DSN1LOGP job submitted for the checkpoint that was reported in the DSNR003I message.
Analyze the output of the DSN1LOGP utility. The summary report that is placed in the SYSSUMRY file contains two sections of information. For an example of SUMMARY output, see Figure 66 on page 606; and for an example of the program that results in the output, see Figure 65 on page 605.
Step 3: Restrict restart processing to the part of the log after the damage
Use the change log inventory utility to create a conditional restart control record (CRCR) in the BSDS. Identify the accessible portion of the log beyond the damage by using the STARTRBA specification, which will be used at the next restart. Specify the value Y+1 (that is, if Y is X'7429FFF', specify STARTRBA=742A000). Restart will restrict its processing to the portion of the log beginning with the specified STARTRBA and continuing to the end of the log. A sample change log inventory utility control statement is:
CRESTART CREATE,STARTRBA=742A000
described in Bypassing backout before restarting. Continue reading this chapter to obtain a better idea of how to fix the problem.
Figure 68. Log error during the backward log recovery phase. The portion of the log between log RBA X and Y is inaccessible.
The portion of the log between log RBA X and Y is inaccessible. Restart was reading the log in a backward direction beginning at the end of the log and continuing backward to the point marked by Begin URID5 in order to back out the changes made by URID5, URID6, and URID7. You can assume that DB2 determined that these units of recovery were inflight or in-abort. The portion of the log from point Y to the end has been processed. However, the portion of the log from Begin URID5 to point Y has not been processed and cannot be processed by restart. Consequently, database changes made by URID5 and URID6 might not be fully backed out. All database changes made by URID7 have been fully backed out, but these database changes might not have been written to disk. A subsequent restart of DB2 causes these changes to be written to disk during forward recovery.
v Print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT RBA.
b. Execute the DSN1LOGP utility to obtain a report of the outstanding work that is to be completed at the next restart of DB2. When you run DSN1LOGP, specify the checkpoint RBA as the RBASTART and the SUMMARY(ONLY) option. Include the last complete checkpoint in the execution of DSN1LOGP in order to obtain complete information. Figure 65 on page 605 shows an example of the DSN1LOGP job submitted for the checkpoint that was reported in the DSNR003I message.
Analyze the output of the DSN1LOGP utility. The summary report that is placed in the SYSSUMRY file contains two sections of information. The sample report output shown in Figure 66 on page 606 resulted from the invocation shown in Figure 65 on page 605. The following description refers to that sample output. The first section is headed by the following message:
DSN1150I SUMMARY OF COMPLETED EVENTS
That message is followed by others that identify completed events, such as completed units of recovery. That section does not apply to this procedure. The second section is headed by this message:
DSN1157I RESTART SUMMARY
That message is followed by others that identify units of recovery that are not yet completed and the page sets that they modified. An example of the DSN1162I messages is shown in Figure 66 on page 606. After the summary of outstanding units of recovery is a summary of page sets with database writes pending. An example of the DSN1160I message is shown in Figure 66 on page 606. The restart processing that failed was able to complete all units of recovery processing within the accessible scope of the log after point Y. Database writes for these units of recovery are completed during the forward recovery phase of restart on the next restart. Therefore, do not bypass the forward recovery phase. All units of recovery that can be backed out have been backed out. All remaining units of recovery to be backed out (DISP=INFLIGHT or DISP=IN-ABORT) are bypassed on the next restart because their STARTRBA values are less than the RBA of point Y. Therefore, all page sets modified by those units of recovery are inconsistent after restart. This means that some changes to data might not be backed out. At this point, it is only necessary to identify the page sets in preparation for restart.
2. Direct restart to bypass backward recovery processing. Use the change log inventory utility to create a conditional restart control record (CRCR) in the BSDS. Direct restart to bypass backward recovery processing during the subsequent restart by using the BACKOUT specification. At restart, all units of recovery requiring backout are declared complete by DB2, and log records are generated to note the end of the unit of recovery. The change log inventory utility control statement is:
CRESTART CREATE,BACKOUT=NO
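This statement is input to the stand-alone change log inventory utility, DSNJU003, which you run while DB2 is stopped. A minimal sketch of the job follows; the load library and BSDS data set names are placeholders:

//* Sketch of a DSNJU003 (change log inventory) job that creates
//* the CRCR. SYSUT1 and SYSUT2 point to the two BSDS copies.
//CRCR     EXEC PGM=DSNJU003
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DSNCAT.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 CRESTART CREATE,BACKOUT=NO
/*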
3. Start DB2. Until the restart is complete, the CRCR is in effect. At the end of restart, the CRCR is marked DEACTIVATED to prevent its use on a subsequent restart. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
4. Resolve all inconsistent data problems. After the successful start of DB2, all data inconsistency problems must be resolved. Resolving inconsistencies resulting from a conditional restart on page 622 describes how to do this. At this time, all other data can be made available for use.
System programmer action:
1. Stop DB2 with the STOP DB2 command, if it has not already been stopped automatically as a result of the problem.
2. Check any other messages and reason codes displayed and correct the errors indicated. Locate the output from an old print log map run, and identify the data set that contains the missing RBA. If the data set has not been reused, run the change log inventory utility to add this data set back into the inventory of log data sets.
3. Increase the maximum number of archive log volumes that can be recorded in the BSDS. To do this, update the MAXARCH system parameter value as follows:
a. Start the installation CLIST.
b. On panel DSNTIPA1, select UPDATE mode.
c. On panel DSNTIPT, change any data set names that are not correct.
d. On panel DSNTIPB, select the ARCHIVE LOG DATA SET PARAMETERS option.
e. On panel DSNTIPA, increase the value of RECORDING MAX.
f. When the installation CLIST editing completes, rerun job DSNTIJUZ to recompile the system parameters.
4. Start DB2 with the START DB2 command.
For more information about updating DB2 system parameters, see Part 2 of DB2 Installation Guide. For instructions about adding an old archive data set, refer to Changing the BSDS log inventory on page 442. Also, see Part 3 of DB2 Utility Guide and Reference for additional information about the change log inventory utility.
System action: DB2 cannot be restarted unless the following procedure is used.
Operations management action: In serious cases such as this, it can be necessary to fall back to a prior shutdown level. If this procedure is used, all database changes between the shutdown point and the present will be lost, but all the data retained will be consistent within DB2. If it is necessary to fall back, read Preparing to recover to a prior point of consistency on page 485. If too much log information has been lost, use the alternative approach described in Failure resulting from total or excessive loss of log data on page 619.
v For table spaces and indexes that might have been changed after the shutdown point, use the DB2 RECOVER utility to recover these table spaces and indexes. They must be recovered in the order indicated in Part 2 of DB2 Utility Guide and Reference.
v For data that has not been changed after the shutdown point (data used with RO access), it is not necessary to use RECOVER or DROP.
v For table spaces that were deleted after the shutdown point, issue the DROP statement. These table spaces will not be recovered.
v Any objects created after the shutdown point should be re-created.
You must recover all data that has potentially been modified after the shutdown point. If the RECOVER utility is not used to recover modified data, serious problems can occur because of data inconsistency. If an attempt is made to access data that is inconsistent, any of the following events can occur (and the list is not comprehensive):
v It is possible to successfully access the correct data.
v Data can be accessed without DB2 recognizing any problem, but it might not be the data you want (the index might be pointing to the wrong data).
v DB2 might recognize that a page is logically incorrect and, as a result, abend the subsystem with an X'04E' abend completion code and an abend reason code of X'00C90102'.
v DB2 might notice that a page was updated after the shutdown point and, as a result, abend the requester with an X'04E' abend completion code and an abend reason code of X'00C200C1'.
7. Analyze the CICS log and the IMS log to determine the work that must be redone (work that was lost because of shutdown at the previous point). Inform all TSO users, QMF users, and batch users for which no transaction log tracking has been performed, about the decision to fall back to a previous point.
8. When DB2 is started after being shut down, indoubt units of recovery can exist. This occurs if transactions are indoubt when the command STOP DB2 MODE (QUIESCE) is given. When DB2 is started again, these transactions will still be indoubt to DB2. IMS and CICS cannot know the disposition of these units of recovery. To resolve these indoubt units of recovery, use the command RECOVER INDOUBT.
9. If a table space was dropped and re-created after the shutdown point, it should be dropped and re-created again after DB2 is restarted. To do this, use SQL DROP and SQL CREATE statements. Do not use the RECOVER utility to accomplish this, because it will result in the old version (which can contain inconsistent data) being recovered.
10. If any table spaces and indexes were created after the shutdown point, these must be re-created after DB2 is restarted. There are two ways to accomplish this:
v For data sets defined in DB2 storage groups, use the CREATE TABLESPACE statement and specify the appropriate storage group names. DB2 automatically deletes the old data set and redefines a new one.
v For user-defined data sets, use access method services DELETE to delete the old data sets. After these data sets have been deleted, use access method services DEFINE to redefine them; then use the CREATE TABLESPACE statement (see the sketch after this list).
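As an illustration of the user-defined data set case in step 10, the access method services sequence might be sketched as follows; every name here is hypothetical, and the cluster name must follow the DB2 VSAM naming convention (vcatname.DSNDBC.dbname.tsname.I0001.A001) for your subsystem:

//* Sketch: delete the old user-defined data set, then redefine it
//* as a VSAM linear data set before issuing CREATE TABLESPACE.
//DELDEF   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE ('DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001') CLUSTER
  DEFINE CLUSTER -
    (NAME('DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001') -
    LINEAR -
    CYLINDERS(10 10) -
    VOLUMES(VOL001) -
    SHAREOPTIONS(3 3))
/*

After the DEFINE completes, issue the CREATE TABLESPACE statement with the USING VCAT clause, naming the same integrated catalog facility catalog.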
Continue with step 4.
b. Determine the highest possible log RBA of the prior log. From previous console logs written when DB2 was operational, locate the last DSNJ001I message. When DB2 switches to a new active log data set, this message is written to the console, identifying the data set name and the highest potential log RBA that can be written for that data set. Assume that this is the value X'8BFFF'. Add one to this value (X'8C000'), and create a conditional restart control record specifying the following change log inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
When DB2 starts, all phases of restart are bypassed, and logging begins at log RBA X'8C000'. If you choose this method, you do not need to use the DSN1COPY RESET option, which saves considerable time.
4. Start DB2. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
5. After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from a conditional restart on page 622.
log data from the active log data set to the archive log data set, and then pads the archive log data set with binary zeroes to fill a block. In order for the access method services REPRO command to be able to copy all of the data from the archive log data set to a newly defined active log data set, the new active log data set might need to be bigger than the original one.
For example, if the block size of the archive log data set is 28 KB, and the active log data set contains 80 KB of data, DB2 copies the 80 KB and pads the archive log data set with 4 KB of nulls to fill the last block. Thus, the archive log data set now contains 84 KB of data instead of 80 KB. In order for the access method services REPRO command to complete successfully, the active log data set must be able to hold 84 KB, rather than just 80 KB of data.
v If you are not concerned about read operations against the archive log data sets, complete the two steps that appear in the preceding list (as though the archive data sets did not exist).
6. Choose the appropriate point for DB2 to start logging (X'8C000') as described in Total loss of the log on page 619.
7. To restart DB2 without using any log data, create a CRCR, as described for the change log inventory utility (DSNJU003) in Part 3 of DB2 Utility Guide and Reference.
8. Start DB2. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
9. After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from a conditional restart on page 622.
This procedure causes all phases of restart to be bypassed and logging to begin at log RBA X'8C000'. It creates a gap in the log between the highest RBA kept in the BSDS and X'8C000', and that portion of the log is inaccessible. No DB2 process can tolerate a gap, including RECOVER. Therefore, all data must be image copied after a cold start. Even data that is known to be consistent must be image copied again when a gap is created in the log.
There is another approach to doing a cold start that does not create a gap in the log. This is only a method for eliminating the gap in the physical record; it does not mean that you can use a cold start to resolve the logical inconsistencies. The procedure is as follows:
1. Locate the last valid log record by using DSN1LOGP to scan the log. (Message DSN1213I identifies the last valid log RBA.)
2. Begin at an RBA that is known to be valid. If message DSN1213I indicated that the last valid log RBA is at X'89158', round this value up to the next 4-KB boundary (X'8A000').
3. Create a CRCR similar to the CRCR that the following control statement specifies:
CRESTART CREATE,STARTRBA=8A000,ENDRBA=8A000
4. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
5. Now, take image copies of all data for which data modifications were recorded beyond log RBA X'8A000', as in the sketch that follows. If you do not know what data was modified, take image copies of all data. If image copies are not taken of data that has been modified beyond the log RBA used in the CRESTART statement, future RECOVER operations can fail or result in inconsistent data.
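For step 5, a full image copy of one table space might be taken with a job sketched as follows; the utility procedure invocation, table space name, and output data set name are all placeholders:

//* Sketch: full image copy with SHRLEVEL REFERENCE using the
//* DSNUPROC utility procedure; SYSCOPY receives the copy.
//IC       EXEC DSNUPROC,SYSTEM=DSN,UID='IMGCOPY',UTPROC=''
//SYSCOPY  DD DSN=IMAGCOPY.MYDB.MYTS.D1,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN    DD *
  COPY TABLESPACE MYDB.MYTS FULL YES SHRLEVEL REFERENCE
/*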
After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from a conditional restart.
v You did a conditional restart that altered or truncated the log.
v The log is damaged.
v Part of the log is no longer accessible.
The first thing to do after a conditional restart is to take image copies of all DB2 table spaces, except those that are inconsistent. For those table spaces suspected of being inconsistent, resolve the inconsistencies and then obtain image copies of them.
A cold start might cause down-level page set errors. Some of these errors cause message DSNB232I to be displayed during DB2 restart. After you restart DB2, check the console log for down-level page set messages. If any of those messages exist, correct the errors before you take image copies of the affected data sets. Other down-level page set errors are not detected by DB2 during restart. If you use the COPY utility with the SHRLEVEL REFERENCE option to make image copies, the COPY utility will issue message DSNB232I when it encounters down-level page sets. Correct those errors and continue making image copies. If you use some other method to make image copies, you will find out about down-level errors during normal DB2 operation. Recovery from down-level page sets on page 548 describes methods for correcting down-level page set errors.
Pay particular attention to DB2 subsystem table spaces. If any are inconsistent, recover all of them in the order shown in the discussion on recovering catalog and directory objects in Part 2 of DB2 Utility Guide and Reference.
When a portion of the DB2 recovery log becomes inaccessible, all DB2 recovery processes have difficulty operating successfully, including restart, RECOVER, and deferred restart processing. Conditional restart allowed circumvention of the problem during the restart process. To ensure that RECOVER does not attempt to access the inaccessible portions of the log, secure a copy (either full or incremental) that does not require such access. A failure occurs any time a DB2 process (such as the RECOVER utility) attempts to access an inaccessible portion of the log. You cannot be sure which DB2 processes must use that portion of the recovery log, and, therefore, you must assume that all data recovery requires that portion of the log.
2. Resolve database inconsistencies. If you determine that the existing inconsistencies involve indexes only (not data), use the utility RECOVER INDEX. If the inconsistencies involve data (either user data or DB2 subsystem data), continue reading this section.
Inconsistencies in DB2 subsystem databases DSNDB01 and DSNDB06 must be resolved before inconsistencies in other databases can be resolved. This is necessary because the subsystem databases describe all other databases, and access to other databases requires information from DSNDB01 and DSNDB06.
If the table space that cannot be recovered (and is thus inconsistent) is being dropped, either all rows are being deleted or the table is not necessary. In either case, drop the table when DB2 is restarted, and do not bother to resolve the inconsistencies before restarting DB2.
Any one of the following three procedures can be used to resolve data inconsistencies. However, using one of the first two procedures is advisable because of the complexity of the third procedure.
v For a description of stored data and index formats, refer to Part 6 of DB2 Diagnosis Guide and Reference.
v DB2 stores data in data pages. The structure of data in a data page must conform to a set of rules for DB2 to be able to process the data accurately. Using a conditional restart process does not cause violations of this set of rules; but, if violations existed prior to conditional restart, they will continue to exist after conditional restart. Therefore, use DSN1COPY with the CHECK option (see the sketch after this list).
v DB2 uses several types of pointers in accessing data. Each type (indexes, hashes, and links) is described in Part 6 of DB2 Diagnosis Guide and Reference. Look for these pointers and manually ensure their consistency. Hash and link pointers exist in the DB2 directory database; link pointers also exist in the catalog database. DB2 uses these pointers to access data. During a conditional restart, it is possible for data pages to be modified without update of the corresponding pointers. When this occurs, one of the following actions can occur:
– If a pointer addresses data that is nonexistent or incorrect, DB2 abends the request. If SQL is used to access the data, a message identifying the condition and the page in question is issued.
– If data exists but no pointer addresses it, that data is virtually invisible to all functions that attempt to access it via the damaged hash or link pointer. The data might, however, be visible and accessible by some functions, such as SQL functions that use some other pointer that was not damaged. As might be expected, this situation can result in inconsistencies.
v If a row containing a varying-length field is updated, it can increase in size. If the page in which the row is stored does not contain enough available space to store the additional data, the row is placed in another data page, and a pointer to the new data page is stored in the original data page. After a conditional restart, one of the following conditions can exist:
– The row of data exists, but the pointer to that row does not exist. In this case, the row is invisible and the data cannot be accessed.
– The pointer to the row exists, but the row itself no longer exists. DB2 abends the requester when any operation (for instance, a SELECT) attempts to access the data. If termination occurs, one or more messages will be received that identify the condition and the page containing the pointer.
When these inconsistencies are encountered, use the REPAIR utility to resolve them, as described in Part 2 of DB2 Utility Guide and Reference.
v If the log has been truncated, there can be problems changing data via the REPAIR utility. Each data and index page contains the log RBA of the last recovery log record that was applied against the page. DB2 does not allow modification of a page containing a log RBA that is higher than the current end of the log. If the log has been truncated and you choose to use the REPAIR utility rather than recovering to a prior point of consistency, the DSN1COPY RESET option must be used to reset the log RBA in every data and index page set to be corrected with this procedure.
v This last step is imperative: when all known inconsistencies have been resolved, full image copies of all modified table spaces must be taken, in order to use the RECOVER utility to recover from any future problems.
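Both DSN1COPY options that are mentioned in this list are requested through the PARM field. The following sketch shows a validation-only run with CHECK against a hypothetical table space data set; a RESET run instead copies SYSUT1 to SYSUT2 while resetting the log RBA in each page, and the corrected output then replaces the original data set. Stop DB2, or stop the object, before running DSN1COPY against it:

//* Sketch: DSN1COPY page-structure check. With PARM='RESET',
//* SYSUT2 must be a real output data set, not DUMMY.
//CHECK    EXEC PGM=DSN1COPY,PARM='CHECK'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001,DISP=OLD
//SYSUT2   DD DUMMY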
Part 5. Performance monitoring and tuning (chapter-level contents)
Chapter 25. Analyzing performance data . . . . . . . . . . . . 645
Chapter 26. Improving response time and throughput
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools . . . . 671
Chapter 28. Improving resource utilization
Chapter 29. Managing DB2 threads . . . . . . . . . . . . . . 735
Chapter 30. Tuning your queries . . . . . . . . . . . . . . . 751
Chapter 31. Improving concurrency . . . . . . . . . . . . . . 813
Chapter 32. Using materialized query tables . . . . . . . . . . 883
Chapter 33. Maintaining statistics in the catalog
Chapter 34. Using EXPLAIN to improve SQL performance . . . . 931
Chapter 35. Parallel operations and query performance . . . . . 991
Chapter 36. Tuning and monitoring in a distributed environment . 1007
Chapter 37. Monitoring and tuning stored procedures and user-defined functions . . . . . . . . . . . . . . . . . . . 1023
Periodically, or after significant changes to your system or workload, return to step 1, reexamine your objectives, and refine your monitoring and tuning strategy accordingly.
queries, transactions, utilities, and batch jobs. For the volume of a workload that is already processed by DB2, use the summary of its volumes from the DB2 statistics trace.
v The relative priority of the type, including periods during which the priorities change.
v The resources that are required to do the work, including physical resources that are managed by the operating system (such as real storage, disk I/O, and terminal I/O) and logical resources managed by the subsystem (such as control blocks and buffers).
Before installing DB2, gather design data during the phases of initial planning, external design, internal design, and coding and testing. Keep reevaluating your performance objectives with that information.
External design
During the external design phase, you must:
1. Estimate the network, Web server, application server, processor, and disk subsystem workload.
2. Refine your estimates of logical disk accesses. Ignore physical accesses at this stage; one of the major difficulties will be determining the number of I/Os per statement.
Internal design
During the internal design phase, you must:
1. Refine your estimated workload against the actual workload.
2. Refine disk access estimates against database design. After internal design, you can define physical data accesses for application-oriented processes and estimate buffer hit ratios.
3. Add the accesses for the DB2 work file database, DB2 log, program library, and DB2 sorts.
4. Consider whether additional processor loads will cause a significant constraint.
5. Refine estimates of processor usage.
6. Estimate the internal response time as the larger of these two values: the sum of processor time and synchronous I/O time, or the asynchronous I/O time.
7. Prototype your DB2 system. Before committing resources to writing code, you can create a small database, update the statistics stored in the DB2 catalog tables, run SQL statements, and examine the results.
8. Use DB2 estimation formulas to develop estimates for processor resource consumption and I/O costs for application processes that are high volume or complex.
Post-development review
When you are ready to test the complete system, review its performance in detail. Take the following steps to complete your performance review:
1. Validate system performance and response times against the objectives.
2. Identify resources whose usage requires regular monitoring.
3. Incorporate the observed figures into future estimates. This step requires:
a. Identifying discrepancies from the estimated resource usage
b. Identifying the cause of the discrepancies
c. Assigning priorities to remedial actions
d. Identifying resources that are consistently heavily used
e. Setting up utilities to provide graphic representation of those resources
f. Projecting the processor usage against the planned future system growth to ensure that adequate capacity will be available
g. Updating the design document with the observed performance figures
h. Modifying the estimation procedures to reflect what you have learned
You need feedback from users and might have to solicit it. Establish reporting procedures and teach your users how to use them. Consider logging incidents such as these:
v System, line, and transaction or query failures
v System unavailable time
v Response times that are outside the specified limits
v Incidents that imply performance constraints, such as deadlocks, deadlock abends, and insufficient storage
v Situations, such as recoveries, that use additional system resources
The data logged should include the time, date, location, duration, cause (if it can be determined), and the action taken to resolve the problem.
v A master schedule of monitoring. Large batch jobs or utility runs can cause activity peaks. Coordinate monitoring with other operations so that it need not conflict with unusual peaks, unless that is what you want to monitor.
v The kinds of analysis to be performed and the tools to be used. Document the data that is extracted from the monitoring output. Some of the performance reports are derived from the products that are described in Appendix F, Using tools to monitor performance, on page 1191. These reports can be produced using Tivoli Decision Support for OS/390, OMEGAMON XE for DB2 Performance Monitor (DB2 PM), OMEGAMON (which includes the function of DB2 PM), other reporting tools, manual reduction, or a program of your own that extracts information from standard reports.
v A list of people who should review the results. The results of monitoring and the conclusions based on them should be available to the user support group and to system performance specialists.
v A strategy for tuning DB2. Describe how often changes are permitted and standards for testing their effects. Include the tuning strategy in regular system management procedures. Tuning recommendations could include generic database and application design changes. You should update development standards and guidelines to reflect your experience and to avoid repeating mistakes.
Typically, your plan provides for four levels of monitoring:
v Continuous performance monitoring
v Periodic performance monitoring on page 639
v Detailed performance monitoring on page 639
v Exception performance monitoring on page 640
A performance monitoring strategy on page 640 describes a plan that includes all of these levels.
need the information. OMEGAMON includes a performance warehouse that allows you to define, schedule, and run processes that:
v Automate the creation of reports
v Automate the conversion and loading of these reports and other data into a performance warehouse
v Analyze the data that is loaded into the performance warehouse using suggested rules or rules that you define yourself
The data in the performance warehouse can be accessed by any member of the DB2 Universal Database family or by any product that supports Distributed Relational Database Architecture (DRDA).
If you believe that the problem is caused by the choice of system parameters, I/O device assignments, or other factors, begin monitoring DB2 to collect data about its internal activity. Appendix F, Using tools to monitor performance, on page 1191 suggests various techniques and methods. If you have access path problems, refer to Chapter 34, Using EXPLAIN to improve SQL performance, on page 931 for information.
For example, save the report from the last week of a month for three months; at the end of the year, discard all but the last week of each quarter. Similarly, keep a representative selection of daily and monthly figures. Because of the potential volume of data, consider using Tivoli Decision Support for OS/390, Application Monitor for z/OS and OS/390, or a similar tool to track historical data in a manageable form.
To what degree was disk used? Is the number of I/O requests increasing? DB2 records both physical and logical requests. The number of physical I/Os depends on the configuration of indexes, the data records per control interval, and the buffer allocations. See Monitoring system resources on page 1194 and Statistics trace on page 1196.
To what extent were DB2 log resources used?
1. Is the log subject to undue contention from other data sets? In particular, is the log on the same drive as any resource whose updates are logged? Recommendation: Do not put a recoverable (updated) resource and a log on the same drive. If that drive fails, you lose both the resource and the log, and you are unable to carry out forward recovery.
2. What is the I/O rate for requests and physical blocks on the log? What is the logging rate for one log in MB per second? See Statistics trace on page 1196.
Do any figures indicate design, coding, or operational errors?
1. Are disk, I/O, log, or processor resources heavily used? If so, was that heavy use expected at design time? If not, can the heavy use be explained in terms of heavier use of workloads?
2. Is the heavy usage associated with a particular application? If so, is there evidence of planned growth or peak periods?
3. What are your needs for concurrent read/write and query activity?
4. How often do locking contentions occur?
5. Are there any disk, channel, or path problems?
6. Are there any abends or dumps?
See Monitoring system resources on page 1194, Statistics trace on page 1196, and Accounting trace on page 1196.
What are the effects of DB2 locks?
1. What are the incidents of deadlocks and timeouts?
2. What percentage of elapsed time is due to lock suspensions? How much lock or latch contention was encountered? Check the contention rate per second by class.
3. How effective is lock avoidance? See Monitoring of DB2 locking on page 873.
Were there any bottlenecks?
1. Were any critical thresholds reached?
2. Are any resources approaching high utilization?
See Monitoring system resources on page 1194 and Accounting trace on page 1196.
the application, upgrade your system, or plan a reduced application workload. If the difference is not too large, however, you can begin tuning the entire system.
v Real storage constraints. Applications progress more slowly than expected because of paging interrupts. The constraints show as delays between successive requests recorded in the DB2 trace.
v Contention for a particular function. For example, there might be a wait on a particular data set, or a certain application might cause many application processes to put the same item in their queues.
Use the DB2 performance trace to distinguish most of these cases.
For information about packages or DBRMs, run accounting trace classes 7 and 8. To determine which packages are consuming excessive resources, compare accounting trace classes 7 and 8 to the elapsed time for the whole plan on accounting classes 1 and 2. A number greater than 1 in the QXMAXDEG field of the accounting trace indicates that parallelism was used. There are special considerations for interpreting such records, as described in Monitoring parallel operations on page 1001.
The easiest way to read and interpret the trace data is through the reports produced by OMEGAMON. If you do not have OMEGAMON or an equivalent program, refer to Appendix D, Interpreting DB2 trace output, on page 1139 for information about the format of data from DB2 traces. You can also use the tools for performance measurement described in Appendix F, Using tools to monitor performance, on page 1191 to diagnose system problems and to analyze the DB2 catalog and directory.
To get an overall picture of the system work load, you can use the OMEGAMON GROUP command to group several DB2 plans together. You can use the accounting report, short format, to:
v Monitor the effect of each application or group on the total work load
v Monitor, in each application or group:
  - DB2 response time (elapsed time)
  - Resources used (processor, I/Os)
  - Lock suspensions
  - Application changes (SQL used)
  - Usage of packages and DBRMs
  - Processor, I/O wait, and lock wait time for each package

An accounting report in the short format can list results in order by package. Thus you can summarize package or DBRM activity independently of the plan under which the package or DBRM executed.

Only class 1 of the accounting trace is needed for a report of information by plan. Classes 2 and 3 are recommended for additional information. Classes 7 and 8 are needed to give information by package or DBRM.
CLASS 3 SUSPENSIONS      AVERAGE TIME  AV.EVENT
--------------------     ------------  --------
LOCK/LATCH(DB2+IRLM) A       0.000011      0.04
SYNCHRON. I/O        B       0.010170      9.16
  DATABASE I/O               0.006325      8.16
  LOG WRITE I/O              0.003845      1.00
OTHER READ I/O       D       0.000000      0.00
OTHER WRTE I/O       E       0.000148      0.04
SER.TASK SWTCH       F       0.000115      0.04
  UPDATE COMMIT              0.000000      0.00
  OPEN/CLOSE                 0.000000      0.00
  SYSLGRNG REC               0.000115      0.04
  EXT/DEL/DEF                0.000000      0.00
  OTHER SERVICE              0.000000      0.00
ARC.LOG(QUIES)       G       0.000000      0.00
ARC.LOG READ         H       0.000000      0.00
DRAIN LOCK           I       0.000000      0.00
CLAIM RELEASE        J       0.000000      0.00
PAGE LATCH           K       0.000000      0.00
NOTIFY MSGS                  0.000000      0.00
GLOBAL CONTENTION            0.000000      0.00
COMMIT PH1 WRITE I/O         0.000000      0.00
ASYNCH CF REQUESTS           0.000000      0.00
TOTAL CLASS 3                0.010444      9.28

Figure 69. Portion of a sample OMEGAMON accounting report, long format, showing the class 3 suspensions section. The full report also includes average elapsed and CPU times (class 2 CPU TIME is labeled C, and NOT ACCOUNT is labeled L), a highlights section, SQL DML, DCL, and DDL counts, and locking activity.
In analyzing a detailed accounting report, consider the following components of response time. (Fields of the report that are referred to are labeled in Figure 69.)

Class 1 elapsed time: Compare this with the CICS, IMS, WebSphere, or distributed application transit times:
v In CICS, you can use CICS Performance Analyzer to find the attach and detach times; use this time as the transit time.
v In IMS, use the PROGRAM EXECUTION time reported in IMS Performance Analyzer.

Differences between the CICS, IMS, WebSphere, or distributed application times and the DB2 accounting times arise mainly because the DB2 times do not include:
v Time before the first SQL statement, which for a distributed application includes the inbound network delivery time
v DB2 create thread
v DB2 terminate thread
v For a distributed application, the time to deliver the response to a commit if database access thread pooling is used

Differences can also arise from thread reuse in CICS, IMS, WebSphere, or distributed application processing, or through multiple commits in CICS. If the class 1 elapsed time is significantly less than the CICS or IMS time, check the report from Tivoli Decision Support for OS/390, IMS Performance Analyzer, or an equivalent reporting tool to find out why. Elapsed time can occur:
v In DB2, during sign-on, create, or terminate thread
v Outside DB2, during CICS or IMS processing

Not-in-DB2 time: This is calculated as the difference between the class 1 and the class 2 elapsed time. It is time spent outside of DB2, but within the DB2 accounting interval. A lengthy time can be caused by thread reuse, which can increase class 1 elapsed time, or by a problem in the application program, CICS, IMS, or the overall system. For a distributed application, not-in-DB2 time is calculated with this formula:
Not_in_DB2 time = A - (B + C + (D - E))
Where the variables have the following values:
A   Class 1 elapsed time
B   Class 2 nonnested elapsed time
C   Class 1 nonnested elapsed time of any stored procedures, user-defined functions, or triggers
D   Class 1 nonnested CPU time
E   Class 2 nonnested CPU time
(Nonnested times exclude time spent in stored procedures, user-defined functions, and triggers.)
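For example, with illustrative values (not taken from any report in this chapter) of A = 2.00 seconds, B = 1.20, C = 0.30, D = 0.50, and E = 0.40, the formula gives:

Not_in_DB2 time = 2.00 - (1.20 + 0.30 + (0.50 - 0.40)) = 0.40 seconds

That is, an estimated 0.40 seconds of the class 1 elapsed time was spent outside DB2.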
The calculated not-in-DB2 time might be zero. Furthermore, this time calculation is only an estimate. A primary factor that is not included in the equation is the amount of time that requests wait for CPU resources while executing within the DDF address space. To determine how long requests wait for CPU resources, look at the NOT ACCOUNT field. The NOT ACCOUNT field shows the time that requests wait for CPU resources while a distributed task is inside DB2.

Lock/latch suspension time: This shows contention for DB2 and IRLM resources. If contention is high, check the locking summary section of the report, and then proceed with the locking reports. For more information, see Scenario for analyzing concurrency on page 876. In the OMEGAMON accounting report, see the field LOCK/LATCH(DB2+IRLM) ( A ).

Synchronous I/O suspension time: This is the total application wait time for synchronous I/Os. It is the total of database I/O and log write I/O. In the OMEGAMON accounting report, check the number reported for SYNCHRON. I/O ( B ). If the number of synchronous read or write I/Os is higher than expected, check for:
v A change in the access path to data. If you have data from accounting trace class 8, the number of synchronous and asynchronous read I/Os is available for individual packages. Determine which package or packages have unacceptable counts for synchronous and asynchronous read I/Os. Activate the necessary performance trace classes for the OMEGAMON SQL activity reports to identify the SQL statement or cursor that is causing the problem. If you suspect that your application has an access path problem, see Chapter 34, Using EXPLAIN to improve SQL performance, on page 931.
v A lower than expected buffer pool hit ratio. You can improve the ratio by tuning the buffer pool. Look at the number of synchronous reads in the buffer pool that are associated with the plan, and look at the related buffer pool hit ratio. If the buffer pool size and the buffer pool hit ratio for random reads are small, consider increasing the buffer pool size. By increasing the buffer pool size, you might reduce the amount of synchronous database I/O and reduce the synchronous I/O suspension time.
v A change to the catalog statistics, or statistics that no longer match the data.
v Changes in the application. Check the SQL ACTIVITY section and compare with previous data. There might have been some inserts that changed the amount of data. Also, check the names of the packages or DBRMs being executed to determine if the pattern of programs being executed has changed.
v Pages that are out of order, so that sequential detection is not used, or data that has been moved to other pages. Run the REORG utility in these situations.
v A system-wide problem in the database buffer pool. Refer to Using OMEGAMON to monitor buffer pool statistics on page 683. You can also use OMEGAMON Buffer Pool Analyzer User's Guide or OMEGAMON, which contains the function of DB2 BPA, to manage and optimize buffer pool activity.
v A RID pool failure. Refer to Increasing RID pool size on page 689.
v A system-wide problem in the EDM pool. Refer to Tuning EDM storage on page 685.

If I/O time is greater than expected, and not caused by more read I/Os, check for:
v Synchronous write I/Os. See Using OMEGAMON to monitor buffer pool statistics on page 683. (You can also use OMEGAMON, which contains the function of DB2 BPA, to manage and optimize buffer pool activity.)
v I/O contention. In general, each synchronous read I/O takes from 5 to 20 milliseconds, depending on the disk device. This estimate assumes that there are no prefetch or deferred write I/Os on the same device as the synchronous I/Os. Refer to Monitoring I/O activity of data sets on page 699.
v An increase in the number of users, or an increase in the amount of data. Drastic changes to the distribution of the data can also cause the problem.

Processor resource consumption: The problem might be caused by DB2 or IRLM traces, a change in access paths, or an increase in the number of users. In the OMEGAMON accounting report, DB2 processor resource consumption is indicated in the field for class 2 CPU TIME ( C ).

Other read suspensions: The accumulated wait time for read I/O done under a thread other than this one. It includes time for:
v Sequential prefetch
v List prefetch
v Sequential detection
v Synchronous read I/O performed by a thread other than the one being reported
Generally, an asynchronous read I/O for sequential prefetch or sequential detection takes about 0.1 to 1 millisecond per page. List prefetch takes about 0.2 to 2 milliseconds per page. In the OMEGAMON accounting report, other read suspensions are reported in the field OTHER READ I/O ( D ).

Other write suspensions: The accumulated wait time for write I/O done under a thread other than this one. It includes time for:
v Asynchronous write I/O
v Synchronous write I/O performed by a thread other than the one being reported

As a guideline, an asynchronous write I/O takes 0.1 to 2 milliseconds per page. In the OMEGAMON accounting report, other write suspensions are reported in the field OTHER WRTE I/O ( E ).

Service task suspensions: The accumulated wait time from switching synchronous execution units, by which DB2 switches from one execution unit to another. The most common contributors to service task suspensions are:
v Wait for phase 2 commit processing for updates, inserts, and deletes (UPDATE COMMIT). You can reduce this wait time by allocating the DB2 primary log on a faster disk. You can also help to reduce the wait time by reducing the number of commits per unit of work.
v Wait for OPEN/CLOSE service task (including HSM recall). You can minimize this wait time by using two strategies. If DSMAX is frequently reached, increase DSMAX. If DSMAX is not frequently reached, change CLOSE YES to CLOSE NO on data sets that are used by critical applications.
v Wait for SYSLGRNG recording service task.
v Wait for data set extend/delete/define service task (EXT/DEL/DEF). You can minimize this wait time by defining larger primary and secondary disk space allocations for the table space.
v Wait for other service tasks (OTHER SERVICE).

In the OMEGAMON accounting report, the total of this information is reported in the field SER.TASK SWTCH ( F ). The field is the total of the five fields that follow it. If several types of suspensions overlap, the sum of their wait times can exceed the total clock time that DB2 spends waiting. Therefore, when service task suspensions overlap other types, the wait time for the other types of suspensions is not counted.

Archive log mode (QUIESCE): The accumulated time the thread was suspended while processing ARCHIVE LOG MODE(QUIESCE). In the OMEGAMON accounting report, this information is reported in the field ARC.LOG(QUIES) ( G ).

Archive log read suspension: The accumulated wait time the thread was suspended while waiting for a read from an archive log on tape. In the OMEGAMON accounting report, this information is reported in the field ARC.LOG READ ( H ).

Drain lock suspension: The accumulated wait time the thread was suspended while waiting for a drain lock. If this value is high, see Installation options for wait times on page 837, and consider running the OMEGAMON locking reports for additional detail. In the OMEGAMON accounting report, this information is reported in the field DRAIN LOCK ( I ).

Claim release suspension: The accumulated wait time the drainer was suspended while waiting for all claim holders to release the object. If this value is high, see Installation options for wait times on page 837, and consider running the OMEGAMON locking reports for additional details. In the OMEGAMON accounting report, this information is reported in the field CLAIM RELEASE ( J ).

Page latch suspension: This field shows the accumulated wait time because of page latch contention. As an example, when the RUNSTATS and COPY utilities are run with the SHRLEVEL(CHANGE) option, they use a page latch to serialize the collection of statistics or the copying of a page. The page latch is a short-duration lock. If this value is high, the OMEGAMON locking reports can provide additional data to help you determine which object is the source of the contention.

In a data sharing environment, high page latch contention could occur in a multithreaded application that runs on multiple members and requires many inserts. The OMEGAMON lock suspension report shows this suspension for page latch contention in the "other" category. If the suspension is on the index leaf page, use one of the following strategies:
v Make the inserts random
v Drop the index
v Perform the inserts from a single member
If the page latch suspension is on a space map page, use the member cluster option for the table space. In the OMEGAMON accounting report, this information is reported in the field PAGE LATCH ( K ).

Not-accounted-for DB2 time: The DB2 accounting class 2 elapsed time that is not recorded as class 2 CPU time or class 3 suspensions. The most common contributors to this category are:
v z/OS paging
v Processor wait time
v On DB2 requester systems, the amount of time waiting for requests to be returned from either VTAM or TCP/IP, including time spent on the network and time spent handling the request in the target or server systems
v Time spent waiting for parallel tasks to complete (when query parallelism is used for the query)
v Some online performance monitoring
In the OMEGAMON accounting report, this information is reported in the field NOT ACCOUNT ( L ).
1. If the problem is inside DB2, determine which plan has the longest response time. If the plan can potentially allocate many different packages or DBRMs, determine which packages or DBRMs have the longest response time. Or, if you have a performance history, determine which transactions show the largest increases. Compare class 2 CPU time, class 3 time, and not-accounted time. If your performance monitoring tool does not report times other than class 2 and class 3, you can determine the not-accounted time with the following formula:
Not accounted time = Class 2 elapsed time - Class 2 CPU time - Total class 3 time
2. If the class 2 CPU time is high, investigate by doing the following:
v Check to see whether unnecessary trace options are enabled. Excessive performance tracing can be the reason for a large increase in class 2 CPU time.
v Check the SQL statement count, the getpage count, and the buffer update count on the OMEGAMON accounting report. If the profile of the SQL statements has changed significantly, review the application. If the getpage counts or the buffer update counts change significantly, check for changes to the access path and significant increases in the number of rows in the tables that are accessed.
v Use the statistics report to check buffer pool activity, including the buffer pool thresholds. If buffer pool activity has increased, be sure that your buffer pools are properly tuned. For more information on buffer pools, see Tuning database buffer pools on page 671.
v Use EXPLAIN to check the efficiency of the access paths for your application. Based on the EXPLAIN results:
  - Use package-level accounting reports to determine which package or DBRM has a long elapsed time. In addition, use the class 7 CPU time for packages to determine which package or DBRM has the largest CPU time or the greatest increase in CPU time.
  - Use the OMEGAMON SQL activity report to analyze specific SQL statements. You can also use OMEGAMON to analyze specific SQL statements, including the currently running SQL statement.
  - If you have a history of the performance of the affected application, compare current EXPLAIN output to previous access paths and costs.
  - Check that RUNSTATS statistics are current.
  - Check that databases have been reorganized using the REORG utility.
  - Check which indexes are used and how many columns are accessed. Has your application used an alternative access path because an index was dropped?
  - Examine joins and subqueries for efficiency.
  See Chapter 34, Using EXPLAIN to improve SQL performance, on page 931 for help in understanding access path selection and analyzing access path problems. DB2 Visual Explain can give you a graphic display of your EXPLAIN output on your workstation.
v Check the counts in the locking section of the OMEGAMON accounting report. If locking activity has increased, see Chapter 31, Improving concurrency, on page 813. For a more detailed analysis, use the deadlock or timeout traces from statistics trace class 3 and the lock suspension report or trace.
3. If class 3 time is high, check the individual types of suspensions in the Class 3 Suspensions section of the OMEGAMON accounting report. (The fields referred to here are labeled in Figure 69 on page 648.)
v If LOCK/LATCH ( A ), DRAIN LOCK ( I ), or CLAIM RELEASE ( J ) time is high, see Chapter 31, Improving concurrency, on page 813.
v If SYNCHRON. I/O ( B ) time is high, see Synchronous I/O suspension time on page 649.
v If OTHER READ I/O ( D ) time is high, check prefetch I/O operations, disk contention, and the tuning of your buffer pools.
v If OTHER WRITE I/O ( E ) time is high, check the I/O path, disk contention, and the tuning of your buffer pools.
v If SER.TASK SWTCH ( F ) is high, check open and close activity, as well as commit activity. A high value could also be caused by:
  - SYSLGRNG recording service
  - Data set extend/delete/define service
  Consider also the possibility that DB2 is waiting for Hierarchical Storage Manager (HSM) to recall data sets that had been migrated to tape. The amount of time that DB2 waits during the recall is specified on the RECALL DELAY parameter on installation panel DSNTIPO.
If accounting class 8 trace was active, each of these suspension times is available on a per-package or per-DBRM basis in the package block of the OMEGAMON accounting report.
4. If NOT ACCOUNT. ( L ) time is high, check for paging activity, processor wait time, return wait time for requests to be returned from VTAM or TCP/IP, wait time for completion of parallel tasks, and the use of online performance monitors. Turn off or reduce the intensity of online monitoring to eliminate or significantly reduce a high NOT ACCOUNT time that is caused by some monitors. A high NOT ACCOUNT time is acceptable if it is caused by wait time for completion of parallel tasks.
v Use RMF reports to analyze paging, CPU utilization, and other workload activities.
v Check the SER.TASK SWTCH field in the Class 3 Suspensions section of the OMEGAMON accounting reports.

Figure 70 on page 655 shows which reports you might use, depending on the nature of the problem, and the order in which to look at them.
Figure 70. Reports for analyzing DB2-related problems. The reports shown in the figure include EXPLAIN, SQL activity, record trace, locking, deadlock trace, timeout trace, statistics, I/O activity, and the console log.
If you suspect that the problem is in DB2, it is often possible to discover its general nature from the accounting reports. You can then analyze the problem in detail based on one of the branches shown in Figure 70:
v Follow the first branch, Application or data problem, when you suspect that the problem is in the application itself or in the related data. Also use this path for a further breakdown of the response time when no reason can be identified.
v The second branch, Concurrency problem, shows the reports required to investigate a lock contention problem. This is illustrated in Scenario for analyzing concurrency on page 876.
v Follow the third branch for a Global problem, such as an excessive average elapsed time per I/O. A wide variety of transactions could suffer similar problems.

Before starting the analysis in any of the branches, start the DB2 trace to support the corresponding reports. When starting the DB2 trace:
v Refer to OMEGAMON Report Reference for the types and classes needed for each report.
v To make the trace data available as soon as an experiment has been carried out, and to avoid flooding the SMF data sets with trace data, use a GTF data set as the destination for DB2 performance trace data.
  Alternatively, use the Collect Report Data function in OMEGAMON to collect performance data. You specify only the report set, not the DB2 trace types or classes you need for a specific report. Collect Report Data lets you collect data in a TSO data set that is readily available for further processing. No SMF or GTF handling is required.
v To limit the amount of trace data collected, you can restrict the trace to particular plans or users in the reports for SQL activity or locking. However, you cannot restrict the records for performance class 4 in this way, because that class traces asynchronous I/O for specific page sets. You might want to consider turning on selective traces, and be aware of the added costs incurred by tracing.
If the problem is not in DB2, check the appropriate reports from a CICS or IMS reporting tool. When CICS or IMS reports identify a commit, the timestamp can help you locate the corresponding OMEGAMON accounting trace report.

You can match DB2 accounting records with CICS accounting records. If you specify ACCOUNTREC(UOW) or ACCOUNTREC(TASK) on the DB2ENTRY RDO definition, the CICS LU 6.2 token is included in the DB2 trace records, in field QWHCTOKN of the correlation header. Specifying either option also causes a DB2 accounting record to be written after every transaction, which helps match the CICS and DB2 accounting records.

As an alternative, you can produce OMEGAMON accounting reports that summarize accounting records by CICS transaction ID. Use the OMEGAMON function Correlation Translation to select the subfield containing the CICS transaction ID for reporting.

You can synchronize the statistics recording interval with the RMF reporting interval by using the STATIME and SYNCVAL subsystem parameters. STATIME specifies the length of the statistics interval, and SYNCVAL specifies that the recording interval is synchronized with some part of the hour.

Example: If the RMF reporting interval is 15 minutes, you can set STATIME to 15 and SYNCVAL to 0 to synchronize with RMF at the beginning of the hour. These values cause DB2 statistics to be recorded at 15, 30, 45, and 60 minutes past the hour, matching the RMF report interval. To generate statistics reports at one-minute intervals, you can set STATIME to 1 and SYNCVAL to 0. These settings generate statistics at a more granular level, simplifying the task of identifying short-term spikes.

Synchronizing the statistics recording interval across data sharing members and with the RMF reporting interval is helpful because having the DB2 statistics, RMF, and CF data for identical time periods makes problem analysis more accurate.
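For reference, STATIME and SYNCVAL are set through the DSN6SYSP system parameter macro, which is assembled in the DSNTIJUZ installation job. The following is only a sketch of how the two parameters might appear; the full macro has many other parameters, and the exact job layout varies by installation (see DB2 Installation Guide):

DSN6SYSP STATIME=15,SYNCVAL=0

These values implement the 15-minute RMF example above.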
If the data characteristics of the table vary significantly over time, you should keep the catalog current with those changes. RUNSTATS is most beneficial for the following:
v Table spaces that contain frequently accessed tables
v Tables involved in a sort
v Tables with many rows
v Tables against which SELECT statements having many search arguments are performed
For some tables, there is no good time to run RUNSTATS. For example, you might use some tables for work that is in process. The tables might have only a few rows in the evening, when it is convenient to run RUNSTATS, but they might have thousands or millions of rows in them during the day. For such tables, consider these possible approaches:
v Set the statistics to a relatively high number and hope your estimates are appropriate.
v Use volatile tables. For information about defining a table as volatile, see DB2 SQL Reference.
Whichever approach you choose, monitor the tables, because optimization is adversely affected by incorrect information.
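As a sketch of these two approaches in SQL (the schema and table names are hypothetical, the CARDF value is whatever daytime estimate you choose, and the exact VOLATILE syntax is described in DB2 SQL Reference):

-- Approach 1: set the statistics manually to a representative daytime value
UPDATE SYSIBM.SYSTABLES
   SET CARDF = 500000
 WHERE CREATOR = 'MYSCHEMA' AND NAME = 'WORK_QUEUE';

-- Approach 2: define the table as volatile
ALTER TABLE MYSCHEMA.WORK_QUEUE VOLATILE;

For static SQL, a rebind is needed before manually set statistics can influence access path selection.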
holds your additional data on another page. When several records are physically located out of sequence, performance suffers.

The default PCTFREE value for table spaces is 5 (5% of the page is free). If you have previously used a large PCTFREE to force one row per page, you should instead use MAXROWS 1 on the CREATE or ALTER TABLESPACE statement. MAXROWS has the advantage of maintaining the free space even when new data is inserted.

The default PCTFREE value for indexes is 10. The maximum amount of space that is left free in index nonleaf pages is 10%, even if you specify a value higher than 10 for PCTFREE.

To determine the amount of free space currently on a page, run the RUNSTATS utility and examine the PERCACTIVE column of SYSIBM.SYSTABLEPART. See Part 2 of DB2 Utility Guide and Reference for information about using RUNSTATS.
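For example (a sketch with hypothetical database and table space names), you might replace a large PCTFREE with MAXROWS 1 and then check the free space that RUNSTATS recorded:

ALTER TABLESPACE MYDB.MYTS MAXROWS 1;

SELECT PARTITION, PERCACTIVE
  FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'MYDB' AND TSNAME = 'MYTS';

A REORG might be needed before existing pages reflect the new MAXROWS value.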
When to use FREEPAGE: Use FREEPAGE rather than PCTFREE if MAXROWS is 1 or rows are larger than half a page, because you cannot insert a second row on a page.

Additional recommendations:
v For concurrency, use MAXROWS or larger PCTFREE values for small tables and shared table spaces that use page locking. This reduces the number of rows per page, thus reducing the frequency with which any given page is accessed.
v For the DB2 catalog table spaces and indexes, use the defaults for PCTFREE. If additional free space is needed, use FREEPAGE.
devices also improves performance for nonpartitioned table spaces. You might consider partitioning any nonpartitioned table spaces that have excessive I/O contention at the data set level.
For example, to ensure that you have 5 data sets for a 10-MB nonpartitioned index that is not likely to grow much, specify PIECESIZE 2M. If your nonpartitioned index is likely to grow, choose a larger value.

When choosing a value, remember that the maximum partition size of the table space determines the maximum number of data sets that the index can use. If the underlying table space is defined with a DSSIZE of 4G or greater (or with LARGE), the limit is 254 pieces for table spaces with 254 parts or less and 4096 pieces for table spaces with more than 254 parts; otherwise, the limit is 32 pieces. Nonpartitioned indexes that were created on LARGE table spaces in Version 5 can have only 128 pieces. If an attempt is made to allocate more data sets than the limit, an abend occurs.

Keep your PIECESIZE value in mind when you are choosing values for primary and secondary quantities. Although PIECESIZE has no effect on primary and secondary space allocation, ideally your primary quantity and secondary quantities should divide evenly into PIECESIZE to avoid wasting space. Because the underlying data sets are always allocated at the size of PRIQTY and extended, when possible, by the size of SECQTY, understand how their values interact with the PIECESIZE value:
- If PRIQTY is larger than PIECESIZE, a new data set is allocated and used when the file size exceeds PIECESIZE. Thus, part of the allocated primary storage goes unused, and no secondary extents are created.
- If PRIQTY is smaller than PIECESIZE and SECQTY is not zero, secondary extents are created until the total file size equals or exceeds PIECESIZE. After the allocation of a secondary extent causes the total file size to meet or exceed PIECESIZE, a new data set is allocated and used. When the total file size exceeds PIECESIZE, the part of secondary storage that is allocated beyond PIECESIZE goes unused.
- If PRIQTY is smaller than PIECESIZE and SECQTY is zero, an unavailable resource message is returned when the data set fills up. No secondary extents are created, nor are additional data sets allocated.
v Identifying suitable nonpartitioned indexes. If a nonpartitioned index has a lot of I/O and a high IOS queue time, consider using the Parallel Access Volumes (PAV) feature and the multiple allegiance feature. For more information about these features, see Storage servers and channel subsystems on page 715. Also, consider breaking up the index into smaller pieces.
  Use the statistics trace to identify I/O-intensive data sets. IFCID 199 contains information about every data set that averages more than one I/O per second during the statistics interval. IOS queue time that is 2 or 3 times higher than connect time is considered high. The RMF (Resource Measurement Facility) Device Activity report provides IOS time and CONN time.
v Determining the number of pieces a nonpartitioned index is using. You can use one of the following techniques to determine the number of pieces that a nonpartitioned index uses:
  - For DB2-managed data sets, use access method services LISTCAT to check the number of data sets that have been created.
  - For user-managed data sets, examine the high-used RBA (HURBA) for each data set.
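Continuing the example, a PIECESIZE specification might look like the following sketch (the index, table, and storage group names are patterned on the DB2 sample database and are illustrative; PRIQTY and SECQTY are in KB and are placeholder values):

CREATE INDEX DSN8810.XPROJAC1
    ON DSN8810.PROJACT (PROJNO, ACTNO)
  USING STOGROUP DSN8G810
    PRIQTY 512
    SECQTY 256
  PIECESIZE 2M;

Here PRIQTY is smaller than PIECESIZE and SECQTY is nonzero, so each data set grows by secondary extents until it reaches 2 MB, at which point a new piece is allocated.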
For a single query, the recommended number of work file disk volumes is one-fifth the maximum number of data partitions, with 5 as a minimum and 50 as a maximum. For concurrently running queries, multiply this value by the number of concurrent queries. In addition, in a query parallelism environment, the number of work file disk volumes should be at least equal to the maximum number of parallel operations that is seen for queries in the given workload.

Place these volumes on different channel or control unit paths. Monitor the I/O activity for the work file table spaces, because you might need to further separate this work file activity to avoid contention.

As the amount of work file activity increases, consider increasing the size of the buffer pool for work files to support concurrent activities more efficiently. The general recommendation for the work file buffer pool is to increase the size to minimize the following buffer pool statistics:
v MERGE PASSES DEGRADED, which should be less than 1% of MERGE PASS REQUESTED
v WORKFILE REQUESTS REJECTED, which should be less than 1% of WORKFILE REQUEST ALL MERGE PASSES
v Synchronous read I/O, which should be less than 1% of pages read by prefetch
v Occurrences of a prefetch quantity of 4 pages or less; the prefetch quantity should be near 8
During the installation or migration process, you allocated table spaces for 4-KB, 8-KB, 16-KB, and 32-KB buffering.

Steps to create an additional work file table space: Use the following steps to create a new work file table space, xyz. (If you are using DB2-managed data sets, omit the step to create the data sets.)
1. Define the required data sets using the VSAM DEFINE CLUSTER statement. You might want to use the definitions in the edited installation job DSNTIJTM as a model. For more information about job DSNTIJTM and the number of work files, see DB2 Installation Guide.
2. Create the work file table space by entering the following SQL statement:
CREATE TABLESPACE xyz IN DSNDB07
  BUFFERPOOL BP7
  CLOSE NO
  USING VCAT DSNC810;
v Using fixed-length records on page 667

Other ways to reduce DB2 consumption of processor resources include:
v Consider caching authorizations for plans, packages, and routines (user-defined functions and stored procedures). See Caching authorization IDs for best performance on page 151 for more information.
v Use an isolation level of cursor stability and CURRENTDATA(NO) to allow lock avoidance, as in the bind sketch that follows this list. See The ISOLATION option on page 850 for more information.
v Avoid using excess granularity for locking, such as row-level locking. See LOCKSIZE clause of CREATE and ALTER TABLESPACE on page 842.
v Reorganize indexes and table spaces. Reorganizing can improve the performance of access paths. Reorganizing indexes and table spaces is also important after table definition changes, such as changing the data type of a column, so that the data is converted to its new definition. Otherwise, DB2 must track the data and apply the changes as the data is accessed. See When to reorganize indexes and table spaces on page 924.
v Ensure that your access paths are effective. Have only the indexes that you need. Update statistics and rebind as necessary. See Gathering monitor statistics and update statistics on page 916 and Whether to rebind after gathering statistics on page 927.
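A minimal sketch of the second item in the list above, binding with cursor stability and CURRENTDATA(NO) (the plan and DBRM member names are hypothetical):

BIND PLAN(MYPLAN) -
     MEMBER(MYPROG) -
     ISOLATION(CS) -
     CURRENTDATA(NO)

Issue the subcommand from a DSN session; CURRENTDATA(NO) lets DB2 avoid taking locks on data that a cursor reads but does not update.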
Global trace
Global trace requires 2% to 100% additional processor utilization. If conditions permit at your site, the DB2 global trace should be turned off. You can do this by specifying NO for the field TRACE AUTO START on panel DSNTIPN at installation. Then, if the global trace is needed for serviceability, you can start it using the START TRACE command.
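For example (a sketch; add CLASS or other qualifiers as your serviceability requirements dictate), you could start the global trace only when it is needed and stop it afterward:

-START TRACE(GLOBAL)
-STOP TRACE(GLOBAL)

Starting the trace on demand avoids the steady overhead of an auto-started global trace.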
Typically, an online transaction incurs an additional 2.5% when running with accounting class 2. A typical batch query application, which accesses DB2 more often, incurs about 10% overhead when running with accounting class 2. If most of your work is through CICS, you most likely do not need to run with class 2, because the class 1 and class 2 times are very close.

Exception: If you are using CICS Transaction Server for z/OS 2.2 with the Open Transaction Environment (OTE), activate and run class 2.

If you have very light DB2 usage and you are using Measured Usage, then you need the SMF 89 records. In other situations, be sure that SMF 89 records are not recorded, to avoid this overhead.
Audit trace
The performance impact of auditing is directly dependent on the amount of audit data produced. When the audit trace is active, the more tables that are audited and the more transactions that access them, the greater the performance impact. The overhead of the audit trace is typically less than 5%.

When estimating the performance impact of the audit trace, consider the frequency of certain events. For example, security violations are not as frequent as table accesses. The frequency of utility runs is likely to be measured in executions per day. In contrast, authorization changes can be numerous in a transaction environment.
Performance trace
The combined overhead of all performance classes runs from about 20% to 100%. The overhead for performance trace classes 1 through 3 is typically in the range of 5% to 30%. Therefore, turn on only the performance trace classes required to address a specific performance problem, and qualify the trace as much as possible to limit the data that is gathered to only the data that you need. For example, qualify the trace by the plan name and IFCID. Suppressing other trace options, such as TSO, IRLM, z/OS, IMS, and CICS traces, can also reduce overhead.
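For instance, a performance trace qualified by plan name and directed to GTF, as recommended earlier for performance trace data, might be started as follows (a sketch; the plan name, class, and destination are illustrative):

-START TRACE(PERFM) CLASS(3) PLAN(MYPLAN) DEST(GTF)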
End user response time: This is the time from the moment the end user presses the Enter key until he or she receives the first response back at the terminal.

DB2 accounting elapsed times: These times are collected in the records from the accounting trace and can be found in the OMEGAMON accounting reports. They are taken over the accounting interval between the point where DB2 starts to execute the first SQL statement and the point preceding thread termination or reuse by a different user (sign-on). This interval excludes the time spent creating a thread, and it includes a portion of the time spent terminating a thread. For parallelism, there are special considerations for doing accounting. See Monitoring parallel operations on page 1001 for more information.

Elapsed times for stored procedures or user-defined functions separate the time spent in the allied address space and the time spent in the stored procedures address space. There are two elapsed times:
v Class 1 elapsed time. This time is always presented in the accounting record and shows the duration of the accounting interval. It includes time spent in DB2 as well as time spent in the front end. In the accounting reports, it is referred to as application time.
v Class 2 elapsed time. Class 2 elapsed time, produced only if accounting class 2 is active, counts only the time spent in the DB2 address space during the accounting interval. It represents the sum of the times from any entry into DB2 until the corresponding exit from DB2. It is also referred to as the time spent in DB2. If class 2 is not active for the duration of the thread, the class 2 elapsed time does not reflect the entire DB2 time for the thread, but only the time when the class was active.

DB2 total transit time: In the particular case of an SQL transaction or query, the total transit time is the elapsed time from the beginning of create thread, or sign-on of another authorization ID when reusing the thread, until either the end of thread termination, or the sign-on of another authorization ID.
Figure 71. Transaction response times. Class 1 is standard accounting data. Class 2 is elapsed and processor time in DB2. Class 3 is elapsed wait time in DB2. Standard accounting data is provided in IFCID 0003, which is turned on with accounting class 1. When accounting classes 2 and 3 are turned on as well, IFCID 0003 contains additional information about DB2 times and wait times.
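As a sketch, the accounting classes discussed here might be activated with the following command (SMF is a common destination for accounting data; choose classes to match the level of detail you need):

-START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)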
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools
Proper tuning of your buffer pools, EDM pools, RID pools, and sort pools can improve the response time and throughput for your applications and provide optimum resource utilization. Using data compression can also improve buffer-pool hit ratios and reduce table space I/O rates. For more information on compression, see Compressing your data on page 708.

This chapter covers the following topics:
v Tuning database buffer pools
v Tuning EDM storage on page 685
v Increasing RID pool size on page 689
v Controlling sort pool size and sort processing on page 690
Read operations
DB2 uses three read mechanisms: normal read, sequential prefetch, and list sequential prefetch.

Normal read: Normal read is used when just one or a few consecutive pages are retrieved. The unit of transfer for a normal read is one page.

Sequential prefetch: Sequential prefetch is performed concurrently with other operations of the originating application program. It brings pages into the buffer pool before they are required and reads several pages with a single I/O operation. Sequential prefetch can be used to read data pages, by table space scans or index scans with clustered data reference. It can also be used to read index pages in an index scan. Sequential prefetch allows CP and I/O operations to be overlapped. See Sequential prefetch (PREFETCH=S) on page 973 for a complete description of sequential prefetch.

List sequential prefetch: List sequential prefetch is used to prefetch data pages that are not contiguous (such as through non-clustered indexes). List prefetch can also be used by incremental image copy. For a complete description of the mechanism, see List prefetch (PREFETCH=L) on page 974.
Write operations
Write operations are usually performed concurrently with user requests. Updated pages are queued by data set until they are written when:
v A checkpoint is taken.
v The percentage of updated pages in a buffer pool for a single data set exceeds a preset limit called the vertical deferred write threshold (VDWQT). For more information on this threshold, see Buffer pool thresholds on page 673.
v The percentage of unavailable pages in a buffer pool exceeds a preset limit called the deferred write threshold (DWQT). For more information on this threshold, see Buffer pool thresholds on page 673.

Table 104 lists how many pages DB2 can write in a single I/O operation.
Table 104. Number of pages that DB2 can write in a single I/O operation

Page size    Number of pages
2 KB         64
4 KB         32
8 KB         16
16 KB        8
32 KB        4
Fixed thresholds
Some thresholds, like the immediate write threshold, you cannot change. Monitoring buffer pool usage includes noting how often those thresholds are reached. If they are reached too often, the remedy is to increase the size of the buffer pool, which you can do with the ALTER BUFFERPOOL command. Increasing the size, though, can affect other buffer pools, depending on the total amount of real storage available for your buffers.

The fixed thresholds are more critical for performance than the variable thresholds. Generally, you want to set buffer pool sizes large enough to avoid reaching any of these thresholds, except occasionally. Each of the fixed thresholds is expressed as a percentage of the buffer pool that might be occupied by unavailable pages. From the highest value to the lowest value, the fixed thresholds are:
v Immediate write threshold (IWTH): 97.5%
  This threshold is checked whenever a page is to be updated. If the threshold has been exceeded, the updated page is written to disk as soon as the update completes. The write is synchronous with the SQL request; that is, the request waits until the write is completed. The two operations do not occur concurrently.
  Reaching this threshold has a significant effect on processor usage and I/O resource consumption. For example, updating three rows per page in 10 sequential pages ordinarily requires one or two write operations. However, when IWTH has been exceeded, the updates require 30 synchronous writes.
  Sometimes DB2 uses synchronous writes even when the IWTH has not been exceeded. For example, when more than two checkpoints pass without a page being written, DB2 uses synchronous writes. Situations such as these do not indicate a buffer shortage.
v Data management threshold (DMTH): 95%
  This threshold is checked before a page is read or updated. If the threshold is not exceeded, DB2 accesses the page in the buffer pool once for each page, no matter how many rows are retrieved or updated in that page. If the threshold is exceeded, DB2 accesses the page in the buffer pool once for each row that is retrieved or updated in that page.
  Recommendation: Avoid reaching the DMTH because it has a significant effect on processor usage. The DMTH is maintained for each individual buffer pool. When the DMTH is reached in one buffer pool, DB2 does not release pages from other buffer pools.
v Sequential prefetch threshold (SPTH): 90%
  This threshold is checked at two different times:
  - Before scheduling a prefetch operation. If the threshold has been exceeded, the prefetch is not scheduled.
  - During buffer allocation for an already-scheduled prefetch operation. If the threshold has been exceeded, the prefetch is canceled.
  When the sequential prefetch threshold is reached, sequential prefetch is inhibited until more buffers become available. Operations that use sequential prefetch, such as those using large and frequent scans, are adversely affected.
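To see how often the fixed thresholds are being reached, display the buffer pool detail statistics. In the output (see Figure 72 later in this chapter), the DMTH HIT counter and the NO BUFFER counter under PREFETCH DISABLED indicate DMTH and SPTH problems, respectively:

-DISPLAY BUFFERPOOL(BP0) DETAIL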
From highest to lowest default value, the variable thresholds are:
v Sequential steal threshold (VPSEQT)
  This threshold is a percentage of the buffer pool that might be occupied by sequentially accessed pages. These pages can be in any state: updated, in-use, or available. Hence, any page might or might not count toward exceeding any other buffer pool threshold.
  The default value for this threshold is 80%. You can change that to any value from 0% to 100% by using the VPSEQT option of the ALTER BUFFERPOOL command.
  This threshold is checked before stealing a buffer for a sequentially accessed page instead of accessing the page in the buffer pool. If the threshold has been exceeded, DB2 tries to steal a buffer holding a sequentially accessed page rather than one holding a randomly accessed page.
  Setting the threshold to 0% would prevent any sequential pages from taking up space in the buffer pool. In this case, prefetch is disabled, and any sequentially accessed pages are discarded as soon as they are released. Setting the threshold to 100% allows sequential pages to monopolize the entire buffer pool.
v Virtual buffer pool parallel sequential threshold (VPPSEQT)
  This threshold is a portion of the buffer pool that might be used to support parallel operations. It is measured as a percentage of the sequential steal threshold (VPSEQT). Setting VPPSEQT to zero disables parallel operation.
  The default value for this threshold is 50% of the sequential steal threshold (VPSEQT). You can change that to any value from 0% to 100% by using the VPPSEQT option on the ALTER BUFFERPOOL command.
v Virtual buffer pool assisting parallel sequential threshold (VPXPSEQT)
  This threshold is a portion of the buffer pool that might be used to assist with parallel operations initiated from another DB2 in the data sharing group. It is measured as a percentage of VPPSEQT. Setting VPXPSEQT to zero disallows this DB2 from assisting with Sysplex query parallelism at run time for queries that use this buffer pool. For more information about Sysplex query parallelism, see Chapter 6 of DB2 Data Sharing: Planning and Administration.
  The default value for this threshold is 0% of the parallel sequential threshold (VPPSEQT). You can change that to any value from 0% to 100% by using the VPXPSEQT option on the ALTER BUFFERPOOL command.
v Deferred write threshold (DWQT)
  This threshold is a percentage of the buffer pool that might be occupied by unavailable pages, including both updated pages and in-use pages.
  The default value for this threshold is 30%. You can change that to any value from 0% to 90% by using the DWQT option on the ALTER BUFFERPOOL command.
  DB2 checks this threshold when an update to a page is completed. If the percentage of unavailable pages in the buffer pool exceeds the threshold, write operations are scheduled for enough data sets (at up to 128 pages per data set) to decrease the number of unavailable buffers to 10% below the threshold. For example, if the threshold is 50%, the number of unavailable buffers is reduced to 40%.
  When the deferred write threshold is reached, the data sets with the oldest updated pages are written asynchronously. DB2 continues writing pages until the ratio goes below the threshold.
v Vertical deferred write threshold (VDWQT)
  This threshold is similar to the deferred write threshold, but it applies to the number of updated pages for a single page set in the buffer pool. If the percentage or number of updated pages for the data set exceeds the threshold, writes are scheduled for that data set, up to 128 pages.
  You can specify this threshold in one of two ways:
  - As a percentage of the buffer pool that might be occupied by updated pages from a single page set. The default value for this threshold is 5%. You can change the percentage to any value from 0% to 90%.
  - As the total number of buffers in the buffer pool that might be occupied by updated pages from a single page set. You can specify the number of buffers from 0 to 9999. If you want to use the number of buffers as your threshold, you must set the percentage threshold to 0.
  Changing the threshold: Change the percentage or number of buffers by using the VDWQT keyword on the ALTER BUFFERPOOL command. Because any buffers that count toward VDWQT also count toward DWQT, setting the VDWQT percentage higher than DWQT has no effect: DWQT is reached first, write operations are scheduled, and VDWQT is never reached. Therefore, the ALTER BUFFERPOOL command does not allow you to set the VDWQT percentage to a value greater than DWQT. You can specify a number of buffers for VDWQT that is higher than DWQT, but again, with no effect.
  This threshold is overridden by certain DB2 utilities, which use a constant limit of 64 pages rather than a percentage of the buffer pool size. LOAD, REORG, and RECOVER use a constant limit of 128 pages.
  Setting VDWQT to 0: If you set VDWQT to zero, DB2 implicitly uses the minimum value of 1% of the buffer pool (a specific number of pages) to avoid synchronous writes to disk. The number of pages is determined by the buffer pool page size, as shown in Table 105:
Table 105. Number of changed pages based on buffer pool page size

Buffer pool page size    Number of changed pages
4 KB                     40
8 KB                     24
16 KB                    16
32 KB                    12
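Putting the variable thresholds together, a single command can set several of them at once. The following sketch uses illustrative values, not recommendations (BP1 is a placeholder):

-ALTER BUFFERPOOL(BP1) VPSEQT(80) VPPSEQT(50) DWQT(30) VDWQT(5,0)

In VDWQT(5,0), the first value is the percentage form of the threshold and the second is the absolute buffer count; leaving the count at 0 means the percentage form is used.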
(and possibly the data management threshold and the immediate write threshold) to be reached frequently. You might need to set DWQT and VDWQT lower in that case.

Pages are rarely referenced: Suppose that you have a customer table in a bank that has millions of rows that are accessed randomly or are updated sequentially in batch. In this case, lowering the DWQT or VDWQT thresholds (perhaps down to 0) can avoid a surge of write I/Os caused by the DB2 checkpoint. Lowering those thresholds causes the write I/Os to be distributed more evenly over time. Secondly, this can improve performance for the storage controller cache by avoiding the problem of flooding the device at DB2 checkpoint.

Query-only buffer pools: For a buffer pool used exclusively for query processing, setting VPSEQT to 100% is reasonable. If parallel query processing is a large part of the workload, set VPPSEQT and, if applicable, VPXPSEQT, to a very high value.

Mixed workloads: For a buffer pool used for both query and transaction processing, the value you set for VPSEQT should depend on the respective priority of the two types of processing. The higher you set VPSEQT, the better queries tend to perform, at the expense of transactions. If you are not sure what value to set for VPSEQT, use the default setting.

Buffer pools containing LOBs: Put LOB data in buffer pools that are not shared with other data. For both LOG YES and LOG NO LOBs, use a deferred write threshold (DWQT) of 0. LOBs specified with LOG NO have their changed pages written at commit time (force-at-commit processing). If you set DWQT to 0, those writes happen continuously in the background rather than in a large surge at commit. LOBs defined with LOG YES can use deferred write, but by setting DWQT to 0, you can avoid massive writes at DB2 checkpoints.
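For example, a buffer pool dedicated to LOB data might be altered as follows (a sketch; BP32K1 is a placeholder pool name):

-ALTER BUFFERPOOL(BP32K1) DWQT(0) VDWQT(0,0)

With both thresholds at 0, changed LOB pages trickle out continuously instead of surging at commit or checkpoint.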
The buffer pool hit ratio is computed as follows:

Hit ratio = (getpages - pages_read_from_disk) / getpages

where pages_read_from_disk is the sum of the following fields:
v Number of synchronous reads (field B in Figure 73 on page 683)
v Number of pages read via sequential prefetch (field C )
v Number of pages read via list prefetch (field D )
v Number of pages read via dynamic prefetch (field E )

Example: If you have 1000 getpages and 100 pages were read from disk, the equation would be as follows:
Hit ratio = (1000-100)/1000
The hit ratio in this case is 0.9.

Highest hit ratio: The highest possible value for the hit ratio is 1.0, which is achieved when every page requested is always in the buffer pool. Index nonleaf pages tend to have a very high hit ratio, because they are frequently re-referenced and thus tend to stay in the buffer pool.

Lowest hit ratio: The lowest hit ratio occurs when the requested page is not in the buffer pool; in this case, the hit ratio is 0 or less. A negative hit ratio means that prefetch has brought pages into the buffer pool that are not subsequently referenced. The pages are not referenced because either the query stops before it reaches the end of the table space, or DB2 must take the pages away to make room for newer ones before the query can access them.

A low hit ratio is not always bad: While it might seem desirable to make the buffer hit ratio as close to 1.0 as possible, do not automatically assume that a low buffer-pool hit ratio is bad. The hit ratio is a relative value, based on the type of application. For example, an application that browses huge amounts of data using table space scans might very well have a buffer-pool hit ratio of 0. What you want to watch for is those cases where the hit ratio drops significantly for the same application. In those cases, it might be helpful to investigate further.

Hit ratios for additional processes: The hit ratio measurement becomes less meaningful if the buffer pool is being used by additional processes, such as work files or utilities. Some utilities and SQL statements use a special type of getpage request that reserves an empty buffer without requiring that the page be read from disk. A getpage is issued for each empty work file page without read I/O during sort input processing. The hit ratio can be calculated if the work files are isolated in their own buffer pools. If they are, then the number of getpages used for the hit ratio formula is divided in half as follows:
Hit ratio = ((getpages / 2) - pages_read_from_disk) / (getpages / 2)
Problems with paging: Paging occurs when the virtual storage requirements for a buffer pool exceed the real storage capacity for the z/OS image. In this case, the least recently used data pages in the buffer pool are migrated to auxiliary storage. Subsequent access to these pages results in a page fault, and the page must be brought into real storage from auxiliary storage. Paging of buffer pool storage can negatively affect DB2 performance. The statistics for PAGE-INS REQUIRED FOR WRITE and PAGE-INS REQUIRED FOR READ shown in Figure 73 on page 683 are useful in determining whether the buffer pool size setting is too large for the available real storage.

Allocating buffer pool storage: DB2 limits the total amount of storage that is allocated for virtual buffer pools to approximately twice the amount of real storage. However, to avoid paging, it is strongly recommended that you set the total buffer pool size to less than the real storage that is available to DB2.

Recommendation: The total buffer pool storage should not exceed the available real storage.

DB2 allocates the minimum buffer pool storage as shown in Table 106.
Table 106. Buffer pool storage allocation

Buffer pool page size    Minimum number of pages allocated
4 KB                     2000
8 KB                     1000
16 KB                    500
32 KB                    250
If the amount of virtual storage that is allocated to buffer pools is more than twice the amount of real storage, you cannot increase the buffer pool size.
Reasons to choose a single buffer pool: If your system has any or all of the following conditions, it is probably best to choose a single 4-KB buffer pool:
v It is already storage constrained.
v You have no one with the application knowledge necessary to do more specialized tuning.
v It is a test system.

Reasons to choose more than one buffer pool: You can benefit from the following advantages if you use more than one buffer pool:
v You can isolate data in separate buffer pools to favor certain applications, data, and indexes. This benefit is twofold:
  - You can favor certain data and indexes by assigning more buffers. For example, you might improve the performance of large buffer pools by putting indexes into separate pools from data.
  - You can customize buffer pool tuning parameters to match the characteristics of the data. For example, you might want to put tables and indexes that are updated frequently into a buffer pool with different characteristics from those that are frequently accessed but infrequently updated.
v You can put work files into a separate buffer pool. This can provide better performance for sort-intensive queries. Applications that use created temporary tables use work files for those tables. Keeping work files separate allows you to monitor temporary table activity more easily.
v This process of segregating different activities and data into separate buffer pools has the advantage of providing good and relatively inexpensive performance diagnosis data from statistics and accounting traces.
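Objects are assigned to buffer pools through the BUFFERPOOL clause, as in the following sketch (hypothetical database, table space, schema, and index names):

ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP1;
ALTER INDEX MYSCHEMA.MYINDEX BUFFERPOOL BP2;

The new assignment generally takes effect when the underlying data sets are next opened.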
Recommendation: Use PGFIX(YES) for buffer pools with a high I/O rate, that is, a high number of pages read or written. For buffer pools with zero I/O, such as some read-only data or some indexes with a nearly 100% hit ratio, PGFIX(YES) is not recommended. In these cases, PGFIX(YES) does not provide a performance advantage. To prevent PGFIX(YES) buffer pools from exceeding the real storage capacity, DB2 uses an 80% threshold when allocating PGFIX(YES) buffer pools. If the threshold is exceeded, DB2 overrides the PGFIX(YES) option with PGFIX(NO).
+DISPLAY BPOOL(BP0) DETAIL
DSNB401I + BUFFERPOOL NAME BP0, BUFFERPOOL ID 0, USE COUNT 47
DSNB402I + BUFFERPOOL SIZE = 2000 BUFFERS
             ALLOCATED      =     2000   TO BE DELETED     =        0
             IN-USE/UPDATED =        0
DSNB406I + PAGE STEALING METHOD = LRU
DSNB404I + THRESHOLDS
             VP SEQUENTIAL         =  80   DEFERRED WRITE      = 85
             VERTICAL DEFERRED WRT =  80,  PARALLEL SEQUENTIAL = 50
             ASSISTING PARALLEL SEQ =  0
DSNB409I + INCREMENTAL STATISTICS SINCE 14:57:55 JAN 22, yyyy
DSNB411I + RANDOM GETPAGE =   491222   SYNC READ I/O (R) A =  18193
             SEQ. GETPAGE =  1378500   SYNC READ I/O (S) B =      0
             DMTH HIT     =        0   PAGE-INS REQUIRED   = 460400
DSNB412I + SEQUENTIAL PREFETCH
             REQUESTS   C =    41800   PREFETCH I/O      D =  14473
             PAGES READ E =   444030
DSNB413I + LIST PREFETCH
             REQUESTS     =     9046   PREFETCH I/O        =   2263
             PAGES READ   =     3046
DSNB414I + DYNAMIC PREFETCH
             REQUESTS     =     6680   PREFETCH I/O        =    142
             PAGES READ   =     1333
DSNB415I + PREFETCH DISABLED
             NO BUFFER    =        0   NO READ ENGINE      =      0
DSNB420I + SYS PAGE UPDATES F = 220425  SYS PAGES WRITTEN G = 35169
             ASYNC WRITE I/O   =  5084  SYNC WRITE I/O      =     3
             PAGE-INS REQUIRED =    45
DSNB421I + DWT HIT H =        2   VERTICAL DWT HIT I =      0
             NO WRITE ENGINE =      0
DSNB440I + PARALLEL ACTIVITY
             PARALLEL REQUEST =     0   DEGRADED PARALLEL =    0
DSNB441I + LPL ACTIVITY
             PAGES ADDED =      0
DSN9022I + DSNB1CMD '+DISPLAY BPOOL' NORMAL COMPLETION

Figure 72. Sample output from the DISPLAY BUFFERPOOL command
In Figure 72, find the following fields:
v SYNC READ I/O (R) ( A ) shows the number of random synchronous read I/O operations. SYNC READ I/O (S) ( B ) shows the number of sequential synchronous read I/O operations. Sequential synchronous read I/Os occur when prefetch is disabled. To determine the total number of synchronous read I/Os, add SYNC READ I/O (S) and SYNC READ I/O (R).
v In message DSNB412I, REQUESTS ( C ) shows the number of times that sequential prefetch was triggered, and PREFETCH I/O ( D ) shows the number of times that sequential prefetch occurred. PAGES READ ( E ) shows the number of pages read using sequential prefetch.
v SYS PAGE UPDATES ( F ) corresponds to the number of buffer updates.
v SYS PAGES WRITTEN ( G ) is the number of pages written to disk.
v DWT HIT ( H ) is the number of times the deferred write threshold (DWQT) was reached. This number is workload dependent.
v VERTICAL DWT HIT ( I ) is the number of times the vertical deferred write threshold (VDWQT) was reached. This value is per data set, and it is related to the number of asynchronous writes.
Because the number of synchronous read I/Os ( A ) and the number of SYS PAGE UPDATES ( F ) are relatively high, you would want to tune the buffer pools by changing the buffer pool specifications. For example, you could increase the buffer
pool size to reduce the amount of unnecessary I/O, which would make buffer operations more efficient. To do that, enter the following command:
-ALTER BUFFERPOOL(BP0) VPSIZE(6000)
To obtain buffer pool information on a specific data set, you can use the LSTATS option of the DISPLAY BUFFERPOOL command. For example, you can use the LSTATS option to:
v Provide page count statistics for a certain index. With this information, you could determine whether a query used the index in question, and perhaps drop the index if it was not used.
v Monitor the response times on a particular data set. If you determine that I/O contention is occurring, you could redistribute the data sets across your available disks.
This same information is available with IFCID 0199 (statistics class 8). For more information on the ALTER BUFFERPOOL or DISPLAY BUFFERPOOL commands, see Chapter 2 of DB2 Command Reference.
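For example, a command of the following general form displays data set level statistics for a buffer pool (shown as an illustration; see DB2 Command Reference for the complete LSTATS syntax):

-DISPLAY BUFFERPOOL(BP0) LSTATS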
TOT4K READ OPERATIONS         QUANTITY   TOT4K WRITE OPERATIONS        QUANTITY
---------------------------   --------   ---------------------------   --------
BPOOL HIT RATIO (%)      A       73.12   BUFFER UPDATES                  220.4K
GETPAGE REQUEST                1869.7K   PAGES WRITTEN                 35169.00
GETPAGE REQUEST-SEQUENTIAL     1378.5K   BUFF.UPDATES/PAGES WRITTEN H      6.27
GETPAGE REQUEST-RANDOM          491.2K
SYNCHRONOUS READS        B    54187.00   SYNCHRONOUS WRITES         I      3.00
SYNCHRON. READS-SEQUENTIAL    35994.00   ASYNCHRONOUS WRITES            5084.00
SYNCHRON. READS-RANDOM        18193.00   PAGES WRITTEN PER WRITE I/O J     5.78
GETPAGE PER SYN.READ-RANDOM      27.00
                                         HORIZ.DEF.WRITE THRESHOLD         2.00
SEQUENTIAL PREFETCH REQUEST   41800.00   VERTI.DEF.WRITE THRESHOLD         0.00
SEQUENTIAL PREFETCH READS     14473.00   DM THRESHOLD               K      0.00
PAGES READ VIA SEQ.PREFETCH C   444.0K   WRITE ENGINE NOT AVAILABLE L      0.00
S.PRF.PAGES READ/S.PRF.READ      30.68   PAGE-INS REQUIRED FOR WRITE M    45.00

LIST PREFETCH REQUESTS         9046.00
LIST PREFETCH READS            2263.00
PAGES READ VIA LST PREFETCH D  3046.00
L.PRF.PAGES READ/L.PRF.READ       1.35

DYNAMIC PREFETCH REQUESTED     6680.00
DYNAMIC PREFETCH READS          142.00
PAGES READ VIA DYN.PREFETCH E  1333.00
D.PRF.PAGES READ/D.PRF.READ       9.39

PREF.DISABLED-NO BUFFER   F       0.00
PREF.DISABLED-NO READ ENG G       0.00

PAGE-INS REQUIRED FOR READ      460.4K

Figure 73. Sample buffer pool statistics from an OMEGAMON statistics report (4-KB buffer pools)
The formula for the buffer-pool hit ratio (fields A through E) is explained in The buffer-pool hit ratio on page 677.
Increase the buffer pool size or reduce the workload if:
v Sequential prefetch is inhibited. PREF.DISABLED-NO BUFFER ( F ) shows how many times sequential prefetch was disabled because the sequential prefetch threshold (90% of the pages in the buffer pool are unavailable) was reached.
v You detect poor update efficiency. You can determine update efficiency by checking the values in both of the following fields:
  - BUFF.UPDATES/PAGES WRITTEN ( H )
  - PAGES WRITTEN PER WRITE I/O ( J )
In evaluating the values you see in these fields, remember that no values are absolutely acceptable or absolutely unacceptable. Each installation's workload is a special case. To assess the update efficiency of your system, monitor for overall trends rather than for absolute high values for these ratios. The following factors affect buffer updates per pages written and pages written per write I/O:
  - Sequential nature of updates
  - Number of rows per page
  - Row update frequency
For example, a batch program that processes a table in skip sequential mode with a high row update frequency in a dedicated environment can achieve very good update efficiency. In contrast, update efficiency tends to be lower for transaction processing applications, because transaction processing tends to be random.
The following factors affect the ratio of pages written per write I/O:
  - Checkpoint frequency. The CHECKPOINT FREQ field on installation panel DSNTIPN specifies either the number of consecutive log records written between DB2 system checkpoints or the number of minutes between DB2 system checkpoints. You can use a large value to specify CHECKPOINT FREQ in number of log records; the default value is 500 000. You can use a small value to specify CHECKPOINT FREQ in minutes; in that case, the recommended setting is 2 to 5 minutes. At checkpoint time, I/Os are scheduled to write all updated pages on the deferred write queue to disk. If system checkpoints occur too frequently, the deferred write queue does not grow large enough to achieve a high ratio of pages written per write I/O.
  - Frequency of active log switch. DB2 takes a system checkpoint each time the active log is switched. If the active log data sets are too small, checkpoints occur often, which prevents the deferred write queue from growing large enough to achieve a high ratio of pages written per write I/O. For recommendations on active log data set size, see Log capacity on page 703.
  - Buffer pool size. The deferred write thresholds (VDWQT and DWQT) are a function of buffer pool size. If the buffer pool size is decreased, these thresholds are reached more frequently, causing I/Os to be scheduled more often to write some of the pages on the deferred write queue to disk. This prevents the deferred write queue from growing large enough to achieve a high ratio of pages written per write I/O.
  - Number of data sets, and the spread of updated pages across them. The maximum number of pages written per write I/O is 32, subject to a limiting scope of 180 pages (roughly one cylinder). Example: If your application updates page 2 and page 179 in a series of pages, the two changed pages could potentially be written with one write I/O. But if your application updates page 2 and page 185 within a series of pages, writing the two changed pages would require two write I/Os because of the 180-page limit. (An illustrative sketch of this grouping follows these lists.) Updated pages are placed in a deferred write queue
based on the data set. For batch processing it is possible to achieve a high ratio of pages written per write I/O, but for transaction processing the ratio is typically lower. For LOAD, REORG, and RECOVER, the maximum number of pages written per write I/O is 64, and there is typically no limiting scope. However, in some cases, such as loading a partitioned table space with nonpartitioned indexes, a 180-page limit scope exists.
v SYNCHRONOUS WRITES ( I ) is a high value. This field counts the number of immediate writes. However, immediate writes are not the only type of synchronous write, so providing a monitoring value for the number of immediate writes can be difficult. Ignore SYNCHRONOUS WRITES when DM THRESHOLD is zero.
v DM THRESHOLD ( K ) is reached. This field shows how many times a page was immediately released because the data management threshold was reached. The quantity listed for this field should be zero.
Also note the following fields:
v WRITE ENGINE NOT AVAILABLE ( L ). This field records the number of times that asynchronous writes were deferred because DB2 reached its maximum number of concurrent writes. You cannot change this maximum value. This field occasionally has a nonzero value.
v PREF.DISABLED-NO READ ENG ( G ). This field records the number of times that a sequential prefetch was not performed because the maximum number of concurrent sequential prefetches was reached. Instead, normal reads were done. You cannot change this maximum value.
v PAGE-INS REQUIRED FOR WRITE ( M ). This field records the number of page-ins that are required for a read or write I/O. When the buffer pools are first allocated, the count might be high. After the first allocations are complete, the count should be close to zero.
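The following sketch (an illustration only, not part of DB2; the function name is invented) shows how the 32-page and 180-page limits described above determine how many write I/Os a set of updated pages needs:

def write_ios(updated_pages, max_pages=32, scope=180):
    # group sorted page numbers into write I/Os: at most max_pages pages
    # per I/O, and every page in one I/O within a 'scope'-page range
    pages = sorted(updated_pages)
    ios = 0
    i = 0
    while i < len(pages):
        first = pages[i]
        j = i
        while j < len(pages) and j - i < max_pages and pages[j] - first < scope:
            j += 1
        ios += 1
        i = j
    return ios

print(write_ios([2, 179]))  # 1 write I/O: a 177-page span fits in the scope
print(write_ios([2, 185]))  # 2 write I/Os: a 183-page span exceeds the scope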
During the installation process, the DSNTINST CLIST calculates the size of the EDM pool and the EDM DBD cache. You can check the calculated sizes on installation panel DSNTIPC. For more information on estimating and specifying the sizes, see DB2 Installation Guide. For data sharing, you might need to increase the EDM DBD cache storage estimate; for more information, see Chapter 2 of DB2 Data Sharing: Planning and Administration. Because an internal process changes the size of plans that were initially bound in one release and then rebound in a later release, carefully monitor the sizes of the EDM pool, the EDM DBD cache, and the EDM statement cache, and increase them if necessary.
By designing the EDM storage pools this way, you can avoid allocation I/Os, which can represent a significant part of the total number of I/Os for a transaction. You can also reduce the processing time necessary to check whether users attempting to execute a plan are authorized to do so. EDM storage pools that are too small cause:
v Increased I/O activity in DSNDB01.SCT02, DSNDB01.SPT01, and DSNDB01.DBD01
v Increased response times, due to loading the SKCTs, SKPTs, and DBDs
v Fewer threads used concurrently, due to a lack of storage
take more locks for the DBD. To reclaim storage in the DBD, use the MODIFY utility, as described in Part 2 of DB2 Utility Guide and Reference. Monitor and manage DBDs to prevent them from becoming too large. Very large DBDs can reduce concurrency and degrade the performance of SQL operations that create or alter objects because of increased I/O and logging. DBDs that are created or altered in DB2 Version 6 or later do not need contiguous storage, but can use pieces of approximately 32 KB. Older DBDs require contiguous storage.
DYNAMIC SQL STMT               QUANTITY
---------------------------    --------
PREPARE REQUESTS         F      8897.00
FULL PREPARES            G         0.00
SHORT PREPARES                  9083.00
GLOBAL CACHE HIT RATIO (%) H     100.00

IMPLICIT PREPARES                  0.00
PREPARES AVOIDED                   0.00
CACHE LIMIT EXCEEDED               0.00
PREP STMT PURGED                   0.00
LOCAL CACHE HIT RATIO (%)           N/C
The important values to monitor are: Efficiency of the EDM pool: You can measure the efficiency of the EDM pool by using the following ratios:
v DBD HIT RATIO (%) ( C )
v CT HIT RATIO (%) ( D )
v PT HIT RATIO (%) ( E )
These ratios for the EDM pool depend upon your location's workload. In most DB2 subsystems, a value of 80% or more is acceptable. This value means that at least 80% of the requests were satisfied without I/O.

The number of free pages is shown in FREE PAGES ( B ) in Figure 74. If this value is more than 20% of PAGES IN EDM STORAGE ( A ) during peak periods, the EDM pool size is probably too large. In this case, you can reduce its size without affecting the efficiency ratios significantly.

EDM statement cache hit ratio: If you have caching turned on for dynamic SQL, the EDM storage statistics have information that can help you determine how successful your applications are at finding statements in the cache. See mapping macro DSNDQISE for descriptions of these fields. PREPARE REQUESTS ( F ) in Figure 74 records the number of requests to search the cache. FULL PREPARES ( G ) records the number of times that a statement was inserted into the cache, which can be interpreted as the number of times a statement was not found in the cache. To determine how often the dynamic statement was used from the cache, check the value in GLOBAL CACHE HIT RATIO ( H ). The value is calculated with the following formula:
hit ratio = (PREPARE REQUESTS - FULL PREPARES) / PREPARE REQUESTS
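The same formula, expressed as a small sketch (an illustration only, not part of DB2; the function name is invented):

def global_cache_hit_ratio(prepare_requests, full_prepares):
    # FULL PREPARES counts the statements that were not found in the cache
    return (prepare_requests - full_prepares) / prepare_requests

print(global_cache_hit_ratio(8897, 0))  # 1.0, i.e. the 100.00% in Figure 74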
EDM pool space utilization and performance: For smaller EDM pools, space utilization or fragmentation is normally more critical than for larger EDM pools; for larger EDM pools, performance is normally more critical. DB2 emphasizes performance and uses less-than-optimum EDM storage allocation when the EDM pool size exceeds 40 MB. If you want a system with an EDM pool larger than 40 MB to continue to use optimum EDM storage allocation at the cost of performance, you can set the EDMBFIT keyword in the DSNTIJUZ job to YES. The EDMBFIT keyword adjusts the search algorithm on systems with EDM pools that are larger than 40 MB. The default, NO, tells DB2 to use a first-fit algorithm; YES tells DB2 to use a better-fit algorithm.

Recommendation: Set EDMBFIT to NO in most cases. Setting EDMBFIT to NO is especially important when high class 24 latch (EDM LRU latch) contention exists; for example, be sure to set EDMBFIT to NO when class 24 latch contention exceeds 500 contentions per second. Set EDMBFIT to YES when EDMPOOL full conditions occur for an EDM pool size that exceeds 40 MB.
Use packages
By using multiple packages you can increase the effectiveness of EDM pool storage management by having smaller objects in the pool.
Example: Three concurrent RID processing activities, with an average of 4000 RIDs each, would require 120 KB of storage, because:
3 x 4000 x 2 x 5 = 120 KB
Whether your SQL statements that use RID processing complete efficiently or not depends on other concurrent work using the RID pool.
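A sketch of the storage estimate used in the example above (an illustration only, not part of DB2; the function name is invented; the factor of 2 and the 5-byte RID size are taken from the formula):

def rid_storage_bytes(activities, rids_per_activity):
    # each RID occupies 5 bytes, and the space is doubled (factor of 2)
    return activities * rids_per_activity * 2 * 5

print(rid_storage_bytes(3, 4000))  # 120000 bytes, about 120 KB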
For sort key length and sort data length, use values that represent the maximum values for the queries you run. To determine these values, refer to fields QW0096KL (key length) and QW0096DL (data length) in IFCID 0096, as mapped by macro DSNDQW01. You can also determine these values from an SQL activity trace. If the ORDER BY clause includes a column that is not in the select clause, that column should be included in both the sort data length and the sort key length, as shown in the following example:
SELECT C1, C2, C3 FROM tablex ORDER BY C1, C4;
If C1, C2, C3, and C4 are each 10 bytes in length, you could estimate the sort pool size as follows:
32000 x (16 + 20 + (10 + 10 + 10 + 10)) = 2432000 bytes
Where the values are determined in the following way:
v 32000 = maximum number of sort nodes
v 16 = size (in bytes) of each node
v 20 = sort key length (ORDER BY C1, C4)
v 10 + 10 + 10 + 10 = sort data length (columns C1, C2, C3, and C4, each 10 bytes in length)
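Expressed as a sketch (an illustration only, not part of DB2; the function name is invented):

def sort_pool_bytes(sort_key_length, sort_data_length):
    # 32000 is the maximum number of sort nodes; each node is 16 bytes
    return 32000 * (16 + sort_key_length + sort_data_length)

# ORDER BY C1, C4 on 10-byte columns: key = 20 bytes, data = 40 bytes
print(sort_pool_bytes(20, 40))  # 2432000 bytes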
When your application needs to sort data, the work files are allocated on a least recently used basis for a particular sort. For example, if five logical work files (LWFs) are to be used in the sort, and the installation has three work file table spaces (WFTSs) allocated, then:
v LWF 1 would be on WFTS 1.
v LWF 2 would be on WFTS 2.
v LWF 3 would be on WFTS 3.
v LWF 4 would be on WFTS 1.
v LWF 5 would be on WFTS 2.
To support large sorts, DB2 can allocate a single logical work file to several physical work file table spaces.
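A simplified sketch of this assignment (an illustration only, not part of DB2; the real allocation is on a least recently used basis, which a plain round-robin only approximates):

def assign_work_files(logical_work_files, table_spaces):
    # map LWF n to WFTS ((n - 1) mod number_of_WFTSs) + 1
    return {lwf: (lwf - 1) % table_spaces + 1
            for lwf in range(1, logical_work_files + 1)}

print(assign_work_files(5, 3))  # {1: 1, 2: 2, 3: 3, 4: 1, 5: 2}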
v If I/Os occur in the sorting process, in the merge phase DB2 uses sequential prefetch to bring pages into the buffer pool with a prefetch quantity of eight pages. However, if the buffer pool is constrained, then DB2 uses a prefetch quantity of four pages or less, or disables prefetch entirely because of the unavailability of enough pages. For any SQL statement that initiates sort activity, the OMEGAMON SQL activity reports provide information on the efficiency of the sort that is involved.
When DSMAX is reached, DB2 closes 300 data sets or 3% of the value of DSMAX, whichever number of data sets is fewer. Thus, DSMAX controls not only the limit of open data sets, but in some cases also controls the number of data sets that are closed when that limit is reached.
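As a sketch (an illustration only, not part of DB2; the function name is invented):

def data_sets_closed(dsmax):
    # DB2 closes 300 data sets or 3% of DSMAX, whichever is fewer
    return min(300, int(dsmax * 0.03))

print(data_sets_closed(3000))   # 90
print(data_sets_closed(20000))  # 300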
Modifying DSMAX
The formula used by DB2 does not take partitioned or LOB table spaces into account. Those table spaces can have many data sets. If you have many partitioned table spaces or LOB table spaces, you might need to increase DSMAX. Don't forget to consider the data sets for nonpartitioned indexes defined on partitioned table spaces; if those indexes are defined with a small PIECESIZE, there could be many data sets. You can modify DSMAX by updating the DSMAX - MAXIMUM OPEN DATA SETS field on installation panel DSNTIPC.

Calculating the size of DSMAX: DSMAX should be larger than the maximum number of data sets that are open and in use at one time. For the most accurate count of open data sets, refer to the OPEN/CLOSE ACTIVITY section of the OMEGAMON statistics report. Make sure the statistics trace was run at a peak period, so that you can obtain the most accurate maximum figure.

The best indicator of when to increase DSMAX is when the open and close activity of data sets is high; 1 event per second is a general guideline. Refer to the OPEN/CLOSE value under the SER.TASK SWITCH section of the OMEGAMON accounting report, and consider increasing DSMAX when this value shows more than 1 event per second.

To calculate the total number of data sets (rather than the number that are open during peak periods), you can do the following:
1. To find the number of simple and segmented table spaces, use the following query. The calculation assumes that you have one data set for each simple, segmented, and LOB table space. These catalog queries are included in DSNTESP in SDSNSAMP, and you can use them as input to SPUFI.

General-use Programming Interface

Query 1
SELECT CLOSERULE, COUNT(*) FROM SYSIBM.SYSTABLESPACE WHERE PARTITIONS = 0 GROUP BY CLOSERULE;
End of General-use Programming Interface

2. To find the number of data sets for the partitioned table spaces, use the following query, which returns the number of partitioned table spaces and the total number of partitions. Partitioned table spaces can require up to 4096 data sets for the data, and a corresponding number of data sets for each partitioned index.

General-use Programming Interface

Query 2
SELECT CLOSERULE, COUNT(*), SUM(PARTITIONS) FROM SYSIBM.SYSTABLESPACE WHERE PARTITIONS > 0 GROUP BY CLOSERULE;
End of General-use Programming Interface

3. To find the number of data sets required for each nonpartitioned index, use the following query. The calculation assumes that you have only one data set for each nonpartitioned index. If you use pieces, adjust accordingly.

General-use Programming Interface

Query 3
SELECT CLOSERULE, COUNT(*) FROM SYSIBM.SYSINDEXES T1, SYSIBM.SYSINDEXPART T2 WHERE T1.NAME = T2.IXNAME AND T1.CREATOR = T2.IXCREATOR AND T2.PARTITION = 0 GROUP BY CLOSERULE;
End of General-use Programming Interface

4. To find the number of data sets for the partitioned indexes, use the following query, which returns the number of index partitions. You have one data set for each index partition.

General-use Programming Interface

Query 4
SELECT CLOSERULE, COUNT(*) FROM SYSIBM.SYSINDEXES T1, SYSIBM.SYSINDEXPART T2 WHERE T1.NAME = T2.IXNAME AND T1.CREATOR = T2.IXCREATOR AND T2.PARTITION > 0 GROUP BY CLOSERULE;
End of General-use Programming Interface

5. To find the total number of data sets, add the numbers that result from the four queries. (For Query 2, use the sum of the partitions that was obtained.)
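A sketch of step 5 (an illustration only, not part of DB2; the variable names are invented stand-ins for the query results):

def total_data_sets(query1_count, query2_sum_partitions,
                    query3_count, query4_count):
    # Query 1: simple and segmented table spaces (one data set each)
    # Query 2: use SUM(PARTITIONS), one data set per partition
    # Query 3: nonpartitioned indexes (one data set each, unless pieces)
    # Query 4: index partitions (one data set each)
    return (query1_count + query2_sum_partitions
            + query3_count + query4_count)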
Recommendations
As with many recommendations in DB2, you must weigh the cost of performance versus availability when choosing a value for DSMAX. Consider the following factors:
v For best performance, you should leave enough margin in your specification of DSMAX so that frequently used data sets can remain open after they are no
longer referenced. If data sets are opened and closed frequently, such as every few seconds, you can improve performance by increasing DSMAX.
v The number of open data sets on your subsystem that are in read/write state affects checkpoint costs and log volumes. To control how long data sets stay open in a read/write state, specify values for the RO SWITCH CHKPTS and RO SWITCH TIME fields of installation panel DSNTIPN. See Switching to read-only for infrequently updated and infrequently accessed page sets on page 697 for more information.
v Consider segmented table spaces to reduce the number of data sets. To reduce open and close activity, you can try reducing the number of data sets by combining tables into segmented table spaces. This approach is most useful for development or end-user systems with many smaller tables that can be combined into single table spaces.
the accompanying indexes without DB2 reopening the data sets. Thus, deferred closing of page sets or partitions can improve your application's performance by avoiding I/O processing.

Recommendation: For a table space whose data is continually referenced, in most cases it does not matter whether it is defined with CLOSE YES or CLOSE NO; the data sets remain open. This is also true, but less so, for a table space whose data is not referenced for short periods of time; because DB2 uses deferred close to manage data sets, the data sets are likely to be open when they are used again. You could find CLOSE NO appropriate for page sets that contain data you do not use frequently but that is so performance-critical that you cannot afford the delay of opening the data sets. If the number of open data sets is a concern, choose CLOSE YES for page sets with many partitions or data sets.

End of General-use Programming Interface
Switching to read-only for infrequently updated and infrequently accessed page sets
For both CLOSE YES and CLOSE NO page sets, DB2 automatically converts infrequently updated page sets or partitions from read-write to read-only state according to the values you specify in the RO SWITCH CHKPTS and RO SWITCH TIME fields of installation panel DSNTIPL.

With data sharing, DB2 uses the CLOSE YES or CLOSE NO attribute of the table space or index space to determine whether to physically close infrequently accessed page sets that have had global buffer pool dependencies. Infrequently accessed CLOSE YES page sets are physically closed; infrequently accessed CLOSE NO page sets remain open.

RO SWITCH CHKPTS is the number of consecutive DB2 checkpoints since a page set or partition was last updated; the default is 5. RO SWITCH TIME is the amount of elapsed time since a page set or partition was last updated; the default is 10 minutes. If either condition is met, the page set or partition is converted from read-write to read-only state.

Updating SYSLGRNX: For both CLOSE YES and CLOSE NO page sets, SYSLGRNX entries are updated when the page set is converted from read-write state to read-only state. When this conversion occurs for table spaces, the SYSLGRNX entry is closed and any updated pages are externalized to disk. For indexes defined as COPY NO, there is no SYSLGRNX entry, but the updated pages are externalized to disk.

Performance benefits of read-only switching: An infrequently used page set's conversion from read-write to read-only state results in the following performance benefits:
v Improved data recovery performance, because SYSLGRNX entries are more precise, closer to the last update transaction commit point. As a result, the RECOVER utility has fewer log records to process.
v Minimized logging activities. Log records for page set open, checkpoint, and close operations are written only for updated page sets or partitions; log records are not written for read-only page sets or partitions.
Recommendations for RO SWITCH TIME and RO SWITCH CHKPTS: In most cases, the default values are adequate. However, if you find that the amount of R/O switching is causing a performance problem for the updates to SYSLGRNX, consider increasing the value of RO SWITCH TIME, perhaps to 30 minutes.
These lists do not include other data sets that are less crucial to DB2's performance, such as those that contain program libraries, control blocks, and formats. Those types of data sets have their own design recommendations.
To best monitor these result tables, keep work files in a separate buffer pool. Use IFCID 0311 in performance trace class 8 to distinguish these tables from other uses of the work file.
DB2 logging
DB2 logs changes made to data, and other significant events, as they occur. You can find background information on the DB2 log in Chapter 18, Managing the log and the bootstrap data set, on page 427. When you focus on logging performance issues, remember that the characteristics of your workload have a direct effect on log write performance. Long-running tasks that commit infrequently have much more data to write at commit time than a typical transaction, and they can affect the subsystem because of the excess storage consumption, locking contention, and resources that are consumed for a rollback. Don't forget to consider the cost of reading the log as well. The cost of reading the log directly affects how long a restart or a recovery takes, because DB2 must read the log data before applying the log records back to the table space. This section includes the following topics:
v Logging performance issues and recommendations
v Log capacity on page 703
v Controlling the amount of log data on page 705
Log writes
Log writes are divided into two categories: synchronous and asynchronous.

Asynchronous writes: Asynchronous writes are the most common. These asynchronous writes occur when data is updated. Before- and after-image records are usually moved to the log output buffer, and control is returned to the application. However, if no log buffer is available, the application must wait for one to become available.

Synchronous writes: Synchronous writes usually occur at commit time when an application has updated data. This write is called 'forcing' the log because the application must wait for DB2 to force the log buffers to disk before control is returned to the application. If the log data set is not busy, all log buffers are written to disk. If the log data set is busy, the requests are queued until it is freed.

Writing to two logs: Dual logging is shown in Figure 75 on page 701.
Figure 75. Dual logging and two-phase commit log writes (graphic not reproduced in this copy)
If there are two logs (recommended for availability), the write to the first log, in general, must complete before the write to the second log begins. The first time a log control interval is written to disk, the write I/Os to the log data sets are performed in parallel. However, if the same 4 KB log control interval is again written to disk, then the write I/Os to the log data sets must be done serially to prevent any possibility of losing log data in case of I/O errors on both copies simultaneously.

Two-phase commit log writes: Because they use two-phase commit, applications that use the CICS, IMS, and RRS attachment facilities force writes to the log twice, as shown in Figure 75. The first write forces all the log records of changes to be written (if they have not been written previously because of the write threshold being reached). The second write writes a log record that takes the unit of recovery into an in-commit state.

Recommendations: To improve log write performance:
v Choose a large size for the output buffer: The OUTPUT BUFFER field of installation panel DSNTIPL lets you specify the size of the output buffer used for writing active log data sets. The maximum size of this buffer (OUTBUFF) is 400 000 KB. Choose as large a size as your system can tolerate to decrease the number of forced I/O operations that occur because there are no more buffers. A large size can also reduce the number of wait conditions. A non-zero value for UNAVAILABLE OUTPUT LOG BUFF ( A ) in Figure 76 on page 702 is an indicator that your output buffer is too small. Ensure that the size you choose is backed up by real storage; a non-zero value for OUTPUT LOG BUFFER PAGED IN ( B ) in Figure 76 on page 702 is an indicator that your output buffer is too large for the amount of available real storage.
LOG ACTIVITY                  QUANTITY  /SECOND  /THREAD  /COMMIT
---------------------------   --------  -------  -------  -------
READS SATISFIED-OUTPUT BUFF     211.00     0.12      N/C     0.00
READS SATISFIED-OUTP.BUF(%)     100.00
READS SATISFIED-ACTIVE LOG        0.00     0.00      N/C     0.00
READS SATISFIED-ACTV.LOG(%)       0.00
READS SATISFIED-ARCHIVE LOG       0.00     0.00      N/C     0.00
READS SATISFIED-ARCH.LOG(%)       0.00

TAPE VOLUME CONTENTION WAIT       0.00     0.00      N/C     0.00
READ DELAYED-UNAVAIL.RESOUR       0.00     0.00      N/C     0.00
ARCHIVE LOG READ ALLOCATION       0.00     0.00      N/C     0.00
ARCHIVE LOG WRITE ALLOCAT.        0.00     0.00      N/C     0.00
CONTR.INTERV.OFFLOADED-ARCH       0.00     0.00      N/C     0.00
LOOK-AHEAD MOUNT ATTEMPTED        0.00     0.00      N/C     0.00
LOOK-AHEAD MOUNT SUCCESSFUL       0.00     0.00      N/C     0.00

UNAVAILABLE OUTPUT LOG BUFF A     0.00     0.00      N/C     0.00
OUTPUT LOG BUFFER PAGED IN  B     8.00     0.00      N/C     0.10

LOG RECORDS CREATED         C   234.00     0.13      N/C     2.85
LOG CI CREATED              D     9.00     0.01      N/C     0.11
LOG WRITE I/O REQ (COPY1&2)      96.00     0.05      N/C     1.17
LOG CI WRITTEN (COPY1&2)         96.00     0.05      N/C     1.17
LOG RATE FOR 1 LOG (MB/Sec)        N/A     0.00      N/A      N/A
LOG WRITE SUSPENDED              39.00     0.02      N/C     0.48

Figure 76. Log statistics in the OMEGAMON statistics report
v Choose fast devices for log data sets: The devices assigned to the active log data sets must be fast ones. Because of its very high sequential performance, ESS is particularly recommended in environments in which the write activity is high, to avoid logging bottlenecks.
v Avoid device contention: Place the copy of the bootstrap data set and, if using dual active logging, the copy of the active log data sets, on volumes that are accessible on a path different than that of their primary counterparts.
v Preformat new active log data sets: Whenever you allocate new active log data sets, preformat them using the DSNJLOGF utility described in Part 3 of DB2 Utility Guide and Reference. This action avoids the overhead of preformatting the log, which normally occurs at unpredictable times.
Log reads
During a rollback, restart, and database recovery, the performance impact of log reads is evident. DB2 must read from the log and apply changes to the data on disk. Every process that requests a log read has an input buffer dedicated to that process. DB2 searches for log records in the following order:
1. Output buffer
2. Active log data set
3. Archive log data set
If the log records are in the output buffer, DB2 reads the records directly from that buffer. If the log records are in the active or archive log, DB2 moves those log records into the input buffer used by the reading process (such as a recovery job or a rollback). It is always fastest for DB2 to read the log records from the active log rather than the archive log. Access to archived information can be delayed for a considerable length of time if a unit is unavailable or if a volume mount is required (for example, a tape mount).
Recommendations: To improve log read performance:
v Archive to disk: If the archive log data set resides on disk, it can be shared by many log readers. In contrast, an archive on tape cannot be shared among log readers. Although it is always best to avoid reading archives altogether, if a process must read the archive, that process is serialized with anyone else who must read the archive tape volume. For example, every rollback that accesses the archive log must wait for any previous rollback work that accesses the same archive tape volume to complete.
v Avoid device contention on the log data sets: Place your active log data sets on different volumes and I/O paths to avoid I/O contention in periods of high concurrent log read activity. When there are multiple concurrent readers of the active log, DB2 can ease contention by assigning some readers to a second copy of the log. Therefore, for performance and error recovery, use dual logging and place the active log data sets on a number of different volumes and I/O paths. Whenever possible, put data sets within a copy or within different copies on different volumes and I/O paths. Ensure that no data sets for the first copy of the log are on the same volume as data sets for the second copy of the log.
v Stripe active log data sets: The active logs can be striped using DFSMS. Striping is a technique to improve the performance of data sets that are processed sequentially. Striping is achieved by splitting the data set into segments or stripes and spreading those stripes across multiple volumes. Striping can improve the maximum throughput log capacity and is most effective when there are many log records between commits. Striping is useful if you have a high I/O rate for the logs. Striping is needed more with ESCON channels than with the faster FICON channels.
Log capacity
The capacity that you specify for the active log affects DB2 performance significantly. If you specify a capacity that is too small, DB2 might need to access data in the archive log during rollback, restart, and recovery. Accessing an archive takes a considerable amount of time.

The following subsystem parameters affect the capacity of the active log. In each case, increasing the value you specify for the parameter increases the capacity of the active log. See Part 2 of DB2 Installation Guide for more information on updating the active log parameters. The parameters are:
v The NUMBER OF LOGS field on installation panel DSNTIPL controls the number of active log data sets you create.
v The ARCHIVE LOG FREQ field on installation panel DSNTIPL is where you provide an estimate of how often active log data sets are copied to the archive log.
v The UPDATE RATE field on installation panel DSNTIPL is where you provide an estimate of how many database changes (inserts, updates, and deletes) you expect per hour. The DB2 installation CLIST uses UPDATE RATE and ARCHIVE LOG FREQ to calculate the data set size of each active log data set.
v The CHECKPOINT FREQ field on installation panel DSNTIPN specifies the number of log records that DB2 writes between checkpoints or the number of minutes between checkpoints.
Part 2 of DB2 Installation Guide goes into more detail on the relationships among these parameters and their effects on operations and performance.
v When you calculate the size of the active log data set, identify the longest unit of work in your application programs. For example, if a batch application program commits only once every 20 minutes, the active log data set should be twice as large as the update information produced during this period by all of the application programs that are running. Allow time for possible operator interventions, I/O errors, and tape drive shortages if off-loading to tape. DB2 supports up to 20 tape volumes for a single archive log data set. If your archive log data sets are under the control of DFSMShsm, also consider the Hierarchical Storage Manager recall time, if the data set has been migrated by Hierarchical Storage Manager. For more information on determining and setting the size of your active log data sets, refer to DB2 Installation Guide.
v When archiving to disk, set the primary space quantity and block size for the archive log data set so that you can offload the active log data set without forcing the use of secondary extents in the archive log data set. This action avoids space abends when writing the archive log data set.
v Make the number of records in the active log divisible by the blocking factor of the archive log (disk or tape). DB2 always writes complete blocks when it creates the archive log copy of the active log data set. If you make the archive log blocking factor evenly divisible into the number of active log records, DB2 does not have to pad the archive log data set with nulls to fill the block. This can prevent REPRO errors if you should ever have to REPRO the archive log back into the active log data set, such as during disaster recovery. To determine the blocking factor of the archive log, divide the value specified in the BLOCK SIZE field of installation panel DSNTIPA by 4096 (that is, BLOCK SIZE / 4096). Then modify the DSNTIJIN installation job so that the number of records in the DEFINE CLUSTER field for the active log data set is a multiple of the blocking factor. (A small calculation sketch follows this list.)
v If you offload to tape, consider adjusting the size of each of your active log data sets to contain the same amount of space as can be stored on a nearly full tape volume. This minimizes tape handling and volume mounts and maximizes the use of the tape resource. If you change the size of your active log data set to fit on one tape volume, remember that the bootstrap data set is copied to the tape volume along with the copy of the active log data set. Therefore, decrease the size of your active log data set to offset the space that is required on the archive tape for the bootstrap data set.
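The following sketch (an illustration only, not part of DB2; the function name is invented) applies the blocking-factor rule from the list above:

def active_log_records(desired_records, archive_block_size):
    blocking_factor = archive_block_size // 4096
    # round up to the next multiple of the blocking factor
    return -(-desired_records // blocking_factor) * blocking_factor

print(active_log_records(100000, 24576))  # blocking factor 6, prints 100002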
Utilities
The utility operations REORG and LOAD LOG(YES) cause all reorganized or loaded data to be logged. For example, if a table space contains 200 million rows of data, this data, along with control information, is logged when this table space is the object of a REORG utility job. If you use REORG with the DELETE option to eliminate old data in a table and run CHECK DATA to delete rows that are no longer valid in dependent tables, you can use LOG(NO) to control log volume. Recommendation: When populating a table with many records or reorganizing table spaces or indexes, specify LOG(NO) and take an inline copy or take a full image copy immediately after the LOAD or REORG.
Specify LOG(YES) when adding less than 1% of the total table space. This creates additional logging, but eliminates the need for a full image copy.
SQL
The amount of logging performed for applications depends on how much data is changed. Certain SQL statements are quite powerful, making it easy to modify a large amount of data with a single statement. These statements include:
v INSERT with a fullselect
v Mass deletes and mass updates (except for deleting all rows for a table in a segmented table space)
v Data definition statements, which log the entire database descriptor for which the change was made. For very large DBDs, this can be a significant amount of logging.
v Modification to a row that contains a LOB column defined as LOG YES
For nonsegmented table spaces, each of these statements results in the logging of all database data that changes. For example, if a table contains 200 million rows of data, that data and control information are logged if all of the rows of the table are deleted with the SQL DELETE statement. No intermediate commit points are taken during this operation.

For segmented table spaces, a mass delete results in the logging of the data of the deleted records when any of the following conditions are true:
v The table is the parent table of a referential constraint.
v The table is defined as DATA CAPTURE(CHANGES), which causes additional information to be logged for certain SQL operations.
v A delete trigger is defined on the table.

Recommendations:
v For mass delete operations, consider using segmented table spaces. If segmented table spaces are not an option, create one table per table space and use LOAD REPLACE with no rows in the input data set to empty the entire table space.
v For inserting a large amount of data, instead of using an SQL INSERT statement, use the LOAD utility with LOG(NO) and take an inline copy.
v For updates, consider your workload when defining a table's columns. The amount of data that is logged for an update depends on whether the row contains all fixed-length columns or not. For fixed-length non-compressed rows, changes are logged only from the beginning of the first updated column to the end of the last updated column. For varying-length rows, data is logged from the first changed byte to the end of the last updated column. (A varying-length row contains one or more varying-length columns.) To determine your workload type, read-intensive or update-intensive, check the log data rate. Use the formula in Calculating average log record size on page 704 to determine the average log size, and divide that by 60 to get the average number of log bytes written per second. If you log less than 5 MB per second, the workload is read-intensive; if you log more than 5 MB per second, it is update-intensive. (A small classification sketch follows these recommendations.) Table 109 on page 707 summarizes the recommendations for the type of row and type of workload you run.
Table 109. Recommendations for database design to reduce log quantities

                                   Read-intensive workload           Update-intensive workload
Fixed-length non-compressed rows                                     Keep frequently updated columns close
                                                                     to each other.
Varying-length rows                Keep varying-length columns at    Keep all frequently updated columns
                                   the end of the row to improve     near the end of the row. However, if
                                   read performance.                 only fixed-length columns will be
                                                                     updated, keep those columns close to
                                                                     each other at the beginning of the row.
v If you have many data definition statements (CREATE, ALTER, DROP) for a single database, issue them within a single unit of work to avoid logging the changed DBD for each data definition statement. However, be aware that the DBD is locked until the COMMIT is issued.
v Use LOG NO for any LOBs that require frequent updating and for which the tradeoff of nonrecoverability of LOB data from the log is acceptable. (You can still use the RECOVER utility on LOB table spaces to recover control information that ensures physical consistency of the LOB table space.) Because LOB table spaces defined as LOG NO are nonrecoverable from the DB2 log, make a recovery plan for that data. For example, if you run batch updates, be sure to take an image copy after the updates are complete.
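A sketch of the workload classification described in the recommendations above (an illustration only, not part of DB2; the function name is invented):

def workload_type(average_log_bytes_per_minute):
    # divide the per-minute average by 60 to get bytes per second
    bytes_per_second = average_log_bytes_per_minute / 60.0
    if bytes_per_second < 5 * 1024 * 1024:  # 5 MB per second threshold
        return "read-intensive"
    return "update-intensive"

print(workload_type(120 * 1024 * 1024))  # 2 MB/sec, prints read-intensive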
If the secondary allocation space is too small, the data set might have to be extended more times to satisfy those activities that need a large space. IFCID 0258 allows you to monitor data set extension activities by providing information, such as the primary allocation quantity, maximum data set size, high allocated space before and after extension activity, number of extents before and after the extend, maximum volumes of a VSAM data set, and number of volumes before and after the extend. Access IFCID 0258 in Statistics Class 3 (SC03) through an IFI READA request. See Chapter 3, Creating storage groups and managing DB2 data sets, on page 27 for more information about extending data sets.
Compressing data can result in a higher processor cost, depending on the SQL workload. However, if you use IBM's synchronous data compression hardware, processor time is significantly less than if you use just the DB2-provided software simulation or an edit or field procedure to compress the data. Decompressing a row of data costs significantly less than compressing that same row. This rule applies regardless of whether the compression uses the synchronous data compression hardware or the software simulation that is built into DB2. The data access path that DB2 uses affects the processor cost for data compression. In general, the relative overhead of compression is higher for table space scans and lower for index access.
v I/O costs: When rows are accessed sequentially, fewer I/Os might be required to access data that is stored in a compressed table space. However, there is a tradeoff between reduced I/O resource consumption and the extra processor cost for decoding the data. If random I/O is necessary to access the data, the number of I/Os will not decrease significantly, unless the associated buffer pool is larger than the table and the other applications require little concurrent buffer pool usage.
v Data patterns: The frequency of patterns in the data determines the compression savings. Some types of data compress better than others. Data that contains hexadecimal characters or strings that occur with high frequency compresses quite well, while data that contains random byte frequencies might not compress at all. For example, textual and decimal data tends to compress well because certain byte strings occur frequently, and data with many repeated strings (such as state and city names or numbers with sequences of zeros) results in good compression savings.
v Table space design: Each table space or partition that contains compressed data has a compression dictionary, which is built by using the LOAD utility with the REPLACE or RESUME NO options or the REORG TABLESPACE utility without the KEEPDICTIONARY option for either utility. A dictionary is also built for LOAD RESUME YES without KEEPDICTIONARY if the table space has no rows. The dictionary contains a fixed number of entries, usually 4096, and resides with the data. The dictionary content is based on the data at the time it was built, and does not change unless the dictionary is rebuilt or recovered, or compression is disabled with ALTER TABLESPACE.
If you use LOAD to build the compression dictionary, the first n rows loaded in the table space determine the contents of the dictionary. The value of n is determined by how much your data can be compressed. If you have a table space with more than one table and the data used to build the dictionary comes from only one or a few of those tables, the data compression might not be optimal for the remaining tables. Therefore, put a table you want to compress into a table space by itself, or into a table space that contains only tables with similar kinds of data.
REORG uses a sampling technique to build the dictionary. This technique uses the first n rows from the table space and then continues to sample rows for the remainder of the UNLOAD phase. In most cases, this sampling technique produces a better dictionary than does LOAD, and using REORG might produce better results for table spaces that contain tables with dissimilar kinds of data.
For more information about using LOAD or REORG to create a compression dictionary, see Part 2 of DB2 Utility Guide and Reference.
v Existing exit routines: An exit routine is executed before compressing or after decompressing, so you can use DB2 data compression with your existing exit routines. However, do not use DB2 data compression in conjunction with DSN8HUFF. (DSN8HUFF is a sample edit routine, provided with DB2, that compresses data using the Huffman algorithm.) Combining the two adds little additional compression at the cost of significant extra CPU processing.
v Logging effects: If a data row is compressed, all data that is logged because of SQL changes to that data is compressed. Thus, you can expect less logging for insertions and deletions; the amount of logging for updates varies. Applications that are sensitive to log-related resources can experience some benefit with compressed data. External routines that read the DB2 log cannot interpret compressed data without access to the compression dictionary that was in effect when the data was compressed. However, using IFCID 306, you can cause DB2 to write log records of compressed data in decompressed format. You can retrieve those decompressed records by using the IFI function READS.
v Distributed data: DB2 decompresses data before transmitting it to VTAM.
Tuning recommendation
In some cases, using compressed data results in an increase in the number of getpages, lock requests, and synchronous read I/Os. Sometimes, updated compressed rows cannot fit in the home page, and they must be stored in the overflow page. This can cause additional getpage and lock requests. If a page contains compressed fixed-length rows with no free space, an updated row probably has to be stored in the overflow page. To avoid the potential problem of more getpage and lock requests, add more free space within the page. Start with 10% additional free space and adjust further, as needed. If, for example, 10% free space was used without compression, then start with 20% free space with compression for most cases. This recommendation is especially important for data that is heavily updated.
how well the data is compressed and how much space is saved. (REORG with the KEEPDICTIONARY option does not produce the report.)
v Catalog statistics: In addition to the compression reports, the following columns in the catalog tables contain information about data compression:
  - The PAGESAVE column of SYSIBM.SYSTABLEPART tells you the percentage of pages that are saved by compressing the data.
  - The PCTROWCOMP columns of SYSIBM.SYSTABLES and SYSIBM.SYSTABSTATS tell you the percentage of the rows that were compressed in the table or partition the last time RUNSTATS was run.
Use the RUNSTATS utility to update these catalog columns.
Having an ascending index on C1 would not have prevented a sort to order the data. To avoid the sort, you needed a descending index on C1. Beginning in Version 8, DB2 can scan an index either forwards or backwards, which can eliminate the need to have indexes with the same columns but with different ascending and descending characteristics.
For DB2 to be able to scan an index backwards, the index must be defined on the same columns as the ORDER BY, and the ordering must be exactly opposite of what is requested in the ORDER BY. For example, if an index is defined as C1 DESC, C2 ASC, then DB2 can use:
v A forward scan of the index for ORDER BY C1 DESC, C2 ASC
v A backward scan of the index for ORDER BY C1 ASC, C2 DESC
However, DB2 would need to sort for either of these two ORDER BY clauses:
v ORDER BY C1 ASC, C2 ASC
v ORDER BY C1 DESC, C2 DESC
v Make sorting more efficient; see Improving the performance of sort processing on page 691.
v Reduce the need to sort; see Overview of index access on page 952.
v Determine the amount of sort activity; see Determining sort activity on page 977.

Release unused thread storage: If you experience virtual storage constraints due to thread storage, consider having DB2 periodically free unused thread storage. To release unused thread storage, specify YES for the CONTRACT THREAD STG field on installation panel DSNTIPE. However, you should use this option only when your DB2 subsystem has many long-running threads and your virtual storage is constrained. To determine the level of thread storage on your subsystem, see IFCID 0225 or QW0225AL in statistics trace class 6.

Recommendation: Provide for pooled threads: As described in Using threads in INACTIVE MODE for DRDA-only connections on page 742, distributed threads that are allowed to be pooled use less storage than inactive database access threads. On a per-connection basis, pooled threads use even less storage than inactive database access threads. Inactive database access threads require around 70 KB of storage in the ssnmDBM1 address space per thread, and each thread must be paired with a connection. In contrast, pooled threads are not directly linked with connections. While each pooled thread requires 200 KB in the ssnmDBM1 address space, each thread does not require a corresponding connection, and each connection uses only about 8 KB total in the DDF address space and the extended common storage area.

Ensure ECSA size is adequate: The extended common service area (ECSA) is a system area that DB2 shares with other programs. A shortage of ECSA at the system level leads to use of the common service area. DB2 places some load modules and data into the common service area. These modules require primary addressability to any address space, including the application's address space. Some control blocks are obtained from common storage and require global addressability. For more information, see DB2 Installation Guide.

Ensure EDM pool space is being used efficiently: Monitor your use of EDM pool storage using DB2 statistics, and see Tips for managing EDM storage on page 688.

Use less buffer pool storage: Using fewer and smaller buffer pools reduces the amount of real storage space DB2 requires. Buffer pool size can also affect the number of I/O operations performed; the smaller the buffer pool, the more I/O operations needed. Also, some SQL operations, such as joins, can create a result row that will not fit on a 4-KB page. For information about this, see Making buffer pools large enough for the workload on page 660.

Control the maximum number of LE tokens: When a function is executed and needs to access storage used by Language Environment, it obtains an LE token from the pool. Language Environment provides a common run-time environment for programming languages. A token is taken each time one of the following functions is executed:
v Log functions (LOG, LN, LOG10)
v Trigonometry functions (ACOS, ASIN, ATAN, ATANH, ATAN2, COS, COSH, SIN, SINH, TAN, and TANH)
v EXP
v POWER
v RAND
v ADD_MONTHS
v LAST_DAY
v NEXT_DAY
v ROUND_TIMESTAMP
v TRUNC_TIMESTAMP
v LOWER
v TRANSLATE
v UPPER
On completion of the call to Language Environment, the token is returned to the pool. The MAXIMUM LE TOKENS (LEMAX) field on installation panel DSNTIP7 controls the maximum number of LE tokens that are active at any time. The LEMAX default value is 20, with a range of 0 to 50. If the value is zero, no tokens are available. If a large number of functions are executing at the same time, all the tokens might be used; thus, if a statement needs a token and none is available, the statement is queued. If the statistics trace field QLENTRDY is very large, indicating a delay for an application because an LE token is not immediately available, LEMAX might be too small. If the statistics trace field QLETIMEW for cumulative time spent waiting is very large, LEMAX might also be too small. In either case, increase the number of tokens in the MAXIMUM LE TOKENS field on installation panel DSNTIP7.
Real storage
Real storage refers to the processor storage where program instructions reside while they are executing. It also refers to where data is held, for example, data in DB2 buffer pools that has not been paged out to auxiliary storage, the EDM pools, and the sort pool. To be used, data must either reside in or be brought into processor storage or processor special registers. The maximum amount of real storage that one DB2 subsystem can use is the real storage of the processor, although other limitations might be encountered first. DB2's large capacity for buffers in real storage and its write avoidance and sequential access techniques allow applications to avoid a substantial amount of read and write I/O, combining single accesses into sequential access, so that the disk devices are used more effectively.
Storage devices
A wide range of storage devices exists. This section presents information about newer storage servers, such as the IBM Enterprise Storage Server (ESS), and older storage device types, such as RVA and 3390.
Storage servers
An I/O subsystem typically consists of many storage disks, which are housed in storage servers, for example, the IBM Enterprise Storage Server (ESS). Storage servers provide increased functionality and performance over that of Just a Bunch of Disks (JBOD) technology.

Cache is one of the additional functions. Cache acts as a secondary buffer as data is moved between real storage and disk. Storing the same data in processor storage and the cache is not useful. To be useful, the cache must be significantly larger than the buffers in real storage, store different data, or provide another performance advantage. ESS and many other new storage servers use large caches and always prestage the data in the cache. You do not need to actively manage the cache in the newer storage servers as you must do with older storage device types.

With ESS and other new storage servers, disk performance does not generally affect sequential I/O performance. The measure of disk speed in terms of RPM (revolutions per minute) is relevant only if the cache hit ratio is low and the I/O rate is very high. If the I/O rate per disk is proportional to the disk size, small disks perform better than large disks. Large disks are very efficient for storing infrequently accessed data. As with cache, spreading the data across more disks is always better.

Storage servers and channel subsystems: With ESS and other newer storage servers, the channels can be more of a bottleneck than any other component of the I/O subsystem. The degree of I/O parallelism that can be sustained efficiently is largely a function of the number of channels. In general, more channels mean better performance. However, not all channels are alike. ESCON channels, which used to be the predominant channel type, have a maximum instantaneous data transfer rate of approximately 17 MB per second. FICON channels currently have a speed of 200 MB per second. FICON is the z/OS equivalent of Open Systems Fibre Channel Protocol (FCP). The FICON speed is bidirectional, theoretically allowing 200 MB per second to be sustained in both directions. Channel adaptors in the host processor and the storage server limit the actual speed. The FICON channels in the zSeries 900 servers are faster than those in prior processors.
Storage servers and advanced features: ESS offers many advanced features to further boost performance. Other storage servers may offer similar functions.

Extended Address Volumes (EAV): With extended address volumes (EAV), you can store more data that is in VSAM data sets on a single volume than you can store on non-extended address volumes. However, the maximum amount of data that you can store in a single DB2 table space or index space is the same for extended and non-extended address volumes. The same DB2 data sets might use more space on extended address volumes than on non-extended address volumes, because space allocations in the extended area are multiples of 21 cylinders on extended address volumes.

Parallel Access Volumes (PAV): The parallel access volumes (PAV) feature allows multiple concurrent I/Os on a given device when the I/O requests originate from the same system. PAVs make it possible to store multiple partitions on the same volume with almost no loss of performance. In older disk subsystems, if more than one partition is placed on the same volume (intentionally or otherwise), attempts to read the partitions result in contention, which shows up as I/O subsystem queue (IOSQ) time. Without PAVs, poor placement of a single data set can almost double the elapsed time of a parallel query.

Multiple Allegiance: The multiple allegiance feature allows multiple active concurrent I/Os on a given device when the I/O requests originate from different systems. PAVs and multiple allegiance dramatically improve I/O performance for parallel work on the same volume by nearly eliminating IOSQ or PEND time and drastically lowering elapsed time for transactions and queries.

FlashCopy: The FlashCopy feature provides for fast copying of full volumes. After an initialization period is complete, the logical copy is considered complete, but the physical movement of the data is deferred.

Peer-to-Peer Remote Copy (PPRC): PPRC and PPRC XD (Extended Distance) provide a faster method for recovering DB2 subsystems at a remote site in the event of a disaster at the local site. For more information about using PPRC and PPRC XD, see Backing up with RVA storage control or Enterprise Storage Server on page 495.
specify BYPASS on installation panel DSNTIPE or use DFSMS controls to prevent the use of the cache during sort processing. Separate units for sort work can give better performance.
v A high velocity goal for a service class whose name you define, such as PRODREGN, for the following:
  - DB2 (all address spaces, except for the DB2-established stored procedures address space)
    Important: The DB2-started address spaces, including ssnmDBM1, ssnmMSTR, and ssnmDIST, must have the same service class.
  - CICS (all region types)
  - IMS (all region types except BMPs)

The velocity goals for CICS and IMS regions are important only during startup or restart. After transactions begin running, WLM ignores the CICS or IMS velocity goals and assigns priorities based on the goals of the transactions that are running in the regions. A high velocity goal is good for ensuring that startups and restarts are performed as quickly as possible.

Similarly, when you set response time goals for DDF threads or for stored procedures in a WLM-established address space, the only work controlled by the DDF or stored procedure velocity goals is the DB2 service tasks (work performed for DB2 that cannot be attributed to a single user). The user work runs under separate goals for the enclave, as described in Using z/OS workload management to set performance objectives on page 745.

For the DB2-established stored procedures address space, use a velocity goal that reflects the requirements of the stored procedures in comparison to other application work. Depending on what type of distributed work you do, this might be equal to or lower than the goal for PRODREGN.

IMS BMPs can be treated along with other batch jobs or given a velocity goal, depending on the business and functional requirements at your site.

Consider the following other workload management considerations:
v IRLM must be eligible for the SYSSTC service class. To make IRLM eligible for SYSSTC, do not classify IRLM to one of your own service classes.
v If you need to change a goal, change the velocity by between 5 and 10%. Velocity goals do not translate directly to priority. Higher velocity tends to have higher priority, but this is not always the case.
v WLM can assign I/O priority (based on I/O delays) separately from processor priority. See How DB2 assigns I/O priorities for information about how read and write I/O priorities are determined.
v z/OS workload management dynamically manages storage isolation to meet the goals that you set.
Table 110. How read I/O priority is determined (continued)

Request type        Local                         DDF
Synchronous reads   Application's address space   Enclave priority
Prefetch reads      ssnmDBM1 address space        ssnmDBM1 address space
Table 111 describes the enclave or address space with which DB2 associates I/O write requests.
Table 111. How write I/O priority is determined

Request type         Local                         DDF
Synchronous writes   Application's address space   DDF address space
Deferred writes      ssnmDBM1 address space        ssnmDBM1 address space
v External stored procedures
v Triggers or functions that join the enclave SRB using a TCB
v DDF server threads that use SNA to connect to DB2

Using trace to monitor zIIP usage: The DB2 accounting trace records provide information related to application programs, including the processor resources consumed by the application. Accumulated zIIP CPU time is not accumulated in the existing class 1, class 2, and class 7 accounting fields.

Using RMF to monitor zIIP usage: The Resource Measurement Facility (RMF) provides information on zIIP usage to help you identify when to consider purchasing a zIIP or adding more zIIPs. Also, SMF Type 72 records contain information on zIIP usage, and fields in SMF Type 30 records let you know how much time is spent on zIIPs, as well as how much time was spent executing zIIP-eligible work on standard processors. For more details about the new RMF support for zIIPs, refer to the z/OS MVS Initialization and Tuning Reference.

Considerations for data sharing: In a data sharing environment where DB2 Version 8 is part of the data sharing group, DRDA work running in a Version 7 member cannot run on the zIIP. Work running on the Version 8 member will run on the zIIP. IBM zIIP capacity is not included in the sysplex routing information returned to DB2 Connect Server. Accordingly, members without zIIPs might be favored over members with zIIPs.
Table 112. Controlling the use of resources

Objective                                   How to accomplish it
Limiting resources for each job             Time limit on job or step (through z/OS system settings or JCL)
Limiting resources for TSO sessions         Time limit for TSO logon
Limiting resources for IMS and CICS         IMS and CICS controls
Limiting resources for a stored procedure   ASUTIME column of SYSIBM.SYSROUTINES catalog table
Table 112. Controlling the use of resources (continued)

Objective                                   How to accomplish it                            Where it is described
Limiting dynamic statement execution time   QMF governor and DB2 resource limit facility    Resource limit facility (governor) on page 722
Reducing locking contention                 DB2 locking parameters, DISPLAY DB LOCKS,       Chapter 31, Improving concurrency, on page 813
                                            lock trace data, database design
Evaluating long-term resource usage         Accounting trace data, OMEGAMON reports         OMEGAMON on page 1202
Predicting resource consumption             DB2 EXPLAIN statement, Visual Explain, DB2      Chapter 34, Using EXPLAIN to improve SQL
                                            Estimator, predictive governing capability      performance, on page 931 and Predictive
                                                                                            governing on page 730
Controlling query parallelism               DB2 resource limit facility                     Disabling query parallelism on page 1005
Prioritizing resources
z/OS workload management (WLM) controls the execution of DB2 work based on the priorities that you set. See z/OS MVS Initialization and Tuning Guide for more information about setting priorities on work.

In CICS environments without the Open Transaction Environment (OTE) function, DB2 work and application work are performed in different tasks. DB2 work is managed at the subtask level. With CICS OTE, DB2 work and application work can be performed in the same task. You can manage the DB2 subtasks through various settings in the CICS resource definition online (RDO). Without OTE, some overhead is incurred for each task switch. Therefore, depending on the SQL activity, CICS OTE can improve performance significantly because fewer task switches are needed.

In other environments, such as batch and TSO, which typically have a single task requesting DB2 services, the task-level processor dispatching priority is irrelevant. Access to processor and I/O resources for synchronous portions of the request is governed solely by WLM.
than for an individual query or a single program. If you want to control the amount of resources used for an entire TSO session, rather than the amount used by a single query, then use this control. You can find more information about setting the resource limit for a TSO session in these manuals: v z/OS TSO/E Programming Guide v z/OS TSO/E Customization
Note: No limits apply to primary or secondary authorization IDs with installation SYSADM or installation SYSOPR authority.

Data sharing: See DB2 Data Sharing: Planning and Administration for information about special considerations for using the resource limit facility in a data sharing group.

This section includes the following topics:
v Using resource limit tables (RLSTs)
v Governing dynamic queries on page 728
v Restricting bind operations on page 732
v Restricting parallelism modes on page 733
Creating an RLST
Resource limit specification tables can reside in any database; however, because a database has some special attributes while the resource limit facility is active, it is best to put RLSTs in their own database. When you install DB2, installation job DSNTIJSG creates a database, table space, table, and descending index for the resource limit specification. You can tailor those statements. For more information about job DSNTIJSG, see Part 2 of DB2 Installation Guide.

To create a new resource limit specification table, use the following statements, which are also included in installation job DSNTIJSG. You must have sufficient authority to define objects in the DSNRLST database and to specify authid, which is the authorization ID specified in the RESOURCE AUTHID field of installation panel DSNTIPP.

Creating the table: Use the following statement:
CREATE TABLE authid.DSNRLSTxx
      (AUTHID         VARCHAR(128) NOT NULL WITH DEFAULT,
       PLANNAME       CHAR(8)      NOT NULL WITH DEFAULT,
       ASUTIME        INTEGER,
                               -------3-column format--------
       LUNAME         CHAR(8)      NOT NULL WITH DEFAULT,
                               -------4-column format--------
       RLFFUNC        CHAR(1)      NOT NULL WITH DEFAULT,
       RLFBIND        CHAR(1)      NOT NULL WITH DEFAULT,
       RLFCOLLN       VARCHAR(128) NOT NULL WITH DEFAULT,
       RLFPKG         VARCHAR(128) NOT NULL WITH DEFAULT,
                               -------8-column format--------
       RLFASUERR      INTEGER,
       RLFASUWARN     INTEGER,
       RLF_CATEGORY_B CHAR(1)      NOT NULL WITH DEFAULT)
                               -------11-column format-------
       IN DSNRLST.DSNRLSxx;
The name of the table is authid.DSNRLSTxx, where xx is any two-character alphanumeric value, and authid is specified when DB2 is installed. Because the two characters xx must be entered as part of the START command, they must be alphanumeric: no special or DBCS characters are allowed. All future column names defined by IBM will have the form RLFxxxxx. To avoid future naming conflicts, begin your own column names with characters other than RLF.

Creating the index: To create an index for the 11-column format, use the following SQL:
CREATE UNIQUE INDEX authid.DSNARLxx
       ON authid.DSNRLSTxx
       (RLFFUNC, AUTHID DESC, PLANNAME DESC,
        RLFCOLLN DESC, RLFPKG DESC, LUNAME DESC)
       CLUSTER CLOSE NO;
The xx in the index name (DSNARLxx) must match the xx in the table name (DSNRLSTxx), and the index must be a descending index.

Populating the RLST: Use the SQL statements INSERT, UPDATE, and DELETE to populate the resource limit specification table. The limit that exists when a job makes its first dynamic SELECT, INSERT, UPDATE, or DELETE statement applies throughout the life of the job. If you update the resource limit specification table while a job is executing, that job's limit does not change; instead, the updates are effective for all new jobs and for those that have not issued their first dynamic SELECT, INSERT, UPDATE, or DELETE statement. To insert, update, or delete from the resource limit specification table, you need only the usual table privileges on the RLST. No higher authority is required.

Starting and stopping the RLST: Activate any particular RLST by using the DB2 command START RLIMIT ID=xx, where xx is the two-character identifier that you specified on the name DSNRLSTxx. This command gives you the flexibility to use a different RLST at different times; however, only one RLST can be active at a time.

Example: You can use different RLSTs for the day shift and the evening shift, as shown in Table 113 and Table 114.
Table 113. Example of RLST for the day shift

AUTHID    PLANNAME   ASUTIME   LUNAME
BADUSER              0         LUDBD1
ROBYN                100000    LUDBD1
          PLANA      300000    LUDBD1
                     50000     LUDBD1

Table 114. Example of RLST for the night shift. During the night shift, ROBYN and all PLANA users from LUDBD1 run without limit.

AUTHID    PLANNAME   ASUTIME   LUNAME
BADUSER              0         LUDBD1
ROBYN                NULL      LUDBD1
          PLANA      NULL      LUDBD1
                     50000     LUDBD1
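The day-shift rows in Table 113 could be created with ordinary INSERT statements. The following is a minimal sketch, assuming a 4-column format table named AUTHID.DSNRLST01 (the qualifier AUTHID and the suffix 01 are illustrative); columns omitted from the column list take their blank defaults:

   -- BADUSER can run no dynamic statements from LUDBD1
   INSERT INTO AUTHID.DSNRLST01 (AUTHID, ASUTIME, LUNAME)
          VALUES ('BADUSER', 0, 'LUDBD1');
   -- ROBYN is limited to 100 000 service units per dynamic statement
   INSERT INTO AUTHID.DSNRLST01 (AUTHID, ASUTIME, LUNAME)
          VALUES ('ROBYN', 100000, 'LUDBD1');
   -- dynamic statements in plan PLANA are limited to 300 000 service units
   INSERT INTO AUTHID.DSNRLST01 (PLANNAME, ASUTIME, LUNAME)
          VALUES ('PLANA', 300000, 'LUDBD1');
   -- everyone else from LUDBD1 is capped at 50 000 service units
   INSERT INTO AUTHID.DSNRLST01 (ASUTIME, LUNAME)
          VALUES (50000, 'LUDBD1');

The table would then be activated with the command START RLIMIT ID=01, as described above.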
At installation time, you can specify a default RLST to be used each time that DB2 is restarted. For more information on resource limit facility subsystem parameters, see Part 2 of DB2 Installation Guide.

If the governor is active and you restart it without stopping it, any jobs that are active continue to use their original limits, and all new jobs use the limits in the new table. If you stop the governor while a job is executing, the job runs with no limit, but its processing time continues to accumulate. If you later restart the governor, the new limit takes effect for an active job only when the job passes one of several internal checkpoints. A typical dynamic statement, which builds a result table and fetches from it, passes those checkpoints at intervals that can range from moments to hours. As a result, your change to the governor might not stop an active job within the time you expect. Use the DB2 command CANCEL THREAD to stop an active job that does not pick up the new limit when you restart the governor.

Restricted activity on the RLST: While the governor is active, you cannot execute the following SQL statements on the RLST, or on the table space and database in which the RLST is contained:
v DROP DATABASE
v DROP INDEX
v DROP TABLE
v DROP TABLESPACE
v RENAME TABLE

You cannot stop a database or table space that contains an active RLST; nor can you start the database or table space with ACCESS(UT).
statement is issued from a DBRM bound in a plan, not a package; otherwise, DB2 does not find this row. If the RLFFUNC column contains a function for packages ('1', '2', or '7'), this column must be blank; if it is not blank, the row is ignored.

ASUTIME
The number of processor service units allowed for any single dynamic SELECT, INSERT, UPDATE, or DELETE statement. Use this column for reactive governing. Other possible values and their meanings are:

null   No limit
0 (zero) or a negative value
       No dynamic SELECT, INSERT, UPDATE, or DELETE statements are permitted.

The governor samples the processing time in service units. Service units are independent of processor changes. The processing time for a particular SQL statement varies according to the processor on which it is executed, but the service units required remain roughly constant. The service units consumed are not exact between different processors, because the calculations for service units depend on measurement averages performed before new processors are announced. A relative metric is used so that the RLST values do not need to be modified when processors are changed. However, in some cases, DB2 workloads can differ from the measurement averages. In these cases, changes to RLST values may be necessary. For information about how to calculate service units, see Calculating service units on page 732.

LUNAME
The LU name of the location where the request originated. A blank value in this column represents the local location, not all locations. The value PUBLIC represents all of the DBMS locations in the network; these locations do not need to be DB2 subsystems. PUBLIC is the only value for TCP/IP connections.

RLFFUNC
Specifies how the row is used. The values that have an effect are:

blank  The row reactively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by plan name.
1      The row reactively governs bind operations.
2      The row reactively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by package or collection name.
3      The row disables query I/O parallelism.
4      The row disables query CP parallelism.
5      The row disables Sysplex query parallelism.
6      The row predictively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by plan name.
7      The row predictively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by package or collection name.
RLFBIND
Shows whether bind operations are allowed. An 'N' implies that bind operations are not allowed. Any other value means that bind operations are allowed. This column is used only if RLFFUNC is set to '1'.

RLFCOLLN
Specifies a package collection. A blank value in this column means that the row applies to all package collections from the location that is specified in LUNAME. Qualify by collection name only if the dynamic statement is issued from a package; otherwise, DB2 does not find this row. If RLFFUNC=blank, '1', or '6', then RLFCOLLN must be blank.

RLFPKG
Specifies a package name. A blank value in this column means that the row applies to all packages from the location that is specified in LUNAME. Qualify by package name only if the dynamic statement is issued from a package; otherwise, DB2 does not find this row. If RLFFUNC=blank, '1', or '6', then RLFPKG must be blank.

RLFASUERR
Used for predictive governing (RLFFUNC='6' or '7'), and only for statements that are in cost category A. The error threshold number of system resource manager processor service units allowed for a single dynamic SELECT, INSERT, UPDATE, or DELETE statement. If the predicted processor cost (in service units) is greater than the error threshold, an SQLCODE -495 is returned to the application. Other possible values and their effects are:

null   No error threshold
0 (zero) or a negative value
       All dynamic SELECT, INSERT, UPDATE, or DELETE statements receive SQLCODE -495.

RLFASUWARN
Used for predictive governing (RLFFUNC='6' or '7'), and only for statements that are in cost category A. The warning threshold number of processor service units that are allowed for a single dynamic SELECT, INSERT, UPDATE, or DELETE statement. If the predicted processor cost (in service units) is greater than the warning threshold, an SQLCODE +495 is returned to the application. Other possible values and their effects are:

null   No warning threshold
0 (zero) or a negative value
       All dynamic SELECT, INSERT, UPDATE, or DELETE statements receive SQLCODE +495.

Important: Make sure the value for RLFASUWARN is less than that for RLFASUERR. If the warning value is higher, the warning is never reported. The error takes precedence over the warning.

RLF_CATEGORY_B
Used for predictive governing (RLFFUNC='6' or '7'). Tells the governor the default action to take when the cost estimate for a given statement falls into cost category B, which means that the predicted cost is indeterminate and probably too low. You can tell if a statement is in cost category B by running EXPLAIN and checking the COST_CATEGORY column of the DSN_STATEMNT_TABLE.
The acceptable values are:

blank  By default, prepare and execute the SQL statement.
Y      Prepare and execute the SQL statement.
N      Do not prepare or execute the SQL statement. Return SQLCODE -495 to the application.
W      Complete the prepare, return SQLCODE +495, and allow the application logic to decide whether to execute the SQL statement or not.
Any statement that exceeds a limit you set in the RLST terminates with a -905 SQLCODE and a corresponding '57014' SQLSTATE. You can establish a single limit for all users, different limits for individual users, or both. Limits do not apply to primary or secondary authorization IDs with installation SYSADM or installation SYSOPR authority. For queries entering DB2 from a remote site, the local site limits are used.

Specifying predictive governing: Specify either of the following values in the RLFFUNC column of the RLST:

6      Govern by plan name
7      Govern by package name
See Qualifying rows in the RLST for more information about how to qualify rows in the RLST. See Predictive governing on page 730 for more information about using predictive governing.

This section includes the following topics:
v Qualifying rows in the RLST
v Predictive governing on page 730
v Combining reactive and predictive governing on page 731
v Governing statements from a remote site on page 731
v Calculating service units on page 732
5. No row match

Governing by plan or package name: Governing by plan name and governing by package name are mutually exclusive.

v Plan name: The RLF governs the DBRMs in the MEMBER list specified on the BIND PLAN command. The RLFFUNC, RLFCOLLN, and RLFPKG columns must contain blanks. Table 115 is an example of this:
Table 115. Qualifying rows by plan name

RLFFUNC   AUTHID    PLANNAME   LUNAME     ASUTIME
(blank)   JOE       PLANA      (blank)    (null)
(blank)   (blank)   WSPLAN     SAN_JOSE   15000
(blank)   (blank)   (blank)    PUBLIC     10000
The first row in Table 115 shows that when Joe runs PLANA at the local location, there are no limits for any dynamic statements in that plan. The second row shows that if anyone runs WSPLAN from SAN_JOSE, the dynamic statements in that plan are restricted to 15 000 SUs each. The third row is entered as a cap for any unknown authorization IDs or plan names from any location in the network, including the local location. (An alternative would be to let the default values on installation panels DSNTIPR and DSNTIPO serve as caps.)

v Collection and package name: The resource limit facility governs the packages used during the execution of the SQL application program. PLANNAME must contain blank, and RLFFUNC must contain '2'. Table 116 is an example of this:
Table 116. Qualifying rows by collection or package name

RLFFUNC   AUTHID    RLFCOLLN   RLFPKG     LUNAME    ASUTIME
2         JOE       DSNESPCS   (blank)    (blank)   40000
2         (blank)   (blank)    DSNESM68   PUBLIC    15000
The first row in Table 116 shows that when Joe runs any package in collection DSNESPCS from the local location, dynamic statements are restricted to 40 000 SUs. The second row indicates that if anyone from any location (including the local location) runs SPUFI package DSNESM68, dynamic statements are limited to 15 000 SUs.

Governing by LU name: Specify an originating system's LU name in the LUNAME column, or specify PUBLIC for all remote LUs. An LUNAME with a value other than PUBLIC takes precedence over PUBLIC. If you leave LUNAME blank, DB2 assumes that you mean the local location only, and none of your incoming distributed requests will qualify. PUBLIC is the only value for TCP/IP connections.

Setting a default for when no row matches: If no row in the RLST matches the currently executing statement, DB2 uses the default set on the RLST ACCESS ERROR field of installation panel DSNTIPO (for queries that originate locally) or DSNTIPR (for queries that originate remotely). This default applies to reactive governing only. For predictive governing, if no row matches, no predictive governing occurs.
Predictive governing
DB2's predictive governing capability has an advantage over the reactive governor: it avoids wasting processing resources by giving you the ability to prevent a query from running when it appears that the query will exceed processing limits. With the reactive governor, those resources are already used before the query is stopped. See Figure 77 for an overview of how predictive governing works.
Figure 77. Processing for predictive governing. (The flowchart shows that DB2 calculates the cost during PREPARE; for cost category A statements, the cost is compared with RLFASUERR and RLFASUWARN to decide between executing, returning SQLCODE -495, or returning SQLCODE +495 and letting the application decide; for cost category B statements, the RLF_CATEGORY_B value 'Y', 'N', or 'W' determines the action.)
At prepare time for a dynamic SELECT, INSERT, UPDATE, or DELETE statement, DB2 searches the active RLST to determine whether the processor cost estimate exceeds the error or warning threshold that you set in the RLFASUWARN and RLFASUERR columns for that statement. DB2 compares the cost estimate for a statement to the thresholds you set, and the following actions occur:
v If the cost estimate is in cost category A and the error threshold is exceeded, DB2 returns a -495 SQLCODE to the application, and the statement is not prepared or run.
v If the estimate is in cost category A and the warning threshold is exceeded, a +495 SQLCODE is returned at prepare time. The prepare is completed, and the application or user decides whether to run the statement.
v If the estimate is in cost category B, DB2 takes the action you specify in the RLF_CATEGORY_B column; that is, it either prepares and executes the statement, does not prepare or execute the statement, or returns a warning SQLCODE, which lets the application decide what to do.
v If the estimate is in cost category B and the warning threshold is exceeded, a +495 SQLCODE is returned at prepare time. The prepare is completed, and the application or user decides whether to run the statement.

Example: Table 117 on page 731 is an RLST with two rows that use predictive governing.
Table 117. Predictive governing example

RLFFUNC   AUTHID    RLFCOLLN   RLFPKG   RLFASUWARN   RLFASUERR   RLF_CATEGORY_B
7         (blank)   COLL1      C1PKG1   900          1500        Y
7         (blank)   COLL2      C2PKG1   900          1500        W
The rows in the RLST for this example cause DB2 to act as follows for all dynamic INSERT, UPDATE, DELETE, and SELECT statements in the packages listed in this table (C1PKG1 and C2PKG1):
v Statements in cost category A that are predicted to be less than 900 SUs will execute.
v Statements in cost category A that are predicted to be between 900 and 1500 SUs receive a +495 SQLCODE.
v Statements in cost category A that are predicted to be greater than 1500 SUs receive SQLCODE -495, and the statement is not executed.

Cost category B: The two rows differ only in how statements in cost category B are treated. For C1PKG1, the statement will execute. For C2PKG1, the statements receive a +495 SQLCODE, and the user or application must decide whether to execute the statement.
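The two rows in Table 117 could be inserted as follows; this is a sketch, assuming an 11-column format table named AUTHID.DSNRLST01 (the name is illustrative):

   -- predictive governing by package name (RLFFUNC='7')
   INSERT INTO AUTHID.DSNRLST01
          (RLFFUNC, RLFCOLLN, RLFPKG, RLFASUWARN, RLFASUERR, RLF_CATEGORY_B)
          VALUES ('7', 'COLL1', 'C1PKG1', 900, 1500, 'Y');
   INSERT INTO AUTHID.DSNRLST01
          (RLFFUNC, RLFCOLLN, RLFPKG, RLFASUWARN, RLFASUERR, RLF_CATEGORY_B)
          VALUES ('7', 'COLL2', 'C2PKG1', 900, 1500, 'W');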
The rows in the RLST for this example cause DB2 to act as follows for a dynamic SQL statement that runs under PLANA:

Predictive mode:
v If the statement is in COST_CATEGORY A and the cost estimate is greater than 1000 SUs, USER1 receives SQLCODE -495 and the statement is not executed.
v If the statement is in COST_CATEGORY A and the cost estimate is greater than 800 SUs but less than 1000 SUs, USER1 receives SQLCODE +495.
v If the statement is in COST_CATEGORY B, USER1 receives SQLCODE +495.

Reactive mode: In either of the following cases, a statement is limited to 1100 SUs:
v The cost estimate for a statement in COST_CATEGORY A is less than 800 SUs.
v The cost estimate for a statement in COST_CATEGORY A is greater than 800 and less than 1000 SUs, or the statement is in COST_CATEGORY B, and the user chooses to execute the statement.
v For dynamic statements coming from requesters using DRDA protocols, you must govern by package name (RLFFUNC='2' or RLFFUNC='7'), which means that PLANNAME must be blank. Specify the originating system's LU name in the LUNAME column, or specify PUBLIC for all remote LUs. If you leave LUNAME blank, DB2 assumes that you mean the local location only, and none of your incoming distributed requests will qualify.
v For dynamic statements coming from requesters using DB2 private protocol, you must govern by plan name (RLFFUNC=blank or '6'), which means that RLFCOLLN and RLFPKG must be blank. Specify the originating system's LU name in the LUNAME column, or specify PUBLIC for all remote LUs. Again, a value other than PUBLIC takes precedence over PUBLIC. PLANNAME can be blank or the name of the plan created at the requester's location.
v For dynamic statements coming from requesters using TCP/IP, you cannot specify the LU name. You must use PUBLIC.
v If no row is present in the RLST to govern access from a remote location, the limit is the default set on the RLST ACCESS ERROR field of installation panel DSNTIPR.
The value for service units per second depends on the processor model. You can find this value for your processor model in z/OS MVS Initialization and Tuning Guide, where SRM is discussed. For example, if processor A is rated at 900 service units per second and you do not want any single dynamic SQL statement to use more than 10 seconds of processor time, you could set ASUTIME as follows:
ASUTIME = 10 seconds × 900 service units/second = 9000 service units
Later, you could upgrade to processor B, which is rated at 1000 service units per second. If the value you set for ASUTIME remains the same (9000 service units), your dynamic SQL is only allowed 9 seconds for processing time but an equivalent number of processor service units:
ASUTIME = 9 seconds × 1000 service units/second = 9000 service units
As this example illustrates, after you establish an ASUTIME (or RLFASUWARN or RLFASUERR) for your current processor, there is no need to modify it when you change processors.
3. LUNAME matches. A value of PUBLIC for LUNAME applies to all authorization IDs at all locations, while a blank LUNAME governs bind operations for IDs at the local location only.
4. If no entry matches, or if your RLST cannot be read, the resource limit facility does not disable bind operations.
Example
Table 119 is an example of an RLST that disables bind operations for all but three authorization IDs. Notice that BINDER from the local site is able to bind but that BINDER from San Francisco is not able to bind. Everyone else from all locations, including the local one, is disabled from doing binds.
Table 119. Restricting bind operations

RLFFUNC   AUTHID     LUNAME    RLFBIND
1         BINDGUY    PUBLIC    (blank)
1         NIGHTBND   PUBLIC    (blank)
1         (blank)    PUBLIC    N
1         BINDER     SANFRAN   N
1         BINDER     (blank)   (blank)
If the RLST in Table 120 on page 733 is active, it causes the following effects:
v Disables I/O parallelism for all dynamic queries in IOHOG.
v Disables CP parallelism and Sysplex query parallelism for all dynamic queries in CPUHOG.
DSNTIPE. These values limit the number of connections to DB2. The number of threads and connections allowed affects the amount of work that DB2 can process.
EDM pool size: The size of the EDM pool influences the number of I/Os needed to load the control structures necessary to process the plan or package. To avoid a large number of allocation I/Os, the EDM pool must be large enough to contain the structures that are needed. See Tuning EDM storage on page 685 for more information.
v The number of rows that are evaluated during the second stage (stage 2)
v The number of getpage requests that are issued to enforce referential constraints
v The number of rows that are deleted or set null to enforce referential constraints
v The number of inserted rows
From a system performance perspective, the most important factor in the performance of SQL statement execution is the size of the database buffer pool. If the buffer pool is large enough, some index and data pages can remain there and can be accessed again without an additional I/O operation. For more information on buffer pools, see Chapter 27, Tuning DB2 buffer, EDM, RID, and sort pools, on page 671.
3. Thread termination When the thread is terminated, the accounting record is written. It does not report transaction activity that takes place before the thread is created. If RELEASE(DEALLOCATE) is used to release table space locks, the DBD use count is decreased, and the thread storage is released.
recommended value INACTIVE for the DDF THREADS field of installation panel DSNTIPR. Figure 79 illustrates the relationship between the number of active threads in the system and the total number of connections.
Figure 79. Relationship between active threads and the maximum number of connections (up to 150 000 maximum remote connections).
When the conditions listed in Table 121 are true, the thread can be pooled when a COMMIT is issued. After a ROLLBACK, a thread can be pooled even if it had open cursors defined WITH HOLD or a held LOB locator because ROLLBACK
closes all cursors and LOB locators. A ROLLBACK is also the only way to clear information about a dynamic SQL statement that is kept by the KEEPDYNAMIC(YES) bind option.
Recommendation: Use the default option with the option ACTIVE for the DDF THREADS field on installation panel DSNTIPR. If you specify a timeout interval with ACTIVE, an application must start its next unit of work within the specified timeout period or risk being canceled.

TCP/IP keep_alive interval for the DB2 subsystem: For TCP/IP connections, it is a good idea to specify the IDLE THREAD TIMEOUT value in conjunction with a TCP/IP keep_alive interval of 5 minutes or less, to make sure that resources are not locked for a long time when a network outage occurs. It is recommended that you override the TCP/IP stack keep_alive interval on a single DB2 subsystem by specifying a numeric value in the TCP/IP KEEPALIVE field on installation panel DSNTIPS.
Table 122. Requirements for inactive threads (continued)

If there is...                                                    Thread can be inactive?
A declared temporary table that is active (the table was not     No
explicitly dropped through the DROP TABLE statement)
Important: If you do not classify your DDF transactions into service classes, they are assigned to the default class, the discretionary class, which is at a very low priority.

Classification attributes: Each of the WLM classification attributes has a two- or three-character abbreviation that you can use when entering the attribute on the WLM menus. The following WLM classification attributes pertain to DB2 DDF threads:

AI    Accounting information. The value of the DB2 accounting string associated with the DDF server thread, described by QMDAAINF in the DSNDQMDA mapping macro. WLM imposes a maximum length of 143 bytes for accounting information.
CI    The DB2 correlation ID of the DDF server thread, described by QWHCCV in the DSNDQWHC mapping macro.
CN    The DB2 collection name of the first SQL package accessed by the DRDA requester in the unit of work.
LU    The VTAM LUNAME of the system that issued the SQL request.
NET   The VTAM NETID of the system that issued the SQL request.
PC    Process name. This attribute can be used to classify the application name or the transaction name. The value is defined by QWHCEUTX in the DSNDQWHC mapping macro.
PK    The name of the first DB2 package accessed by the DRDA requester in the unit of work.
PN    The DB2 plan name associated with the DDF server thread. For DB2 private protocol requesters and DB2 DRDA requesters that are at Version 3 or subsequent releases, this is the DB2 plan name of the requesting application. For other DRDA requesters, use DISTSERV for PN.
PR    Stored procedure name. This classification applies only if the first SQL statement from the client is a CALL statement.
SI    Subsystem instance. The DB2 server's z/OS subsystem name.
SPM   Subsystem parameter. This qualifier has a maximum length of 255 bytes. The first 16 bytes contain the client's user ID. The next 18 bytes contain the client's workstation name. The remaining 221 bytes are reserved. Important: If the length of the client's user ID is less than 16 bytes, blanks after the user ID pad the length. If the length of the client's workstation name is less than 18 bytes, blanks after the workstation name pad the length.
SSC   Subsystem collection name. When the DB2 subsystem is a member of a DB2 data sharing group, this attribute can be used to classify the data sharing group name. The value is defined by QWHADSGN in the DSNDQWHA mapping macro.
UI    User ID. The DDF server thread's primary authorization ID, after inbound name translation.
Figure 80 on page 747 shows how you can associate DDF threads with service classes.
Subsystem-Type Xref  Notes  Options  Help
--------------------------------------------------------------------------
                  Create Rules for the Subsystem Type       Row 1 to 5 of 5

Subsystem Type . . . . . . . . DDF          (Required)
Description  . . . . . . . . . Distributed DB2
Fold qualifier names?  . . . . Y   (Y or N)

Enter one or more action codes: A=After B=Before C=Copy D=Delete
 M=Move I=Insert rule IS=Insert Sub-rule R=Repeat

          --------Qualifier--------------        -------Class--------
 Action   Type   Name      Start                 Service    Report
                                     DEFAULTS:   PRDBATCH   ________
 ____  1  SI     DB2P      ___                   PRDBATCH   ________
 ____  2  CN     ONLINE    ___                   PRDONLIN   ________
 ____  2  PRC    PAYPROC   ___                   PRDONLIN   ________
 ____  2  UI     SYSADM    ___                   PRDONLIN   ________
 ____  2  PK     QMFOS2    ___                   PRDQUERY   ________
 ____  1  SI     DB2T      ___                   TESTUSER   ________
 ____  2  PR     PAYPROCT  ___                   TESTPAYR   ________
****************************** BOTTOM OF DATA ******************************
Figure 80. Classifying DDF threads using z/OS workload management. You assign performance goals to service classes by using the service classes menu of WLM.
In Figure 80, the following classifications are shown:
v All DB2P applications accessing their first SQL package in the collection ONLINE are in service class PRDONLIN.
v All DB2P applications that call stored procedure PAYPROC first are in service class PRDONLIN.
v All work performed by DB2P user SYSADM is in service class PRDONLIN.
v Users other than SYSADM that run the DB2P package QMFOS2 are in the PRDQUERY class. (The QMFOS2 package is not in collection ONLINE.)
v All other work on the production system is in service class PRDBATCH.
v All users of the test DB2 system are assigned to the TESTUSER class, except for work that first calls stored procedure PAYPROCT, which is in service class TESTPAYR.
Provide an empty SSM member for regions that will not connect to DB2.
v Provide efficient thread reuse for high-volume transactions. Thread creation and termination is a significant cost in IMS transactions. IMS transactions identified as wait for input (WFI) can reuse threads: they create a thread at the first execution of an SQL statement and reuse it until the region is terminated. In general, though, use WFI only for transactions that reach a region utilization of at least 75%. Some degree of thread reuse can also be achieved with IMS class scheduling, queuing, and a PROCLIM count greater than one. IMS Fast Path (IFP) dependent regions always reuse the DB2 thread.
Because DB2 must be stopped to set new values, consider setting a higher MAX BATCH CONNECT for batch periods. The statistics record (IFCID 0001) provides information on the create thread queue. The OMEGAMON statistics report (in Figure 81 on page 750) shows that information under the SUBSYSTEM SERVICES section. For TSO or batch environments, having 1% of the requests queued is probably a good number to aim for by adjusting the MAX USERS value of installation panel DSNTIPE. Queuing at create thread time is not desirable in the CICS and IMS environments. If you are running IMS or CICS in the same DB2 subsystem as TSO and batch, use MAX BATCH CONNECT and MAX TSO CONNECT to limit the number of threads taken by the TSO and batch environments. The goal is to allow enough threads for CICS and IMS so that their threads do not queue. To determine the number of allied threads queued, see the QUEUED AT CREATE THREAD field ( A ) of the OMEGAMON statistics report.
SUBSYSTEM SERVICES              QUANTITY
---------------------------     --------
IDENTIFY                        30757.00
CREATE THREAD                   30889.00
SIGNON                              0.00
TERMINATE                       61661.00
ROLLBACK                          644.00
COMMIT PHASE 1                      0.00
COMMIT PHASE 2                      0.00
READ ONLY COMMIT                    0.00
UNITS OF RECOVERY INDOUBT           0.00
UNITS OF REC.INDBT RESOLVED         0.00
SYNCHS(SINGLE PHASE COMMIT)     30265.00
QUEUED AT CREATE THREAD  A          0.00
SUBSYSTEM ALLIED MEMORY EOT         1.00
SUBSYSTEM ALLIED MEMORY EOM         0.00
SYSTEM EVENT CHECKPOINT             0.00
Declared lengths of host variables: For string comparisons other than equal comparisons, ensure that the declared length of a host variable is less than or equal to the length attribute of the table column that it is compared to. For languages in which character strings are nul-terminated, the string length can be less than or equal to the column length plus 1. If the declared length of the host variable is greater than the column length, the predicate is stage 1 but cannot be a matching predicate for an index scan. For example, assume that a host variable and an SQL column are defined as follows:
C language declaration: char string_hv[15]
SQL definition:         STRING_COL CHAR(12)
A predicate such as WHERE STRING_COL > :string_hv is not a matching predicate for an index scan because the length of string_hv is greater than the length of STRING_COL. One way to avoid an inefficient predicate using character host variables is to declare the host variable with a length that is less than or equal to the column length:
char string_hv[12]
Because this is a C language example, the host variable length could be 1 byte greater than the column length:
char string_hv[13]
For numeric comparisons, a comparison between a DECIMAL column and a float or real host variable is stage 2 if the precision of the DECIMAL column is greater than 15. For example, assume that a host variable and an SQL column are defined as follows:
C language declaration: float float_hv
SQL definition:         DECIMAL_COL DECIMAL(16,2)
A predicate such as WHERE DECIMAL_COL = :float_hv is not a matching predicate for an index scan, because the precision of DECIMAL_COL is greater than 15. However, if DECIMAL_COL is defined as DECIMAL(15,2), the predicate is stage 1 and indexable.
Assuming that subquery 1 and subquery 2 are the same type of subquery (either correlated or noncorrelated) and the subqueries are stage 2, DB2 evaluates the subquery predicates in the order they appear in the WHERE clause. Subquery 1 rejects 10% of the total rows, and subquery 2 rejects 80% of the total rows. The predicate in subquery 1 (which is referred to as P1) is evaluated 1000 times, and the predicate in subquery 2 (which is referred to as P2) is evaluated 900 times, for a total of 1900 predicate checks. However, if the order of the subquery predicates is reversed, P2 is evaluated 1000 times, but P1 is evaluated only 200 times, for a total of 1200 predicate checks.

Coding P2 before P1 appears to be more efficient if P1 and P2 take an equal amount of time to execute. However, if P1 is 100 times faster to evaluate than P2, then coding subquery 1 first might be advisable. If you notice a performance degradation, consider reordering the subqueries and monitoring the results. Consult Writing efficient subqueries on page 785 to help you understand what factors make one subquery run more slowly than another. If you are unsure, run EXPLAIN on the query with both a correlated and a noncorrelated subquery. By examining the EXPLAIN output and understanding your data distribution and SQL statements, you should be able to determine which form is more efficient.

This general principle can apply to all types of predicates. However, because subquery predicates can potentially be thousands of times more processor- and I/O-intensive than all other predicates, the order of subquery predicates is particularly important.

Regardless of coding order, DB2 performs noncorrelated subquery predicates before correlated subquery predicates, unless the subquery is transformed into a join. Refer to DB2 predicate manipulation on page 775 to see in what order DB2 will evaluate predicates and when you can control the evaluation order.
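The situation the example describes might be coded as in the following sketch (the tables, columns, and selectivities are hypothetical):

   SELECT * FROM T1                     -- assume 1000 qualifying rows
   WHERE T1.C1 IN (SELECT C1 FROM T2)   -- subquery 1 (P1): rejects 10% of rows
     AND T1.C2 IN (SELECT C2 FROM T3);  -- subquery 2 (P2): rejects 80% of rows
   -- As coded: P1 is checked 1000 times and P2 900 times (1900 checks).
   -- With the two predicates swapped: P2 is checked 1000 times and P1 only
   -- 200 times (1200 checks).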
v The aggregate function is not one of the following aggregate functions:
  - STDDEV
  - STDDEV_SAMP
  - VAR
  - VAR_SAMP

If your query involves the functions MAX or MIN, refer to One-fetch access (ACCESSTYPE=I1) on page 957 to see whether your query could take advantage of that method.
If you rewrite the predicate in the following way, DB2 can evaluate it more efficiently:
WHERE SALARY > 50000/(1 + :hv1)
In the second form, the column is by itself on one side of the operator, and all the other values are on the other side of the operator. The expression on the right is called a noncolumn expression. DB2 can evaluate many predicates with noncolumn expressions at an earlier stage of processing called stage 1, so the queries take less time to run. For more information on noncolumn expressions and stage 1 processing, see Properties of predicates on page 755.
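For contrast, the column-expression form that the rewrite replaces would look something like the following sketch (the first predicate is reconstructed from the rewritten form shown above):

   WHERE SALARY*(1+:hv1) > 50000    -- column expression on the left: stage 2
   WHERE SALARY > 50000/(1+:hv1)    -- noncolumn expression on the right: stage 1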
Materialized query tables are user-created tables. Depending on how the tables are defined, they are user-maintained or system-maintained. If you have set subsystem parameters, or if an application sets special registers, to tell DB2 to use materialized query tables, then when DB2 executes a dynamic query, DB2 uses the contents of applicable materialized query tables if it finds a performance advantage in doing so. For information about materialized query tables, see Chapter 32, Using materialized query tables, on page 885.
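As a rough illustration (the table, columns, and query are hypothetical; see Chapter 32 for the full syntax), a system-maintained materialized query table that DB2 can consider when it rewrites a dynamic query might be defined as follows:

   CREATE TABLE TRANSCNT (ACCTID, LOCID, CNT) AS
     (SELECT ACCTID, LOCID, COUNT(*)
        FROM TRANS
        GROUP BY ACCTID, LOCID)
     DATA INITIALLY DEFERRED
     REFRESH DEFERRED
     MAINTAINED BY SYSTEM
     ENABLE QUERY OPTIMIZATION;

   SET CURRENT REFRESH AGE ANY;  -- lets DB2 consider materialized query tables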
Effect on access paths: This section explains the effect of predicates on access paths. Because SQL allows you to express the same query in different ways, knowing how predicates affect path selection helps you write queries that access data efficiently. This section describes:
v Properties of predicates
v General rules about predicate evaluation on page 759
v Predicate filter factors on page 766
v DB2 predicate manipulation on page 775
v Column correlation on page 772
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence, in this section the term predicate means a predicate after WHERE or ON. A predicate influences the selection of an access path because of:
v Its type, as described in Predicate types on page 756
v Whether it is indexable, as described in Indexable and nonindexable predicates on page 757
v Whether it is stage 1 or stage 2
v Whether it contains a ROWID column, as described in Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 946

There are special considerations for Predicates in the ON clause on page 758.

Predicate definitions: Predicates are identified as:

Simple or compound
A compound predicate is the result of two predicates, whether simple or compound, connected together by AND or OR Boolean operators. All others are simple.

Local or join
Local predicates reference only one table. They are local to the table and restrict the number of rows returned for that table. Join predicates involve more than one table or correlated reference. They determine the way rows are joined from two or more tables. For examples of their use, see Interpreting access to two or more tables (join) on page 959.

Boolean term
Any predicate that is not contained by a compound OR predicate structure is a Boolean term. If a Boolean term is evaluated false for a particular row, the whole WHERE clause is evaluated false for that row.
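A short sketch that labels these categories in one WHERE clause (the tables and columns are hypothetical):

   SELECT * FROM T1, T2
   WHERE T1.C1 = T2.C1                 -- join predicate; Boolean term
     AND T1.C2 = 10                    -- simple local predicate; Boolean term
     AND (T1.C3 = 'A' OR T1.C4 > 5);   -- compound local predicate; the OR
                                       -- structure as a whole is a Boolean
                                       -- term, but neither predicate inside
                                       -- it is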
Predicate types
The type of a predicate depends on its operator or syntax. The type determines what type of processing and filtering occurs when the predicate is evaluated. Table 123 shows the different predicate types.
Table 123. Definitions and examples of predicate types

Type      Definition                                                       Example
Subquery  Any predicate that includes another SELECT statement.            C1 IN (SELECT C10 FROM TABLE1)
Equal     Any predicate that is not a subquery predicate and has an        C1=100
          equal operator and no NOT operator. Also included are
          predicates of the form C1 IS NULL and C IS NOT DISTINCT FROM.
Range     Any predicate that is not a subquery predicate and has an        C1>100
          operator in the following list: >, >=, <, <=, LIKE, or BETWEEN.
IN-list   A predicate of the form column IN (list of values).              C1 IN (5,10,15)
NOT       Any predicate that is not a subquery predicate and contains      C1 <> 100
          a NOT operator. Also included are predicates of the form
          C1 IS DISTINCT FROM.
Example: Influence of type on access paths: The following two examples show how the predicate type can influence DB2's choice of an access path. In each one, assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values of C1 are positive integers. The following query has a range predicate:
SELECT C1, C2 FROM T1 WHERE C1 >= 0;
However, the predicate does not eliminate any rows of T1. Therefore, it could be determined during bind that a table space scan is more efficient than the index scan. The following query has an equal predicate:
SELECT * FROM T1 WHERE C1 = 0;
DB2 chooses the index access in this case because the index is highly selective on column C1.
Recommendation: To make your queries as efficient as possible, use indexable predicates in your queries and create suitable indexes on your tables. Indexable predicates allow the possible use of a matching index scan, which is often a very efficient access path.
The predicate is not indexable because the length of the column is shorter than the length of the constant. Example: The following predicate is not stage 1:
DECCOL>34.5, where DECCOL is defined as DECIMAL(18,2)
The predicate is not stage 1 because the precision of the decimal column is greater than 15.
v Whether DB2 evaluates the predicate before or after a join operation. A predicate that is evaluated after a join operation is always a stage 2 predicate.
v Join sequence. The same predicate might be stage 1 or stage 2, depending on the join sequence. Join sequence is the order in which DB2 joins tables when it evaluates a query. The join sequence is not necessarily the same as the order in which the tables appear in the predicate.

Example: This predicate might be stage 1 or stage 2:
T1.C1=T2.C1+1
If T2 is the first table in the join sequence, the predicate is stage 1, but if T1 is the first table in the join sequence, the predicate is stage 2. You can determine the join sequence by executing EXPLAIN on the query and examining the resulting plan table. See Chapter 34, Using EXPLAIN to improve SQL performance, on page 931 for details.

All indexable predicates are stage 1. The predicate C1 LIKE '%BC' is stage 1, but is not indexable.

Recommendation: Use stage 1 predicates whenever possible.
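As noted above, you can check the join sequence with EXPLAIN. A minimal sketch follows (the query number is arbitrary, and a PLAN_TABLE must exist under your authorization ID):

   EXPLAIN PLAN SET QUERYNO = 100 FOR
     SELECT * FROM T1, T2
     WHERE T1.C1 = T2.C1 + 1;

   SELECT QBLOCKNO, PLANNO, TNAME, METHOD
     FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;

The order of the rows by PLANNO within a query block reflects the order in which the tables are joined.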
v P1 is a simple BT predicate.
v P2 and P3 are simple non-BT predicates.
v P2 OR P3 is a compound BT predicate.
v P1 AND (P2 OR P3) is a compound BT predicate.
Effect on access paths: In single-index processing, only Boolean term predicates are chosen for matching predicates. Hence, only indexable Boolean term predicates are candidates for matching index scans. To match index columns by predicates that are not Boolean terms, DB2 considers multiple-index access. In join operations, Boolean term predicates can reject rows at an earlier stage than can non-Boolean term predicates. Recommendation: For join operations, choose Boolean term predicates over non-Boolean term predicates whenever possible.
For full outer join, the ON clause is evaluated during the join operation like a stage 2 predicate. In an outer join, predicates that are evaluated after the join are stage 2 predicates. Predicates in a table expression can be evaluated before the join and can therefore be stage 1 predicates. Example: In the following statement, the predicate EDLEVEL > 100 is evaluated before the full join and is a stage 1 predicate:
SELECT * FROM (SELECT * FROM DSN8810.EMP
               WHERE EDLEVEL > 100) AS X
       FULL JOIN DSN8810.DEPT
       ON X.WORKDEPT = DSN8810.DEPT.DEPTNO;
For more information about join methods, see Interpreting access to two or more tables (join) on page 959.
1. All equal predicates (including column IN list, where list has only one element, or column BETWEEN value1 AND value1) are evaluated.
2. All range predicates and predicates of the form column IS NOT NULL are evaluated.
3. All other predicate types are evaluated.

After both sets of rules are applied, predicates are evaluated in the order in which they appear in the query. Because you specify that order, you have some control over the order of evaluation.
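A small sketch of these rules (the table and columns are hypothetical; all three predicates are stage 1):

   SELECT * FROM T1
   WHERE C2 BETWEEN 10 AND 20   -- range predicate: evaluated second
     AND C1 = 5                 -- equal predicate: evaluated first, although
                                -- it appears later in the query
     AND C3 IS NOT NULL;        -- evaluated with the range predicates, after
                                -- C2 because of coding order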
Exception: Regardless of coding order, non-correlated subqueries are evaluated before correlated subqueries, unless DB2 transforms the subquery into a join.
v Tn col expr is an expression that contains a column in table Tn. The expression might be only that column.
v predicate is a predicate of any type.

In general, if you form a compound predicate by combining several simple predicates with OR operators, the result of the operation has the same characteristics as the simple predicate that is evaluated latest. For example, if two indexable predicates are combined with an OR operator, the result is indexable. If a stage 1 predicate and a stage 2 predicate are combined with an OR operator, the result is stage 2.
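For example (a sketch; C1 and C2 are hypothetical columns):

   WHERE C1 = 5 OR C1 > 100               -- indexable OR indexable: the
                                          -- result is indexable
   WHERE C1 = 5 OR SUBSTR(C2,1,1) = 'A'   -- stage 1 OR stage 2: the result
                                          -- is stage 2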
Table 124. Predicate types and processing

Predicate Type                                  Indexable?  Stage 1?  Notes
COL = value                                     Y           Y         16
COL = noncol expr                               Y           Y         9, 11, 12, 15
COL IS NULL                                     Y           Y         20, 21
Table 124. Predicate types and processing (continued)

Predicate Type                                  Indexable?  Stage 1?  Notes
COL op value                                    Y           Y         13
COL op noncol expr                              Y           Y         9, 11, 12, 13
COL BETWEEN value1 AND value2                   Y           Y         13
COL BETWEEN noncol expr1 AND noncol expr2       Y           Y         9, 11, 12, 13, 15, 23
value BETWEEN COL1 AND COL2                     N           N
COL BETWEEN COL1 AND COL2                       N           N         10
COL BETWEEN expression1 AND expression2         Y           Y         6, 7, 11, 12, 13, 14
COL LIKE 'pattern'                              Y           Y         5
COL IN (list)                                   Y           Y         17, 18
COL <> value                                    N           Y         8, 11
COL <> noncol expr                              N           Y         8, 11
COL IS NOT NULL                                 Y           Y         21
COL NOT BETWEEN value1 AND value2               N           Y
COL NOT BETWEEN noncol expr1 AND noncol expr2   N           Y
value NOT BETWEEN COL1 AND COL2                 N           N
COL NOT IN (list)                               N           Y
COL NOT LIKE 'char'                             N           Y         5
COL LIKE '%char'                                N           Y         1, 5
COL LIKE '_char'                                N           Y         1, 5
COL LIKE host variable                          Y           Y         2, 5
T1.COL = T2 col expr                            Y           Y         6, 9, 11, 12, 14, 15, 25
T1.COL op T2 col expr                           Y           Y         6, 9, 11, 12, 13, 14, 15
T1.COL <> T2 col expr                           N           Y         8, 11
T1.COL1 = T1.COL2                               N           N         3, 25
T1.COL1 op T1.COL2                              N           N         3
T1.COL1 <> T1.COL2                              N           N         3
COL = (noncor subq)                             Y           Y         26 on page 765
COL = ANY (noncor subq)                         N           N         22
COL = ALL (noncor subq)                         N           N         22
COL op (noncor subq)                            Y           Y
COL op ANY (noncor subq)                        Y           Y
COL op ALL (noncor subq)                        Y           Y
Table 124. Predicate types and processing (continued)
Predicate Type                                     Indexable?  Stage 1?  Notes
COL <> (noncor subq)                               N           Y
COL <> ANY (noncor subq)                           N           N         22
COL <> ALL (noncor subq)                           N           N
COL IN (noncor subq)                               Y           Y         24
(COL1,...COLn) IN (noncor subq)                    Y           Y
COL NOT IN (noncor subq)                           N           N
(COL1,...COLn) NOT IN (noncor subq)                N           N
COL = (cor subq)                                   N           N         4
COL = ANY (cor subq)                               N           N         22
COL = ALL (cor subq)                               N           N
COL op (cor subq)                                  N           N         4
COL op ANY (cor subq)                              N           N         22
COL op ALL (cor subq)                              N           N
COL <> (cor subq)                                  N           N         4
COL <> ANY (cor subq)                              N           N         22
COL <> ALL (cor subq)                              N           N
COL IN (cor subq)                                  N           N         19
(COL1,...COLn) IN (cor subq)                       N           N
COL NOT IN (cor subq)                              N           N
(COL1,...COLn) NOT IN (cor subq)                   N           N
COL IS DISTINCT FROM value                         N           Y         8, 11
COL IS NOT DISTINCT FROM value                     Y           Y         16
COL IS DISTINCT FROM noncol expr                   N           Y         8, 11
COL IS NOT DISTINCT FROM noncol expr               Y           Y         9, 11, 12, 15
T1.COL1 IS DISTINCT FROM T2.COL2                   N           N         3
T1.COL1 IS NOT DISTINCT FROM T2.COL2               N           N         3
T1.COL1 IS DISTINCT FROM T2 col expr               N           Y         8, 11
T1.COL1 IS NOT DISTINCT FROM T2 col expr           Y           Y         6, 9, 11, 12, 14, 15
COL IS DISTINCT FROM (noncor subq)                 N           Y
COL IS NOT DISTINCT FROM (noncor subq)             Y           Y
COL IS NOT DISTINCT FROM (cor subq)                N           N         4
EXISTS (subq)                                      N           N         19
NOT EXISTS (subq)                                  N           N
expression = value                                 N           N
expression <> value                                N           N
expression op value                                N           N
expression op (subq)                               N           N
Notes to Table 124 on page 760:
1. Indexable only if an ESCAPE character is specified and used in the LIKE predicate. For example, COL LIKE '+%char' ESCAPE '+' is indexable.
2. Indexable only if the pattern in the host variable is an indexable constant (for example, host variable='char%').
3. If both COL1 and COL2 are from the same table, access through an index on either one is not considered for these predicates. However, the following query is an exception:
SELECT * FROM T1 A, T1 B WHERE A.C1 = B.C2;
By using correlation names, the query treats one table as if it were two separate tables. Therefore, indexes on columns C1 and C2 are considered for access.
4. If the subquery has already been evaluated for a given correlation value, then the subquery might not have to be reevaluated.
5. Not indexable or stage 1 if a field procedure exists on that column.
6. The column on the left side of the join sequence must be in a different table from any columns on the right side of the join sequence.
7. The tables that contain the columns in expression1 or expression2 must already have been accessed.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <> value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of one of these forms, then the predicate is not indexable:
   v noncol expr + 0
   v noncol expr - 0
   v noncol expr * 1
   v noncol expr / 1
   v noncol expr CONCAT empty string
10. COL, COL1, and COL2 can be the same column or different columns. The columns are in the same table.
11. Any of the following sets of conditions makes the predicate stage 2:
   v The first value obtained before the predicate is evaluated is DECIMAL(p,s), where p>15, and the second value obtained before the predicate is evaluated is REAL or FLOAT.
   v The first value obtained before the predicate is evaluated is CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC, and the second value obtained before the predicate is evaluated is DATE, TIME, or TIMESTAMP.
12. The predicate is stage 1 but not indexable if the first value obtained before the predicate is evaluated is CHAR or VARCHAR, the second value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC, and the first value obtained before the predicate is evaluated is not Unicode mixed.
13. If both sides of the comparison are strings, any of the following sets of conditions makes the predicate stage 1 but not indexable:
   v The first value obtained before the predicate is evaluated is CHAR or VARCHAR, and the second value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC.
   v Both of the following conditions are true:
     - Both sides of the comparison are CHAR or VARCHAR, or both sides of the comparison are BINARY or VARBINARY.
     - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
   v Both of the following conditions are true:
     - Both sides of the comparison are GRAPHIC or VARGRAPHIC.
     - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
   v Both of the following conditions are true:
     - The first value obtained before the predicate is evaluated is GRAPHIC or VARGRAPHIC, and the second value obtained before the predicate is evaluated is CHAR or VARCHAR.
     - The length of the first value obtained before the predicate is evaluated is less than the length of the second value obtained before the predicate is evaluated.
14. If both sides of the comparison are strings, but the two sides have different CCSIDs, the predicate is stage 1 and indexable only if the first value obtained before the predicate is evaluated is Unicode and the comparison does not meet any of the conditions in note 13 on page 763.
15. Under either of these circumstances, the predicate is stage 2:
   v noncol expr is a case expression.
   v All of the following conditions are true:
     - noncol expr is the product or the quotient of two noncolumn expressions
     - noncol expr is an integer value
     - COL is a FLOAT or a DECIMAL column
16. If COL has the ROWID data type, DB2 tries to use direct row access instead of index access or a table space scan.
17. If COL has the ROWID data type, and an index is defined on COL, DB2 tries to use direct row access instead of index access.
18. IN-list predicates are indexable and stage 1 if the following conditions are true:
   v The IN list contains only simple items. For example, constants, host variables, parameter markers, and special registers.
   v The IN list does not contain any aggregate functions or scalar functions.
   v The IN list is not contained in a trigger's WHEN clause.
   v For numeric predicates where the left side column is DECIMAL with precision greater than 15, none of the items in the IN list are FLOAT.
   v For string predicates, the coded character set identifier is the same as the identifier for the left side column.
   v For DATE, TIME, and TIMESTAMP predicates, the left side column must be DATE, TIME, or TIMESTAMP.
19. COL IN (corr subq) and EXISTS (corr subq) predicates might become indexable and stage 1 if they are transformed to a join during processing.
20. The predicate types COL IS NULL and COL IS NOT NULL are stage 2 predicates when they query a column that is defined as NOT NULL.
21. If the predicate type is COL IS NULL and the column is defined as NOT NULL, the table is not accessed because C1 cannot be NULL.
22. The ANY and SOME keywords behave similarly. If a predicate with the ANY keyword is not indexable and not stage 1, a similar predicate with the SOME keyword is not indexable and not stage 1.
23. Under either of these circumstances, the predicate is stage 2:
   v noncol expr is a case expression.
   v noncol expr is the product or the quotient of two noncolumn expressions, that product or quotient is an integer value, and COL is a FLOAT or a DECIMAL column.
24. COL IN (noncor subq) is stage 1 for type N access only. Otherwise, it is stage 2.
25. If the inner table is an EBCDIC or ASCII column and the outer table is a Unicode column, the predicate is stage 1 and indexable.
26. This type of predicate is not stage 1 when a nullability mismatch is possible.
The third predicate is stage 2. The compound predicate is stage 2 and all three predicates are evaluated at stage 2. The simple predicates are not Boolean terms and the compound predicate is not indexable.
v WHERE C1=5 OR (C2=7 AND C3=C4)
  The third predicate is stage 2. The two compound predicates (C2=7 AND C3=C4) and (C1=5 OR (C2=7 AND C3=C4)) are stage 2. All predicates are evaluated at stage 2.
v WHERE (C1>5 OR C2=7) AND C3 = C4
  The compound predicate (C1>5 OR C2=7) is indexable and stage 1. The simple predicate C3=C4 is not stage 1; so the index is not considered for matching-index access. Rows that satisfy the compound predicate (C1>5 OR C2=7) are passed to stage 2 for evaluation of the predicate C3=C4.
Example: The default filter factor for the predicate C1 = 'D' is 1/25 (0.04). If the actual filter factor for 'D' is not close to 0.04, the default probably does not lead to an optimal access path.
Table 125. DB2 default filter factors by predicate type
Predicate Type                          Filter Factor
Col = literal                           1/25
Col <> literal                          1 - (1/25)
Col IS NULL                             1/25
Col IS NOT DISTINCT FROM                1/25
Col IS DISTINCT FROM                    1 - (1/25)
Col IN (literal list)                   (number of literals)/25
Col Op literal                          1/3
Col LIKE literal                        1/10
Col BETWEEN literal1 and literal2       1/10

Note: Op is one of these operators: <, <=, >, >=. Literal is any constant value that is known at bind time.
Table 126. DB2 default filter factors for uniform distributions
Predicate Type                          Filter Factor
Col = literal                           1/COLCARDF
Col <> literal                          1 - (1/COLCARDF)
Col IS NULL                             1/COLCARDF
Col IS NOT DISTINCT FROM                1/COLCARDF
Col IS DISTINCT FROM                    1 - (1/COLCARDF)
Col IN (literal list)                   (number of literals)/COLCARDF
Col Op1 literal                         interpolation formula
Col Op2 literal                         interpolation formula
Col LIKE literal                        interpolation formula
Col BETWEEN literal1 and literal2       interpolation formula

Note: Op1 is < or <=, and the literal is not a host variable. Op2 is > or >=, and the literal is not a host variable. Literal is any constant value that is known at bind time.
Filter factors for other predicate types: The examples selected in Table 125 on page 767 and Table 126 on page 767 represent only the most common types of predicates. If P1 is a predicate and F is its filter factor, then the filter factor of the predicate NOT P1 is (1 - F). But, filter factor calculation is dependent on many things, so a specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter factor by an interpolation formula. The formula is based on an estimate of the ratio of the number of values in the range to the number of values in the entire column of the table.
The formulas: The formulas that follow are rough estimates, subject to further modification by DB2. They apply to a predicate of the form col op literal. The value of (Total Entries) in each formula is estimated from the values in columns HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col: Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable:
  (Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable:
  (HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN:
  (High literal value - Low literal value) / (Total Entries)
Example: For column C2 in a predicate, suppose that the value of HIGH2KEY is 1400 and the value of LOW2KEY is 200. For C2, DB2 calculates (Total Entries) = 1200. For the predicate C2 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
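Similarly, for one of the open-ended operators, the same HIGH2KEY and LOW2KEY values give the following worked example of the formula for < and <=, for the predicate C2 <= 500:

F = (500 - 200)/1200 = 0.25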
Interpolation for LIKE: DB2 treats a LIKE predicate as a type of BETWEEN predicate. Two values that bound the range qualified by the predicate are generated from the literal string in the predicate. Only the leading characters found before the wildcard character (% or _) are used to generate the bounds. So if the wildcard character is the first character of the string, the filter factor is estimated as 1, and the predicate is estimated to reject no rows.
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it can use a default filter factor. Defaults for interpolation are:
v Relevant only for ranges, including LIKE and BETWEEN predicates
v Used only when interpolation is not adequate
v Based on the value of COLCARDF
v Used whether uniform or additional distribution statistics exist on the column if either of the following conditions is met:
  - The predicate does not contain constants
  - COLCARDF < 4
Table 127 on page 769 shows interpolation defaults for the operators <, <=, >, >= and for LIKE and BETWEEN.
Table 127. Default filter factors for interpolation
COLCARDF       Factor for Op   Factor for LIKE or BETWEEN
>=100000000    1/10,000        3/100000
>=10000000     1/3,000         1/10000
>=1000000      1/1,000         3/10000
>=100000       1/300           1/1000
>=10000        1/100           3/1000
>=1000         1/30            1/100
>=100          1/10            3/100
>=2            1/3             1/10
=1             1/1             1/1
<=0            1/3             1/10
Table 128. Predicates for which distribution statistics are used
Type of statistic  Single column or concatenated columns  Predicates
Frequency          Concatenated
Table 128. Predicates for which distribution statistics are used (continued)
Type of statistic  Single column or concatenated columns  Predicates
Cardinality        Single                                  COL=literal
                                                           COL IS NULL
                                                           COL IN (literal-list)
                                                           COL op literal
                                                           COL BETWEEN literal AND literal
                                                           COL=host-variable
                                                           COL1=COL2
                                                           T1.COL=T2.COL
                                                           COL IS NOT DISTINCT FROM
Cardinality        Concatenated                            COL=literal
                                                           COL=:host-variable
                                                           COL1=COL2
                                                           COL IS NOT DISTINCT FROM

Note: op is one of these operators: <, <=, >, >=.
How they are used: Columns COLVALUE and FREQUENCYF in table SYSCOLDIST contain distribution statistics. Regardless of the number of values in those columns, running RUNSTATS deletes the existing values and inserts rows for frequent values.
You can run RUNSTATS without the FREQVAL option, with the FREQVAL option in the correl-spec, with the FREQVAL option in the colgroup-spec, or in both, with the following effects:
v If you run RUNSTATS without the FREQVAL option, RUNSTATS inserts rows for the 10 most frequent values for the first column of the specified index.
v If you run RUNSTATS with the FREQVAL option in the correl-spec, RUNSTATS inserts rows for concatenated columns of an index (a sketch follows this list). The NUMCOLS option specifies the number of concatenated index columns. The COUNT option specifies the number of frequent values. You can collect most-frequent values, least-frequent values, or both.
v If you run RUNSTATS with the FREQVAL option in the colgroup-spec, RUNSTATS inserts rows for the columns in the column group that you specify. The COUNT option specifies the number of frequent values. You can collect most-frequent values, least-frequent values, or both.
v If you specify the FREQVAL option in both, RUNSTATS inserts rows for columns of the specified index and for columns in a column group.
See Part 2 of DB2 Utility Guide and Reference for more information about RUNSTATS. DB2 uses the frequencies in column FREQUENCYF for predicates that use the values in column COLVALUE and assumes that the remaining data are uniformly distributed.
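A minimal sketch of the second case above (FREQVAL in the correl-spec), assuming an index DSN8810.XEMP2 in table space DSN8D81A.DSN8S81E; the object names and counts are illustrative:

RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
  INDEX(DSN8810.XEMP2
        FREQVAL NUMCOLS 2 COUNT 10 MOST)

This collects the 10 most frequent values of the first two concatenated key columns of the index.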
Example: Filter factor for a single column
Suppose that the predicate is C1 IN (3,5) and that SYSCOLDIST contains these values for column C1:

COLVALUE   FREQUENCYF
'3'        .0153
'5'        .0859
'8'        .0627
The filter factor is .0153 + .0859 = .1012.
Example: Filter factor for correlated columns
Suppose that columns C1 and C2 are correlated. Suppose also that the predicate is C1=3 AND C2=5 and that SYSCOLDIST contains these values for columns C1 and C2:
COLVALUE    FREQUENCYF
'1' '1'     .1176
'2' '2'     .0588
'3' '3'     .0588
'3' '5'     .1176
'4' '4'     .0588
'5' '3'     .1764
'5' '5'     .3529
'6' '6'     .0588

The filter factor for C1=3 AND C2=5 is the frequency for the concatenated value ('3','5'): .1176.
Table T1 consists of columns C1, C2, C3, and C4. Index I1 is defined on table T1 and contains columns C1, C2, and C3. Suppose that the simple predicates in the compound predicate have the following characteristics:
C1='A'   Matching predicate
C3='B'   Screening predicate
C4='C'   Stage 1, nonindexable predicate
To determine the cost of accessing table T1 through index I1, DB2 performs these steps:
1. Estimates the matching index cost. DB2 determines the index matching filter factor by using single-column cardinality and single-column frequency statistics because only one column can be a matching column.
2. Estimates the total index filtering. This includes matching and screening filtering. If statistics exist on column group (C1,C3), DB2 uses those statistics. Otherwise DB2 uses the available single-column statistics for each of these columns. DB2 will also use FULLKEYCARDF as a bound. Therefore, it can be critical to have column group statistics on column group (C1,C3) to get an accurate estimate.
3. Estimates the table-level filtering. If statistics are available on column group (C1,C3,C4), DB2 uses them. Otherwise, DB2 uses statistics that exist on subsets of those columns.
Important: If you supply appropriate statistics at each level of filtering, DB2 is more likely to choose the most efficient access path.
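A sketch of supplying the column group statistics that steps 2 and 3 call for, assuming table T1 resides in table space DB1.TS1 (the table space name is illustrative):

RUNSTATS TABLESPACE DB1.TS1 TABLE(T1)
  COLGROUP(C1,C3)
  COLGROUP(C1,C3,C4)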
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in column A do not vary independently of the values in column B. Example: Table 129 is an excerpt from a large single table. Columns CITY and STATE are highly correlated, and columns DEPTNO and SEX are entirely independent.
Table 129. Data from the CREWINFO table
CITY         STATE  DEPTNO  SEX  EMPNO  ZIPCODE
Fresno       CA     A345    F    27375  93650
Fresno       CA     J123    M    12345  93710
Fresno       CA     J123    F    93875  93650
Fresno       CA     J123    F    52325  93792
New York     NY     J123    M    19823  09001
New York     NY     A345    M    15522  09530
Miami        FL     B499    M    83825  33116
Miami        FL     A345    F    35785  34099
Los Angeles  CA     X987    M    12131  90077
Los Angeles  CA     A345    M    38251  90091
In this simple example, for every value of column CITY that equals 'FRESNO', there is the same value in column STATE ('CA').
The result of the count of each distinct column is the value of COLCARDF in the DB2 catalog table SYSCOLUMNS. Multiply the previous two values together to get a preliminary result:
CITYCOUNT x STATECOUNT = ANSWER1
Then count the number of distinct combinations of the two columns; the result is ANSWER2.
Compare the result of the previous count (ANSWER2) with ANSWER1. If ANSWER2 is less than ANSWER1, then the suspected columns are correlated.
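A sketch of the counts just described, run against the CREWINFO table in Table 129 (written as three separate queries for clarity):

SELECT COUNT(DISTINCT CITY) AS CITYCOUNT FROM CREWINFO;
SELECT COUNT(DISTINCT STATE) AS STATECOUNT FROM CREWINFO;
SELECT COUNT(*) AS ANSWER2
  FROM (SELECT DISTINCT CITY, STATE FROM CREWINFO) AS V1;

For the data in Table 129, CITYCOUNT = 4 and STATECOUNT = 3, so ANSWER1 = 12, while ANSWER2 = 4 (four distinct CITY, STATE combinations). Because 4 is less than 12, the columns are correlated.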
Column correlation can make the estimated cost of operations cheaper than they actually are. Column correlation affects both single table queries and join queries.
Column correlation on the best matching columns of an index: The following query selects rows with females in department A345 from Fresno, California. Two indexes are defined on the table, Index 1 (CITY,STATE,ZIPCODE) and Index 2 (DEPTNO,SEX).
Query 1
SELECT ... FROM CREWINFO
  WHERE CITY = 'FRESNO' AND STATE = 'CA'    (PREDICATE1)
    AND DEPTNO = 'A345' AND SEX = 'F';      (PREDICATE2)
Consider the two compound predicates (labeled PREDICATE1 and PREDICATE2), their actual filtering effects (the proportion of rows they select), and their DB2 filter factors. Unless the proper catalog statistics are gathered, the filter factors are calculated as if the columns of the predicate are entirely independent (not correlated). When the columns in a predicate correlate but the correlation is not reflected in catalog statistics, the actual filtering effect can be significantly different from the DB2 filter factor. Table 130 shows how the actual filtering effect and the DB2 filter factor can differ, and how that difference can affect index choice and performance.
Table 130. Effects of column correlation on matching columns
                                       INDEX 1                       INDEX 2
Matching predicates                    Predicate1                    Predicate2
                                       CITY=FRESNO AND STATE=CA      DEPTNO=A345 AND SEX=F
Matching columns                       2                             2
DB2 estimate for matching columns      column=CITY, COLCARDF=4       column=DEPTNO, COLCARDF=4
(Filter Factor)                        Filter Factor=1/4             Filter Factor=1/4
                                       column=STATE, COLCARDF=3      column=SEX, COLCARDF=2
                                       Filter Factor=1/3             Filter Factor=1/2
Compound filter factor for             1/4 x 1/3 = 0.083             1/4 x 1/2 = 0.125
matching columns
Qualified leaf pages based on          0.083 x 10 = 0.83             0.125 x 10 = 1.25
DB2 estimations                        INDEX CHOSEN (0.83 < 1.25)
Actual filter factor based on          4/10                          2/10
data distribution
Actual number of qualified leaf        4/10 x 10 = 4                 2/10 x 10 = 2
pages based on data distribution                                     BETTER INDEX CHOICE (2 < 4)
DB2 chooses an index that returns the fewest rows, partly determined by the smallest filter factor of the matching columns. Assume that filter factor is the only influence on the access path. The combined filtering of columns CITY and STATE seems very good, whereas the matching columns for the second index do not seem to filter as much. Based on those calculations, DB2 chooses Index 1 as an access path for Query 1.
The problem is that the filtering of columns CITY and STATE should not look good. Column STATE does almost no filtering. Since columns DEPTNO and SEX do a better job of filtering out rows, DB2 should favor Index 2 over Index 1. Column correlation on index screening columns of an index: Correlation might also occur on nonmatching index columns, used for index screening. See Nonmatching index scan (ACCESSTYPE=I and MATCHCOLS=0) on page 955 for more information. Index screening predicates help reduce the number of data rows that qualify while scanning the index. However, if the index screening predicates are correlated, they do not filter as many data rows as their filter factors suggest. To illustrate this, use Query 1 on page 773 with the following indexes on Table 129 on page 772:
Index 3 (EMPNO,CITY,STATE)
Index 4 (EMPNO,DEPTNO,SEX)
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are correlated, the index access is not improved as much as estimated by the screening predicates and therefore Index 4 might be a better choice. (Note that index screening also occurs for indexes with matching columns greater than zero.) Multiple table joins: In Query 2, Table 131 is added to the original query (see Query 1 on page 773) to show the impact of column correlation on join queries.
Table 131. Data from the DEPTINFO table
CITY         STATE  MANAGER  DEPT  DEPTNAME
Fresno       CA     Smith    J123  ADMIN
Los Angeles  CA     Jones    A345  LEGAL
Query 2
SELECT ... FROM CREWINFO T1, DEPTINFO T2
  WHERE T1.CITY = 'FRESNO' AND T1.STATE = 'CA'    (PREDICATE 1)
    AND T1.DEPTNO = T2.DEPT
    AND T2.DEPTNAME = 'LEGAL';
The order in which tables are accessed in a join statement affects performance. The estimated combined filtering of Predicate1 is lower than its actual filtering, so table CREWINFO might look better as the first table accessed than it should. Also, because of the smaller estimated size for table CREWINFO, a nested loop join might be chosen for the join method. But if many rows are selected from table CREWINFO because Predicate1 does not filter as many rows as estimated, then another join method or join sequence might be better.
The last two techniques are discussed in Special techniques to influence access path selection on page 794. The RUNSTATS utility collects the statistics DB2 needs to make proper choices about queries. With RUNSTATS, you can collect statistics on the concatenated key columns of an index and the number of distinct values for those concatenated columns. This gives DB2 accurate information to calculate the filter factor for the query. Example: RUNSTATS collects statistics that benefit queries like this:
SELECT * FROM T1 WHERE C1 = 'a' AND C2 = 'b' AND C3 = 'c' ;
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.
See Using RUNSTATS to keep access path statistics current on page 657 for information on using RUNSTATS to influence access path selection. See Updating catalog statistics on page 804 for information on updating catalog statistics manually.
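A sketch of such a RUNSTATS invocation, assuming T1 resides in table space DB1.TS1 and the index on C1, C2, C3, C4, C5 is named T1IX (both names are illustrative):

RUNSTATS TABLESPACE DB1.TS1
  INDEX(T1IX KEYCARD)

The KEYCARD option collects cardinality statistics for the intermediate sets of concatenated key columns (C1,C2 and C1,C2,C3, and so on, in addition to the first-key and full-key cardinalities), which gives DB2 the information it needs to calculate an accurate filter factor for the query above.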
The outer join operation gives you these result table rows:
v The rows with matching values of C1 in tables T1 and T2 (the inner join result)
v The rows from T1 where C1 has no corresponding value in T2
v The rows from T2 where C1 has no corresponding value in T1
However, when you apply the predicate, you remove all rows in the result table that came from T2 where C1 has no corresponding value in T1. DB2 transforms the full join into a left join, which is more efficient:
SELECT * FROM T1 X LEFT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2 > 12;
Example: The predicate, X.C2>12, filters out all null values that result from the right join:
SELECT * FROM T1 X RIGHT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
The predicate that follows a join operation must have the following characteristics before DB2 transforms an outer join into a simpler outer join or into an inner join:
v The predicate is a Boolean term predicate.
v The predicate is false if one table in the join operation supplies a null value for all of its columns.
These predicates are examples of predicates that can cause DB2 to simplify join operations:
T1.C1 > 10
T1.C1 IS NOT NULL
T1.C1 > 10 OR T1.C2 > 15
T1.C1 > T2.C1
T1.C1 IN (1,2,4)
T1.C1 LIKE 'ABC%'
T1.C1 BETWEEN 10 AND 100
12 BETWEEN T1.C1 AND 100
Example: This example shows how DB2 can simplify a join operation because the query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y FULL JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which column values that come from T1 or T2 are null, DB2 can replace the full join with a more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y LEFT JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
In one case, DB2 transforms a full outer join into a left join when you cannot write code to do it. This is the case where a view specifies a full outer join, but a subsequent query on that view requires only a left outer join. Example: Consider this view:
CREATE VIEW V1 (C1,T1C2,T2C2) AS
  SELECT COALESCE(T1.C1, T2.C1), T1.C2, T2.C2
  FROM T1 FULL JOIN T2
  ON T1.C1=T2.C1;
This view contains rows for which values of C2 that come from T1 are null. However, if you execute the following query, you eliminate the rows with null values for C2 that come from T1:
SELECT * FROM V1 WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate. DB2 can execute this query as if the view V1 was generated with a left outer join so that the query runs more efficiently.
v Evaluation of the query with the generated predicate results in different CCSID conversion from evaluation of the query without the predicate. See Chapter 4 of DB2 SQL Reference for information on CCSID conversion.
When a predicate meets the transitive closure conditions, DB2 generates a new predicate, whether or not it already exists in the WHERE clause. The generated predicates have one of the following formats:
v COL op value
  op is =, <>, >, >=, <, or <=. value is a constant, host variable, or special register.
v COL (NOT) BETWEEN value1 AND value2
v COL1=COL2 (for single-table or inner join queries only)
Example of transitive closure for an inner join: Suppose that you have written this query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10;
DB2 generates an additional predicate to produce this query, which is more efficient:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10 AND T2.C1>10;
Example of transitive closure for an outer join: Suppose that you have written this outer join query:
SELECT * FROM (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X LEFT JOIN (SELECT T2.C1 FROM T2) Y ON X.C1 = Y.C1;
The before join predicate, T1.C1>10, meets the conditions for transitive closure, so DB2 generates a query that has the same result as this more-efficient query:
SELECT * FROM (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X LEFT JOIN (SELECT T2.C1 FROM T2 WHERE T2.C1>10) Y ON X.C1 = Y.C1;
Predicate redundancy: A predicate is redundant if evaluation of other predicates in the query already determines the result that the predicate provides. You can specify redundant predicates or DB2 can generate them. DB2 does not determine that any of your query predicates are redundant. All predicates that you code are evaluated at execution time regardless of whether they are redundant. If DB2 generates a redundant predicate to help select access paths, that predicate is ignored at execution.
Adding extra predicates: DB2 performs predicate transitive closure only on equal and range predicates. However, you can help DB2 to choose a better access path by adding transitive closure predicates for other types of operators, such as IN or LIKE. For example, consider the following SELECT statement:

SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
    AND T1.C1 LIKE 'A%';
If T1.C1=T2.C1 is true, and T1.C1 LIKE 'A%' is true, then T2.C1 LIKE 'A%' must also be true. Therefore, you can give DB2 extra information for evaluating the query by adding T2.C1 LIKE 'A%':
SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
    AND T1.C1 LIKE 'A%'
    AND T2.C1 LIKE 'A%';
REOPT(ONCE)
REOPT(NONE)
DB2 determines the access path at bind time, and does not change the access path at run time.
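A sketch of specifying these options when you bind or rebind (the plan and package names are illustrative):

REBIND PLAN(PLANA) REOPT(ALWAYS)
REBIND PACKAGE(COLL1.PKGA) REOPT(NONE)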
If you specify the bind option VALIDATE(RUN), and a statement in the plan or package is not bound successfully, that statement is incrementally bound at run time. If you also specify the bind option REOPT(ALWAYS), DB2 reoptimizes the access path during the incremental bind. Example: To determine which plans and packages have statements that will be incrementally bound, execute the following SELECT statements:
SELECT DISTINCT NAME
  FROM SYSIBM.SYSSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';

SELECT DISTINCT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';
FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS IN ('J')
  ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
If you specify the bind option VALIDATE(RUN), and a statement in the plan or package is not bound successfully, that statement is incrementally bound at run time. Example: To determine which plans and packages have statements that will be incrementally bound, execute the following SELECT statements:
SELECT DISTINCT NAME
  FROM SYSIBM.SYSSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';

SELECT DISTINCT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';
Assumptions: Because the column SEX has only two different values, 'M' and 'F', the value COLCARDF for SEX is 2. If the numbers of male and female employees
are not equal, the actual filter factor is larger or smaller than the default of 1/2, depending on whether :HV1 is set to 'M' or 'F'. Recommendation: One of these two actions can improve the access path:
v Bind the package or plan that contains the query with the REOPT(ALWAYS) bind option. This action causes DB2 to reoptimize the query at run time, using the input values you provide. You might also consider binding the package or plan with the REOPT(ONCE) bind option.
v Write predicates to influence the DB2 selection of an access path, based on your knowledge of actual filter factors. For example, you can break the query into three different queries, two of which use constants. DB2 can then determine the exact filter factor for most cases when it binds the plan.
SELECT (HV1);
  WHEN ('M')
    DO;
      EXEC SQL SELECT * FROM DSN8810.EMP
                 WHERE SEX = 'M';
    END;
  WHEN ('F')
    DO;
      EXEC SQL SELECT * FROM DSN8810.EMP
                 WHERE SEX = 'F';
    END;
  OTHERWISE
    DO;
      EXEC SQL SELECT * FROM DSN8810.EMP
                 WHERE SEX = :HV1;
    END;
END;
Example 2: Known ranges Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that: v The application always provides a narrow range on C1 and a wide range on C2. v The desired access path is through index T1X1. Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 3: Variable ranges Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that the application provides both narrow and wide ranges on C1 and C2. Hence, default filter factors do not allow DB2 to choose the best access path in all cases. For example, a small range on C1 favors index T1X1 on C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1 and C2 favor a table space scan. Recommendation: If DB2 does not choose the best access path, try either of the following changes to your application: v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the statement. With access to the actual range values, DB2 can estimate the actual filter factors for the query. Preparing the statement each time it is executed requires an extra step, but it can be worthwhile if the query accesses a large amount of data. v Include some simple logic to check the ranges of C1 and C2, and then execute one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1
  WHERE C1 BETWEEN :HV1 AND :HV2
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);

SELECT * FROM T1
  WHERE C2 BETWEEN :HV3 AND :HV4
    AND (C1 BETWEEN :HV1 AND :HV2 OR 0=1);

SELECT * FROM T1
  WHERE (C1 BETWEEN :HV1 AND :HV2 OR 0=1)
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 4: ORDER BY Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 ORDER BY C2;
In this example, DB2 could choose one of the following actions:
v Scan index T1X1 and then sort the results by column C2
v Scan the table space in which T1 resides and then sort the results by column C2
v Scan index T1X2 and then apply the predicate to each row of data, thereby avoiding the sort
Which choice is best depends on the following factors:
v The number of rows that satisfy the range predicate
v The cluster ratio of the indexes
If the actual number of rows that satisfy the range predicate is significantly different from the estimate, DB2 might not choose the best access path.
Assumptions: You disagree with the DB2 choice.
Recommendation: In your application, use a dynamic SQL statement and embed the range of C1 in the statement. That allows DB2 to use the actual filter factor rather than the default, but requires extra processing for the PREPARE statement.
Example 5: A join operation
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
Assumptions: The actual filter factors on table A are much larger than the default factors. Hence, DB2 underestimates the number of rows selected from table A and wrongly chooses that as the first table in the join. Recommendations: You can:
v Reduce the estimated size of table A by adding predicates
v Disfavor any index on the join column by making the join predicate on table A nonindexable
Example: The following query illustrates the second of those choices.
SELECT * FROM T1 A, T1 B, T1 C
  WHERE (A.C1 = B.C1 OR 0=1)
    AND A.C2 = C.C2
    AND A.C2 BETWEEN :HV1 AND :HV2
    AND A.C3 BETWEEN :HV3 AND :HV4
    AND A.C4 < :HV5
    AND B.C2 BETWEEN :HV6 AND :HV7
    AND B.C3 < :HV8
    AND C.C2 < :HV9;
The result of making the join predicate between A and B a nonindexable predicate (which cannot be used in single index access) disfavors the use of the index on column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might lead DB2 to change the access type of table A or B, thereby influencing the join sequence of the other tables.
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query. Any predicate that contains a correlated subquery is a stage 2 predicate unless it is transformed to a join.
Example: In the following query, the correlation name, X, illustrates the subquery's reference to the outer query block.
SELECT * FROM DSN8810.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8810.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');
What DB2 does: A correlated subquery is evaluated for each qualified row of the outer query that is referred to. In executing the example, DB2:
1. Reads a row from table EMP where JOB='DESIGNER'.
2. Searches for the value of WORKDEPT from that row, in a table stored in memory. The in-memory table saves executions of the subquery. If the subquery has already been executed with the value of WORKDEPT, the result of the subquery is in the table and DB2 does not execute it again for the current row. Instead, DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That requires searching the PROJ table to check whether there is any project, where MAJPROJ is 'MA2100', for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
DB2 repeats this whole process for each qualified row of the EMP table.
Notes on the in-memory table: The in-memory table is applicable if the operator of the predicate that contains the subquery is one of the following operators: <, <=, >, >=, =, <>, EXISTS, NOT EXISTS
The table is not used, however, if:
v There are more than 16 correlated columns in the subquery
v The sum of the lengths of the correlated columns is more than 256 bytes
v There is a unique index on a subset of the correlated columns of a table from the outer query
The in-memory table is a wrap-around table and does not guarantee saving the results of all possible duplicated executions of the subquery.
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries. Example:
SELECT * FROM DSN8810.EMP WHERE JOB = 'DESIGNER' AND WORKDEPT IN (SELECT DEPTNO FROM DSN8810.PROJ WHERE MAJPROJ = 'MA2100');
What DB2 does: A noncorrelated subquery is executed once when the cursor is opened for the query. What DB2 does to process it depends on whether it returns a single value or more than one value. The query in the preceding example can return more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery is required to return 1 or 0 rows. The simple operator can be one of the following operators: <, <=, >, >=, =, <>, NOT <, NOT <=, NOT >, NOT >= The following noncorrelated subquery returns a single value:
SELECT *
  FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8810.PROJ);
What DB2 does: When the cursor is opened, the subquery executes. If it returns more than one row, DB2 issues an error. The predicate that contains the subquery is treated like a simple predicate with a constant specified, for example, WORKDEPT <= value.
Stage 1 and stage 2 processing: The rules for determining whether a predicate with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are generally the same as for the same predicate with a single variable.
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following: op ANY, op ALL, op SOME, IN, EXISTS, where op is any of the operators >, >=, <, <=, NOT <, NOT <=, NOT >, NOT >=.
What DB2 does: If possible, DB2 reduces a subquery that returns more than one row to one that returns only a single row. That occurs when there is a range comparison along with ANY, ALL, or SOME. The following query is an example:
SELECT * FROM DSN8810.EMP WHERE JOB = 'DESIGNER' AND WORKDEPT <= ANY (SELECT DEPTNO FROM DSN8810.PROJ WHERE MAJPROJ = 'MA2100');
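Conceptually, the transformed statement behaves like the following sketch (not necessarily the exact internal rewrite that DB2 performs):

SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8810.PROJ
                       WHERE MAJPROJ = 'MA2100');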
DB2 calculates the maximum value for DEPTNO from table DSN8810.PROJ and removes the ANY keyword from the query. After this transformation, the subquery is treated like a single-value subquery. That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME
The resulting predicate is determined to be stage 1 or stage 2 by the same rules as for the same predicate with a single-valued subquery.
When a subquery is sorted: A noncorrelated subquery is sorted when the comparison operator is IN, NOT IN, = ANY, <> ANY, = ALL, or <> ALL. The sort enhances the predicate evaluation, reducing the amount of scanning on the
subquery result. When the value of the subquery becomes smaller than or equal to the expression on the left side, the scanning can be stopped and the predicate can be determined to be true or false. When the subquery result is a character data type and the left side of the predicate is a datetime data type, then the result is placed in a work file without sorting. For some noncorrelated subqueries that use IN, NOT IN, = ANY, <> ANY, = ALL, or <> ALL comparison operators, DB2 can more accurately pinpoint an entry point into the work file, thus further reducing the amount of scanning that is done. Results from EXPLAIN: For information about the result in a plan table for a subquery that is sorted, see When are aggregate functions evaluated? (COLUMN_FN_EVAL) on page 950.
v The SELECT clause of the subquery does not contain a user-defined function with an external action or a user-defined function that modifies data.
v The subquery predicate is a Boolean term predicate.
v The predicates in the subquery that provide correlation are stage 1 predicates.
v The subquery does not contain nested subqueries.
v The subquery does not contain a self-referencing UPDATE or DELETE.
v For a SELECT statement, the query does not contain the FOR UPDATE OF clause.
v For an UPDATE or DELETE statement, the statement is a searched UPDATE or DELETE.
v For a SELECT statement, parallelism is not enabled.
For a statement with multiple subqueries, DB2 does the transformation only on the last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets the first set of conditions for transformation:
SELECT * FROM EMP WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE LOCATION IN ('SAN JOSE', 'SAN FRANCISCO') AND DIVISION = 'MARKETING');
If there is a department in the marketing division that has branches in both San Jose and San Francisco, the result of the SQL statement is not the same as if a join were done: the join makes each employee in this department appear twice, because the department matches once for the location San Jose and again for the location San Francisco, although it is the same department. Therefore, to transform a subquery into a join, the uniqueness of the subquery select list must be guaranteed. For this example, a unique index on any of the following sets of columns would guarantee uniqueness:
v (DEPTNO)
v (DIVISION, DEPTNO)
v (DEPTNO, DIVISION)
The resultant query is:
SELECT EMP.* FROM EMP, DEPT WHERE EMP.DEPTNO = DEPT.DEPTNO AND DEPT.LOCATION IN ('SAN JOSE', 'SAN FRANCISCO') AND DEPT.DIVISION = 'MARKETING';
Example: The following subquery can be transformed into a join because it meets the second set of conditions for transformation:
UPDATE T1 SET T1.C1 = 1 WHERE T1.C1 =ANY (SELECT T2.C1 FROM T2 WHERE T2.C2 = T1.C2);
Results from EXPLAIN: For information about the result in a plan table for a subquery that is transformed into a join operation, see Is a subquery transformed into a join? on page 950.
Subquery tuning
The following three queries all retrieve the same rows. All three retrieve data about all designers in departments that are responsible for projects that are part of major project MA2100. These three queries show that there are several ways to retrieve a desired result. Query A: A join of two tables
SELECT DSN8810.EMP.*
  FROM DSN8810.EMP, DSN8810.PROJ
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT = DEPTNO
    AND MAJPROJ = 'MA2100';
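Queries B and C correspond to the correlated and noncorrelated subquery forms shown earlier in this chapter:

Query B: A correlated subquery
SELECT * FROM DSN8810.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8810.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');

Query C: A noncorrelated subquery
SELECT * FROM DSN8810.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8810.PROJ
                       WHERE MAJPROJ = 'MA2100');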
If you need columns from both tables EMP and PROJ in the output, you must use a join. PROJ might contain duplicate values of DEPTNO in the subquery, so an equivalent join cannot be written.
In general, query A might be the one that performs best. However, if there is no index on DEPTNO in table PROJ, then query C might perform best. The IN-subquery predicate in query C is indexable. Therefore, if an index on WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a join cannot be used and there is an available index on DEPTNO in table PROJ, then query B might perform best.
When looking at a problem subquery, see if the query can be rewritten into another format, or see if there is an index that you can create to help improve the performance of the subquery. It is important to know the sequence of evaluation, for the different subquery predicates as well as for all other predicates in the query. If the subquery predicate is costly, perhaps another predicate could be evaluated before it so that the rows would be rejected before even evaluating the problem subquery predicate.
processing than non-scrollable cursors. If your applications require large result tables or you only need to move sequentially forward through the data, use non-scrollable cursors.
v Declare scrollable cursors as SENSITIVE only if you need to see the latest data. If you do not need to see updates that are made by other cursors or application processes, using a cursor that you declare as INSENSITIVE requires less processing by DB2. If you need to see only some of the latest updates, and you do not need to see the results of insert operations, declare scrollable cursors as SENSITIVE STATIC. See Chapter 5 of DB2 SQL Reference for information about which updates you can see with a scrollable cursor that is declared as SENSITIVE STATIC. If you need to see all of the latest updates and inserts, declare scrollable cursors as SENSITIVE DYNAMIC.
v To ensure maximum concurrency when you use a scrollable cursor for positioned update and delete operations, specify ISOLATION(CS) and CURRENTDATA(NO) when you bind packages and plans that contain updatable scrollable cursors. See Chapter 31, Improving concurrency, on page 813 for more details.
v Use the FETCH FIRST n ROWS ONLY clause with scrollable cursors when it is appropriate. In a distributed environment, when you need to retrieve a limited number of rows, FETCH FIRST n ROWS ONLY can improve your performance for distributed queries that use DRDA access by eliminating unneeded network traffic. See Part 4 of DB2 Application Programming and SQL Guide for more information. In a local environment, if you need to scroll through a limited subset of rows in a table, you can use FETCH FIRST n ROWS ONLY to make the result table smaller.
v In a distributed environment, if you do not need to use your scrollable cursors to modify data, do your cursor processing in a stored procedure. Using stored procedures can decrease the amount of network traffic that your application requires.
v In a TEMP database, create table spaces that are large enough for processing your scrollable cursors. DB2 uses declared temporary tables for processing the following types of scrollable cursors:
  - SENSITIVE STATIC SCROLL
  - INSENSITIVE SCROLL
  - ASENSITIVE SCROLL, if the cursor sensitivity is INSENSITIVE. A cursor that meets the criteria for a read-only cursor has an effective sensitivity of INSENSITIVE.
See the DECLARE CURSOR statement in DB2 SQL Reference for more information about cursor sensitivity. See DB2 Installation Guide for more information about calculating the appropriate size for declared temporary tables for cursors.
v Remember to commit changes often for the following reasons:
  - You frequently need to leave scrollable cursors open longer than non-scrollable cursors.
  - There is an increased chance of deadlocks with scrollable cursors because scrollable cursors allow rows to be accessed and updated in any order. Frequent commits can decrease the chances of deadlocks.
To prevent cursors from closing after commit operations, declare your scrollable cursors WITH HOLD.
v While sensitive static scrollable cursors are open against a table, DB2 disallows reuse of space in that table space to prevent the scrollable cursor from fetching newly inserted rows that were not in the original result set. Although this is normal, it can result in a seemingly false out-of-space indication. The problem can be more noticeable in a data sharing environment with transactions that access LOBs. Consider the following preventive measures:
  - Ensure that applications commit frequently
  - Close sensitive scrollable cursors when no longer needed
  - Remove the WITH HOLD parameter for the sensitive scrollable cursor, if possible
  - Isolate LOB table spaces in a dedicated buffer pool in the data sharing environment
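A minimal sketch that combines several of these recommendations (the cursor name, table, and row limit are illustrative):

EXEC SQL
  DECLARE C1 SENSITIVE STATIC SCROLL CURSOR WITH HOLD FOR
    SELECT CUSTNO, PURCH_AMT
      FROM Q1
      FETCH FIRST 100 ROWS ONLY;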
Now suppose that you want to execute the following query against table Q1:
SELECT CUSTNO, PURCH_AMT FROM Q1 WHERE STATE = 'CA';
Because the predicate is based only on values of a DPSI key (STATE), DB2 must examine all partitions to find the matching rows. Now suppose that you modify the query in the following way:
SELECT CUSTNO, PURCH_AMT FROM Q1 WHERE DATE BETWEEN '2002-01-01' AND '2002-01-31' AND STATE = 'CA';
Because the predicate is now based on values of a partitioning index key (DATE) and on values of a DPSI key (STATE), DB2 can eliminate the scanning of data partitions 2 and 3, which do not satisfy the query for the partitioning key. This can be determined at bind time because the columns of the predicate are compared to constants. Now suppose that you use host variables instead of constants in the same query:
SELECT CUSTNO, PURCH_AMT FROM Q1 WHERE DATE BETWEEN :hv1 AND :hv2 AND STATE = :hv3;
DB2 can use the predicate on the partitioning column to eliminate the scanning of unneeded partitions at run time. Writing queries to take advantage of limited partition scan is especially useful when a correlation exists between columns that are in a partitioning index and columns that are in a DPSI. For example, suppose that you create table Q2, with partitioning index DATE_IX and DPSI ORDERNO_IX:
CREATE TABLESPACE TS2 NUMPARTS 3;

CREATE TABLE Q2 (DATE DATE,
                 ORDERNO CHAR(8),
                 STATE CHAR(2),
                 PURCH_AMT DECIMAL(9,2))
  IN TS2
  PARTITION BY (DATE)
    (PARTITION 1 ENDING AT ('2000-12-31'),
     PARTITION 2 ENDING AT ('2001-12-31'),
     PARTITION 3 ENDING AT ('2002-12-31'));

CREATE INDEX DATE_IX ON Q2 (DATE) PARTITIONED CLUSTER;
CREATE INDEX ORDERNO_IX ON Q2 (ORDERNO) PARTITIONED;
Also suppose that the first 4 bytes of each ORDERNO column value represent the four-digit year in which the order is placed. This means that the DATE column and the ORDERNO column are correlated. To take advantage of limited partition scan, when you write a query that has the ORDERNO column in the predicate, also include the DATE column in the predicate. The partitioning index on DATE lets DB2 eliminate the scanning of partitions that are not needed to satisfy the query. For example:
SELECT ORDERNO, PURCH_AMT FROM Q2 WHERE ORDERNO BETWEEN '2002AAAA' AND '2002ZZZZ' AND DATE BETWEEN '2002-01-01' AND '2002-12-31';
time, TCB time, or number of getpage requests increases sharply without a corresponding increase in the SQL activity, then there could be a problem. You can use OMEGAMON Online Monitor to track events after your changes have been implemented, providing immediate feedback on the effects of your changes.
v Specify the bind option EXPLAIN. You can also use the EXPLAIN option when you bind or rebind a plan or package. Compare the new plan or package for the statement to the old one. If the new one has a table space scan or a nonmatching index space scan, but the old one did not, the problem is probably the statement. Investigate any changes in access path in the new plan or package; they could represent performance improvements or degradations.
If neither the accounting report ordered by PLANNAME or PACKAGE nor the EXPLAIN statement suggests corrective action, use the OMEGAMON SQL activity reports for additional information. For more information on using EXPLAIN, see Obtaining PLAN_TABLE information from EXPLAIN on page 932.
Interaction between OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY: In general, if you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS in a SELECT statement, DB2 optimizes the query as if you had specified OPTIMIZE FOR n ROWS. | | | | | | | | When both the FETCH FIRST n ROWS ONLY clause and the OPTIMIZE FOR n ROWS clause are specified, the value for the OPTIMIZE FOR n ROWS clause is used for access path selection. Example: Suppose that you submit the following SELECT statement:
SELECT * FROM EMP FETCH FIRST 5 ROWS ONLY OPTIMIZE FOR 20 ROWS;
The OPTIMIZE FOR value of 20 rows is used for access path selection.
appropriate for batch environments. However, for interactive SQL applications, such as SPUFI, it is common for a query to define a very large potential result set but retrieve only the first few rows. The access path that DB2 chooses might not be optimal for those interactive applications. This section discusses the use of OPTIMIZE FOR n ROWS to affect the performance of interactive SQL applications. Unless otherwise noted, this information pertains to local applications. For more information on using OPTIMIZE FOR n ROWS in distributed applications, see Part 4 of DB2 Application Programming and SQL Guide.
What OPTIMIZE FOR n ROWS does: The OPTIMIZE FOR n ROWS clause lets an application declare its intent to do either of these things:
v Retrieve only a subset of the result set
v Give priority to the retrieval of the first few rows
DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that minimize the response time for retrieving the first few rows. For distributed queries, the value of n determines the number of rows that DB2 sends to the client on each DRDA network transmission. See Part 4 of DB2 Application Programming and SQL Guide for more information on using OPTIMIZE FOR n ROWS in the distributed environment.
Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to select an access path that returns the first qualifying row quickly. This means that whenever possible, DB2 avoids any access path that involves a sort. If you specify a value for n that is anything but 1, DB2 chooses an access path based on cost, and you won't necessarily avoid sorts.
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and Reference.
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE FOR n ROWS clause does not prevent you from retrieving all the qualifying rows. However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all the qualifying rows might be significantly greater than if DB2 had optimized for the entire result set.
When OPTIMIZE FOR n ROWS is effective: OPTIMIZE FOR n ROWS is effective only on queries that can be performed incrementally. If the query causes DB2 to gather the whole result set before returning the first row, DB2 ignores the OPTIMIZE FOR n ROWS clause, as in the following situations:
v The query uses SELECT DISTINCT or a set function with DISTINCT, such as COUNT(DISTINCT C1).
v Either GROUP BY or ORDER BY is used, and no index can give the necessary ordering.
v An aggregate function is used with no GROUP BY clause.
v The query uses UNION.
Example: Suppose that you query the employee table regularly to determine the employees with the highest salaries. You might use a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY FROM EMP ORDER BY SALARY DESC;
An index is defined on column EMPNO, so employee records are ordered by EMPNO. If you have also defined a descending index on column SALARY, that index is likely to be very poorly clustered. To avoid many random, synchronous I/O operations, DB2 would most likely use a table space scan, then sort the rows on SALARY. This technique can cause a delay before the first qualifying rows can be returned to the application. If you add the OPTIMIZE FOR n ROWS clause to the statement, DB2 will probably use the SALARY index directly because you have indicated that you expect to retrieve the salaries of only the 20 most highly paid employees. Example: The following statement uses that strategy to avoid a costly sort operation:
SELECT LASTNAME,FIRSTNAME,EMPNO,SALARY
  FROM EMP
  ORDER BY SALARY DESC
  OPTIMIZE FOR 20 ROWS;
Effects of using OPTIMIZE FOR n ROWS:
v The join method could change. Nested loop join is the most likely choice, because it has low overhead cost and appears to be more efficient if you want to retrieve only one row.
v An index that matches the ORDER BY clause is more likely to be picked. This is because no sort would be needed for the ORDER BY.
v List prefetch is less likely to be picked.
v Sequential prefetch is less likely to be requested by DB2 because it infers that you only want to see a small number of rows.
v In a join query, the table with the columns in the ORDER BY clause is likely to be picked as the outer table if there is an index on that outer table that gives the ordering needed for the ORDER BY clause.

Recommendation: For a local query, specify OPTIMIZE FOR n ROWS only in applications that frequently fetch only a small percentage of the total rows in a query result set. For example, an application might read only enough rows to fill the end user's terminal screen. In cases like this, the application might read the remaining part of the query result set only rarely. For an application like this, OPTIMIZE FOR n ROWS can result in better performance by causing DB2 to favor SQL access paths that deliver the first n rows as fast as possible.

When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n can help limit the number of rows that flow across the network on any given transmission. You can improve the performance for receiving a large result set through a remote query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you specify a large value, DB2 attempts to send the n rows in multiple transmissions. For better performance when retrieving a large result set, in addition to specifying OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute other SQL statements until the entire result set for the query is processed. If
retrieval of data for several queries overlaps, DB2 might need to buffer result set data in the DDF address space. See Block fetching result sets on page 1009 for more information. For local or remote queries, to influence the access path most, specify OPTIMIZE FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
Using the CARDINALITY clause to improve the performance of queries with user-defined table function references
The cardinality of a user-defined table function is the number of rows that are returned when the function is invoked. DB2 uses this number to estimate the cost of executing a query that invokes a user-defined table function. The cost of executing a query is one of the factors that DB2 uses when it calculates the access
path. Therefore, if you give DB2 an accurate estimate of a user-defined table function's cardinality, DB2 can better calculate the best access path. You can specify a cardinality value for a user-defined table function by using the CARDINALITY clause of the SQL CREATE FUNCTION or ALTER FUNCTION statement. However, this value applies to all invocations of the function, whereas a user-defined table function might return different numbers of rows, depending on the query in which it is referenced. To give DB2 a better estimate of the cardinality of a user-defined table function for a particular query, you can use the CARDINALITY or CARDINALITY MULTIPLIER clause in that query. DB2 uses those clauses at bind time when it calculates the access cost of the user-defined table function. Using these clauses is recommended only for programs that run on DB2 UDB for z/OS because the clauses are not supported on earlier versions of DB2.

Example of using the CARDINALITY clause to specify the cardinality of a user-defined table function invocation: Suppose that when you created user-defined table function TUDF1, you set a cardinality value of 5, but in the following query, you expect TUDF1 to return 30 rows:
SELECT * FROM TABLE(TUDF1(3)) AS X;
Add the CARDINALITY 30 clause to tell DB2 that, for this query, TUDF1 should return 30 rows:
SELECT * FROM TABLE(TUDF1(3) CARDINALITY 30) AS X;
Example of using the CARDINALITY MULTIPLIER clause to specify the cardinality of a user-defined table function invocation: Suppose that when you created user-defined table function TUDF2, you set a cardinality value of 5, but in the following query, you expect TUDF2 to return 30 times that many rows:
SELECT * FROM TABLE(TUDF2(10)) AS X;
Add the CARDINALITY MULTIPLIER 30 clause to tell DB2 that, for this query, TUDF2 should return 5*30, or 150, rows:
SELECT * FROM TABLE(TUDF2(10) CARDINALITY MULTIPLIER 30) AS X;
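For reference, a sketch of how such a cardinality value might have been set when the function was created; the column list, external name, and other options shown here are illustrative assumptions, not taken from this book:

CREATE FUNCTION TUDF2(INTEGER)
  RETURNS TABLE (C1 INTEGER, C2 VARCHAR(20))
  EXTERNAL NAME 'TUDF2'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DISALLOW PARALLEL
  CARDINALITY 5;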
CREATE TABLE PART_HISTORY (
  PART_TYPE    CHAR(2),      IDENTIFIES THE PART TYPE
  PART_SUFFIX  CHAR(10),     IDENTIFIES THE PART
  W_NOW        INTEGER,      TELLS WHERE THE PART IS
  W_FROM       INTEGER,      TELLS WHERE THE PART CAME FROM
  DEVIATIONS   INTEGER,      TELLS IF ANYTHING SPECIAL WITH THIS PART
  COMMENTS     CHAR(254),
  DESCRIPTION  CHAR(254),
  DATE1        DATE,
  DATE2        DATE,
  DATE3        DATE);

CREATE UNIQUE INDEX IX1 ON PART_HISTORY (PART_TYPE,PART_SUFFIX,W_FROM,W_NOW);
CREATE UNIQUE INDEX IX2 ON PART_HISTORY (W_FROM,W_NOW,DATE1);

Table statistics               Index statistics     IX1       IX2
  CARDF   100,000                FIRSTKEYCARDF      1000      50
  NPAGES  10,000                 FULLKEYCARDF       100,000   100,000
                                 CLUSTERRATIO       99%       99%
                                 NLEAF              3000      2000
                                 NLEVELS            3         3

column     cardinality  HIGH2KEY  LOW2KEY
Part_type  1000         'ZZ'      'AA'
w_now      50           1000      1
w_from     50           1000      1

Q1:
SELECT * FROM PART_HISTORY
  WHERE PART_TYPE = 'BB'   P1
    AND W_FROM = 3         P2
    AND W_NOW = 3          P3
----SELECT ALL PARTS THAT ARE 'BB' TYPES THAT WERE MADE IN CENTER 3 AND ARE STILL IN CENTER 3

Filter factor of these predicates:
  P1 = 1/1000 = .001
  P2 = 1/50   = .02
  P3 = 1/50   = .02

ESTIMATED VALUES                            WHAT REALLY HAPPENS
index  matchcols  filter factor  data rows    index  matchcols  filter factor  data rows
ix2    2          .02*.02        40           ix2    2          .02*.50        1000
ix1    1          .001           100          ix1    1          .001           100
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The problem is that 50% of all parts from center number 3 are still in Center 3; they have not moved. Assume that there are no statistics on the correlated columns in catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center number 3 are evenly distributed among the 50 centers. You can get the desired access path by changing the query. To discourage the use of IX2 for this particular query, you can change the third predicate to be nonindexable.
SELECT * FROM PART_HISTORY
  WHERE PART_TYPE = 'BB'
    AND W_FROM = 3
    AND (W_NOW = 3 + 0)
Now index IX2 is not picked, because it has only one matching column. The preferred index, IX1, is picked. The third predicate is a nonindexable predicate, so an index is not used for the compound predicate. You can make a predicate nonindexable in many ways. The recommended way is to add 0 to a predicate that evaluates to a numeric value or to concatenate an empty string to a predicate that evaluates to a character value.
Indexable               Nonindexable
T1.C3=T2.C4             (T1.C3=T2.C4 CONCAT '')
T1.C1=5                 T1.C1=5+0
These techniques do not affect the result of the query and cause only a small amount of overhead. The preferred technique for improving the access path when a table has correlated columns is to generate catalog statistics on the correlated columns. You can do that either by running RUNSTATS or by updating catalog table SYSCOLDIST manually.
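For example, a sketch of a RUNSTATS invocation that gathers statistics on the correlated columns of the PART_HISTORY example; the database and table space names are hypothetical:

RUNSTATS TABLESPACE DSNDB04.PARTHIST
  TABLE(USRT001.PART_HISTORY)
  COLGROUP(W_FROM, W_NOW)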
v As the correlation of columns in the fact table changes, reevaluate the index to determine if columns in the index should be reordered.
v Define indexes on dimension tables to improve access to those tables.
v When you have executed a number of queries and have more information about the way that the data is used, follow these recommendations:
  Put more selective columns at the beginning of the index.
  If a number of queries do not reference a dimension, put the column that corresponds to that dimension at the end of the index.
When a fact table has more than one multi-column index and none of those indexes contains all key columns, DB2 evaluates all of the indexes and uses the index that best exploits star join.
D1...Dn
  Dimension tables.
C1...Cn
  Key columns in the fact table. C1 is joined to dimension D1, C2 is joined to dimension D2, and so on.
cardD1...cardDn
  Cardinality of columns C1...Cn in dimension tables D1...Dn.
cardC1...cardCn
  Cardinality of key columns C1...Cn in fact table F.
cardCij
  Cardinality of pairs of column values from key columns Ci and Cj in fact table F.
cardCijk
  Cardinality of triplets of column values from key columns Ci, Cj, and Ck in fact table F.
Density
  A measure of the correlation of key columns in the fact table. The density is calculated as follows:
  For a single column: cardCi/cardDi
  For pairs of columns: cardCij/(cardDi*cardDj)
  For triplets of columns: cardCijk/(cardDi*cardDj*cardDk)
S
  The current set of columns whose order in the index is not yet determined.
S-{Cm}
  The current set of columns, excluding column Cm.

Follow these steps to derive a fact table index for a star-join query that joins n columns of fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n columns of fact table F that correspond to dimension tables. That is, S={C1,...,Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns with the lowest density. That is, find column Cm in S, such that for every Ci in S, density(S-{Cm})<density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 n-2 times. The remaining column after iteration n-2 is the first column of the index.

Example of determining column order for a fact table index: Suppose that a star schema has three dimension tables with the following cardinalities:
cardD1=2000 cardD2=500 cardD3=100
Now suppose that the cardinalities of single columns and pairs of columns in the fact table are:
cardC1=2000 cardC2=433 cardC3=100 cardC12=625000 cardC13=196000 cardC23=994
Determine the best multi-column index for this star schema.
Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2) = 625000/(2000*500) = 0.625
density(C1,C3) = 196000/(2000*100) = 0.98
density(C2,C3) = 994/(500*100) = 0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3). Determine which column of the fact table is not in that pair. That column is C1.
Step 3: Make column C1 the third column of the index.
Step 4: Repeat steps 1 through 3 to determine the second and first columns of the index key:
density(C2) = 433/500 = 0.866
density(C3) = 100/100 = 1.0
The column with the lowest density is C2. Therefore, C3 is the second column of the index. The remaining column, C2, is the first column of the index. That is, the best order for the multi-column index is C2, C3, C1.
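A sketch of the resulting fact table index; the index name is hypothetical:

CREATE INDEX XFACT1 ON F (C2, C3, C1);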
think that the join sequence is inefficient, try rearranging the order of the tables and views in the FROM clause to match a join sequence that might perform better.
This query has a problem with data correlation. DB2 does not know that 50% of the parts that were made in Center 3 are still in Center 3. The problem was circumvented by making a predicate nonindexable. But suppose that hundreds of users are writing queries similar to that query. Having all users change their queries would be impossible. In this type of situation, the best solution is to change the catalog statistics. For the query in Figure 82 on page 800, you can update the catalog statistics in one of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns W_FROM and W_NOW. This is the preferred method. See Gathering monitor statistics and update statistics on page 916 and Part 2 of DB2 Utility Guide and Reference for more information.
v Update the catalog statistics manually.

Updating the catalog to adjust for correlated columns: One catalog table that you can update is SYSIBM.SYSCOLDIST, which gives information about a column or set of columns in a table. Assume that because columns W_NOW and W_FROM are correlated, only 100 distinct values exist for the combination of the two columns, rather than 2500 (50 for W_FROM * 50 for W_NOW). Insert a row like this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
  VALUES(0, -1, 'N',
   'USRT001', 'PART_HISTORY', 'W_FROM', ' ',
   'C', 100, X'00040003', 2);
You can also use the RUNSTATS utility to put this information in SYSCOLDIST. See DB2 Utility Guide and Reference for more information. You tell DB2 about the frequency of a certain combination of column values by updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1% of the rows in PART_HISTORY contain the values 3 for W_FROM and 3 for W_NOW by inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
  VALUES(0, .0100, '1996-12-01-12.00.00.000000', 'N',
   'USRT001', 'PART_HISTORY', 'W_FROM', X'00800000030080000003',
   'F', -1, X'00040003', 2);
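A RUNSTATS sketch that gathers equivalent frequency statistics on the column group, rather than updating the catalog manually; the database and table space names are hypothetical:

RUNSTATS TABLESPACE DSNDB04.PARTHIST
  TABLE(USRT001.PART_HISTORY)
  COLGROUP(W_FROM, W_NOW) FREQVAL COUNT 10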
Updating the catalog for joins with table functions: Updating catalog statistics might cause extreme performance problems if the statistics are not updated correctly. Monitor performance, and be prepared to reset the statistics to their original values if performance problems arise.
Tables with default statistics for NPAGES (NPAGES = -1) are presumed to have 501 pages. For such tables, DB2 will favor matching index access only when NPGTHRSH is set above 501.
Recommendation: Before you use NPGTHRSH, be aware that in some cases, matching index access can be more costly than a table space scan or nonmatching index access. Specify a small value for NPGTHRSH (10 or less), which limits the number of tables for which DB2 favors matching index access. If you need to use matching index access only for specific tables, create or alter those tables with the VOLATILE parameter, rather than using the system-wide NPGTHRSH parameter. See Favoring index access on page 798.
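For example, a sketch that marks the PART_HISTORY table from the earlier example as volatile:

ALTER TABLE USRT001.PART_HISTORY VOLATILE;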
2. For best performance, create an ascending index on the following columns of PLAN_TABLE:
v QUERYNO
v APPLNAME
v PROGNAME
v VERSION
v COLLID
v OPTHINT
The DB2 sample library, in member DSNTESC, contains an appropriate CREATE INDEX statement that you can modify.
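One possible form of that index; the index name is hypothetical, and the DSNTESC member remains the authoritative version:

CREATE INDEX PLAN_TABLE_HINT_IX ON PLAN_TABLE
  (QUERYNO, APPLNAME, PROGNAME, VERSION, COLLID, OPTHINT);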
For more information about reasons to use the QUERYNO clause, see Reasons to use the QUERYNO clause on page 810.
2. Make the PLAN_TABLE rows for that query (QUERYNO=100) into a hint by updating the OPTHINT column with the name you want to call the hint. In this case, the name is OLDPATH:
UPDATE PLAN_TABLE
  SET OPTHINT = 'OLDPATH'
  WHERE QUERYNO = 100
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2';
3. Tell DB2 to use the hint, and indicate in the PLAN_TABLE that DB2 used the hint.
v For dynamic SQL statements in the program, follow these steps:
a. Execute the SET CURRENT OPTIMIZATION HINT statement in the program to tell DB2 to use OLDPATH. For example:
SET CURRENT OPTIMIZATION HINT = 'OLDPATH';
If you do not explicitly set the CURRENT OPTIMIZATION HINT special register, the value that you specify for the bind option OPTHINT is used. If you execute the SET CURRENT OPTIMIZATION HINT statement statically, rebind the plan or package to pick up the SET CURRENT OPTIMIZATION HINT statement.
b. Execute the EXPLAIN statement on the SQL statements for which you have instructed DB2 to use OLDPATH. This step adds rows to the PLAN_TABLE for those statements. The rows contain a value of OLDPATH in the HINT_USED column.
If DB2 uses all of the hints that you provided, it returns SQLCODE +394 from the PREPARE of the EXPLAIN statement and from the PREPARE of SQL statements that use the hints. If any of your hints are invalid, or if any duplicate hints were found, DB2 issues SQLCODE +395. If DB2 does not find an optimization hint, DB2 returns another SQLCODE. Usually, this SQLCODE is 0.
If the dynamic statement cache is enabled, DB2 includes the value of the CURRENT OPTIMIZATION HINT special register when looking for a match in the dynamic statement cache. If the execution of the statement results in a cache hit, the query does not go through prepare and DB2 uses the cached plan. If no match is found for the statement in the dynamic statement cache, the query goes through prepare, and DB2 considers the optimization hints.
v For static SQL statements in the program, rebind the plan or package that contains the statements. Specify bind options EXPLAIN(YES) and OPTHINT('OLDPATH') to add rows for those statements in the PLAN_TABLE that contain a value of OLDPATH in the HINT_USED column. If DB2 uses the hint you provided, it returns SQLCODE +394 from the rebind. If your hints are invalid, DB2 issues SQLCODE +395. If DB2 does not find an optimization hint, it returns another SQLCODE. Usually, this SQLCODE is 0.
4. Select from PLAN_TABLE to see what was used:
SELECT *
  FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY TIMESTAMP, QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
The PLAN_TABLE in Table 132 shows the OLDPATH hint, indicated by the value in the OPTHINT column. It also shows that DB2 used that hint, indicated by OLDPATH in the HINT_USED column.
Table 132. PLAN_TABLE that shows that the OLDPATH optimization hint is used
QUERYNO  METHOD  TNAME       OPTHINT  HINT_USED
100      0       EMP         OLDPATH
100      4       EMPPROJACT  OLDPATH
100      3                   OLDPATH
100      0       EMP                  OLDPATH
100      4       EMPPROJACT           OLDPATH
100      3                            OLDPATH
2. Make the PLAN_TABLE rows into a hint by updating the OPTHINT column with the name you want to call the hint. In this case, the name is NOHYB:
UPDATE PLAN_TABLE
  SET OPTHINT = 'NOHYB'
  WHERE QUERYNO = 200
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2';
3. Change the access path so that merge scan join is used rather than hybrid join:
UPDATE PLAN_TABLE
  SET METHOD = 2
  WHERE QUERYNO = 200
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2'
    AND OPTHINT = 'NOHYB'
    AND METHOD = 4;
4. Tell DB2 to look for the NOHYB hint for this query:
SET CURRENT OPTIMIZATION HINT = 'NOHYB';
The PLAN_TABLE in Table 133 shows the NOHYB hint, indicated by the value in the OPTHINT column. It also shows that DB2 used that hint, indicated by NOHYB in the HINT_USED column.
Table 133. PLAN_TABLE that shows that the NOHYB optimization hint is used
QUERYNO  METHOD  TNAME       OPTHINT  HINT_USED
200      0       EMP         NOHYB
200      2       EMPPROJACT  NOHYB
200      3                   NOHYB
200      0       EMP                  NOHYB
200      2       EMPPROJACT           NOHYB
200      3                            NOHYB
Table 134. PLAN_TABLE columns that DB2 validates

METHOD
  Must be 0, 1, 2, 3, or 4. Any other value invalidates the hints. See Interpreting access to two or more tables (join) on page 959 for more information about join methods.
CREATOR and TNAME
  Must be specified and must name a table, materialized view, or materialized nested table expression. Blank if METHOD is 3. If a table is named that does not exist or is not involved in the query, then the hints are invalid.
TABNO
  Required only if CREATOR, TNAME, and CORRELATION_NAME do not uniquely identify the table. This situation might occur when the same table is used in multiple views (with the same CORRELATION_NAME). This field is ignored if it is not needed.
ACCESSTYPE
  Must contain I, I1, N, M, R, RW, T, or V. Any other value invalidates the hints. Values of I, I1, and N all mean single index access. DB2 determines which of the three values to use based on the index specified in ACCESSNAME. M indicates multiple index access. DB2 uses only the first row in the authid.PLAN_TABLE for multiple index access (MIXOPSEQ=0). The choice of indexes, and the AND and OR operations involved, is determined by DB2. If multiple index access is not possible, then the hints are invalidated. See Is access through an index? (ACCESSTYPE is I, I1, N or MX) on page 944 and Is access through more than one index? (ACCESSTYPE=M) on page 944 for more information.
ACCESSCREATOR and ACCESSNAME
  Ignored if ACCESSTYPE is R or M. If ACCESSTYPE is I, I1, or N, then these fields must identify an index on the specified table. If the index does not exist, or if the index is defined on a different table, then the hints are invalid. Also, if the specified index cannot be used, then the hints are invalid.
SORTN_JOIN and SORTC_JOIN
  Must be Y, N or blank. Any other value invalidates the hints. This value determines if DB2 should sort the new (SORTN_JOIN) or composite (SORTC_JOIN) table. This value is ignored if the specified join method, join sequence, access type and access name dictate whether a sort of the new or composite tables is required. See Are sorts performed? on page 949 for more information.
PREFETCH
  Must be S, L or blank. Any other value invalidates the hints. This value determines whether DB2 should use sequential prefetch (S), list prefetch (L), or no prefetch (blank). (A blank does not prevent sequential detection at run time.) This value is ignored if the specified access type and access name dictate the type of prefetch required.
Table 134. PLAN_TABLE columns that DB2 validates (continued)

PAGE_RANGE
  Must be Y, N or blank. Any other value invalidates the hints. See Was a scan limited to certain partitions? (PAGE_RANGE=Y) on page 948 for more information.
PARALLELISM_MODE
  This value is used only if it is possible to run the query in parallel; that is, the SET CURRENT DEGREE special register contains ANY, or the plan or package was bound with DEGREE(ANY). If parallelism is possible, this value must be I, C, X, or null. All of the restrictions involving parallelism still apply when using access path hints. If the specified mode cannot be performed, the hints are either invalidated or the mode is modified by the optimizer, possibly resulting in the query being run sequentially. If the value is null, then the optimizer determines the mode. See Chapter 35, Parallel operations and query performance, on page 991 for more information.
ACCESS_DEGREE or JOIN_DEGREE
  If PARALLELISM_MODE is specified, use this field to specify the degree of parallelism. If you specify a degree of parallelism, this must be a number greater than zero, and DB2 might adjust the parallel degree from what you set here. If you want DB2 to determine the degree, do not enter a value in this field. If you specify a value for ACCESS_DEGREE or JOIN_DEGREE, you must also specify a corresponding ACCESS_PGROUP_ID and JOIN_PGROUP_ID.
WHEN_OPTIMIZE
  Must be R, B, or blank. Any other value invalidates the hints. When a statement in a plan that is bound with REOPT(ALWAYS) qualifies for reoptimization at run time, and you have provided optimization hints for that statement, the value of WHEN_OPTIMIZE determines whether DB2 reoptimizes the statement at run time. If the value of WHEN_OPTIMIZE is blank or B, DB2 uses only the access path that is provided by the optimization hints at bind time. If the value of WHEN_OPTIMIZE is R, DB2 determines the access path at bind time using the optimization hints. At run time, DB2 searches the PLAN_TABLE for hints again, and if hints for the statement are still in the PLAN_TABLE and are still valid, DB2 optimizes the access path using those hints again.
PRIMARY_ACCESSTYPE
  Must be D or blank. Any other value invalidates the hints.
Suspension
Definition: An application process is suspended when it requests a lock that is already held by another application process and cannot be shared. The suspended process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued. Requests for lock promotion, and requests for a lock by an application process that already holds a lock on the same object, precede requests for locks by new applications. Within those groups, the request order is first in, first out.

Example: Using an application for inventory control, two users attempt to reduce the quantity on hand of the same item at the same time. The two lock requests are queued. The second request in the queue is suspended and waits until the first request releases its lock.

Effects: The suspended process resumes running when:
v All processes that hold the conflicting lock release it.
v The requesting process times out or deadlocks and the process resumes to deal with an error condition.
Timeout
Definition: An application process is said to time out when it is terminated because it has been suspended for longer than a preset interval. Example: An application process attempts to update a large table space that is being reorganized by the utility REORG TABLESPACE with SHRLEVEL NONE. It is likely that the utility job will not release control of the table space before the application process times out. Effects: DB2 terminates the process, issues two messages to the console, and returns SQLCODE -911 or -913 to the process (SQLSTATEs '40001' or '57033'). Reason code 00C9008E is returned in the SQLERRD(3) field of the SQLCA. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0196. COMMIT and ROLLBACK operations do not time out. The command STOP DATABASE, however, may time out and send messages to the console, but it will retry up to 15 times. For more information about setting the timeout interval, see Installation options for wait times on page 837.
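For example, a sketch of retrieving the reason code with GET DIAGNOSTICS after such a failure; the host variable name is hypothetical:

GET DIAGNOSTICS CONDITION 1 :HV_REASON = DB2_REASON_CODE;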
Deadlock
Definition: A deadlock occurs when two or more application processes each hold locks on resources that the others need and without which they cannot proceed. Example: Figure 83 on page 816 illustrates a deadlock between two transactions.
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses table M, and acquires an exclusive lock for page B, which contains record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A, which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock on page B of table M. The job is suspended, because job PROJNCHG is holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the lock on page A of table N. The job is suspended, because job EMPLJCHG is holding an exclusive lock on page B. The situation is a deadlock.

Figure 83. A deadlock example
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll back the current unit of work for one of the processes or request a process to terminate. That frees the locks and allows the remaining processes to continue. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason code 00C90088 is returned in the SQLERRD(3) field of the SQLCA. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code. (The codes that describe DB2's exact response depend on the operating environment; for details, see Part 5 of DB2 Application Programming and SQL Guide.)

It is possible for two processes to be running on distributed DB2 subsystems, each trying to access a resource at the other location. In that case, neither subsystem can detect that the two processes are in deadlock; the situation resolves only when one process times out.
Eliminate swapping: If a task is waiting or is swapped out and the unit of work has not been committed, then it still holds locks. When a system is heavily loaded, contention for processing, I/O, and storage can cause waiting. Consider reducing the number of threads or initiators, increasing the priority for the DB2 tasks, and providing more processing, I/O, and real memory.
improve index availability, especially for utility processing, partition-level operations (such as dropping or rotating partitions), and recovery of indexes. However, the use of data-partitioned secondary indexes does not always improve the performance of queries. For example, for a query with a predicate that references only the columns of a data-partitioned secondary index, DB2 must probe each partition of the index for values that satisfy the predicate if index access is chosen as the access path. Therefore, take into account data access patterns and maintenance practices when deciding to use a data-partitioned secondary index. Replace a nonpartitioned index with a partitioned index only if there are perceivable benefits such as improved data or index availability, easier data or index maintenance, or improved performance. For examples of how query performance can be improved with data-partitioned secondary indexes, see Writing efficient queries on tables with data-partitioned secondary indexes on page 792.

Fewer rows of data per page: By using the MAXROWS clause of CREATE or ALTER TABLESPACE, you can specify the maximum number of rows that can be on a page. For example, if you use MAXROWS 1, each row occupies a whole page, and you confine a page lock to a single row. Consider this option if you have a reason to avoid using row locking, such as in a data sharing environment where row locking overhead can be greater.
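A sketch of such a change; the database and table space names are hypothetical:

ALTER TABLESPACE DSN8D81A.DSN8S81E MAXROWS 1;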
Consider volatile tables to ensure index access: If multiple applications access the same table, consider defining the table as VOLATILE. DB2 uses index access whenever possible for volatile tables, even if index access does not appear to be the most efficient access method because of volatile statistics. Because each application generally accesses the rows in the table in the same order, lock contention can be reduced.
Consider using the UR CHECK FREQ field or the UR LOG WRITE CHECK field of installation panel DSNTIPN to help you identify those applications that are not committing frequently. UR CHECK FREQ, which identifies when too many checkpoints have occurred without a UR issuing a commit, is helpful in monitoring overall system activity. UR LOG WRITE CHECK enables you to detect applications that might write too many log records between commit points, potentially creating a lengthy recovery situation for critical tables.

Even though an application might conform to the commit frequency standards of the installation under normal operational conditions, variation can occur based on system workload fluctuations. For example, a low-priority application might issue a commit frequently on a system that is lightly loaded. However, under a heavy system load, the use of the CPU by the application may be pre-empted, and, as a result, the application may violate the rule set by the UR CHECK FREQ parameter. For this reason, add logic to your application to commit based on time elapsed since last commit, and not solely based on the amount of SQL processing performed. In addition, take frequent commit points in a long-running unit of work that is read-only to reduce lock contention and to provide opportunities for utilities, such as online REORG, to access the data.

Retry an application after deadlock or timeout: Include logic in a batch program so that it retries an operation after a deadlock or timeout. Such a method could help you recover from the situation without assistance from operations personnel. Field SQLERRD(3) in the SQLCA returns a reason code that indicates whether a deadlock or timeout occurred. Alternatively, you can use the GET DIAGNOSTICS statement to check the reason code.

Close cursors: If you define a cursor using the WITH HOLD option, the locks it needs can be held past a commit point. Use the CLOSE CURSOR statement as soon as possible in your program to cause those locks to be released and the resources they hold to be freed at the first commit point that follows the CLOSE CURSOR statement. Whether page or row locks are held for WITH HOLD cursors is controlled by the RELEASE LOCKS parameter on installation panel DSNTIP4. Closing cursors is particularly important in a distributed environment.

Free locators: If you have executed the HOLD LOCATOR statement, the LOB locator holds locks on LOBs past commit points. Use the FREE LOCATOR statement to release these locks.

Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 will acquire table and table space locks when the objects are first used and not when the plan is allocated, is the best choice for concurrency. Packages are always bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that need gross locks instead of intent locks or that run with other applications that may request gross locks instead of intent locks. Acquiring the locks at plan allocation also prevents any one transaction in the application from incurring the cost of acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you might want to bind all DBRMs directly to the plan. For information about intent and gross locks, see The mode of a lock on page 825.

Bind with ISOLATION(CS) and CURRENTDATA(NO) typically: ISOLATION(CS) lets DB2 release acquired row and page locks as soon as possible.
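For example, a sketch of a plan bind that combines these recommendations; the plan and DBRM member names are hypothetical:

BIND PLAN(MYPLAN) MEMBER(MYPROG) ACQUIRE(USE) RELEASE(COMMIT) ISOLATION(CS) CURRENTDATA(NO)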
CURRENTDATA(NO) lets DB2 avoid acquiring row and page locks as often as possible. After that, in order of decreasing preference for concurrency, use these bind options:
1. ISOLATION(CS) with CURRENTDATA(YES), when data returned to the application must not be changed before your next FETCH operation.
2. ISOLATION(RS), when data returned to the application must not be changed before your application commits or rolls back. However, you do not care if other application processes insert additional rows.
3. ISOLATION(RR), when data evaluated as the result of a query must not be changed before your application commits or rolls back. New rows cannot be inserted into the answer set.
For more information about the ISOLATION option, see The ISOLATION option on page 850.

For updatable static scrollable cursors, ISOLATION(CS) provides the additional advantage of letting DB2 use optimistic concurrency control to further reduce the amount of time that locks are held. With optimistic concurrency control, DB2 releases the row or page locks on the base table after it materializes the result table in a temporary global table. DB2 also releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or delete to ensure data integrity. For more information about optimistic concurrency control, see Advantages and disadvantages of the isolation values on page 852.

For updatable dynamic scrollable cursors and ISOLATION(CS), DB2 holds row or page locks on the base table (DB2 does not use a temporary global table). The most recently fetched row or page from the base table remains locked to maintain data integrity for a positioned update or delete.

Use ISOLATION(UR) cautiously: UR isolation acquires almost no locks on rows or pages. It is fast and causes little contention, but it reads uncommitted data. Do not use it unless you are sure that your applications and end users can accept the logical inconsistencies that can occur.

Use sequence objects to generate unique, sequential numbers: Using an identity column is one way to generate unique sequential numbers. However, as a column of a table, an identity column is associated with and tied to the table, and a table can have only one identity column. Your applications might need to use one sequence of unique numbers for many tables or several sequences for each table. As a user-defined object, sequences provide a way for applications to have DB2 generate unique numeric key values and to coordinate the keys across multiple rows and tables. The use of sequences can avoid the lock contention problems that can result when applications implement their own sequences, such as in a one-row table that contains a sequence number that each transaction must increment. With DB2 sequences, many users can access and increment the sequence concurrently without waiting. DB2 does not wait for a transaction that has incremented a sequence to commit before allowing another transaction to increment the sequence again.

Examine multi-row operations: In an application, multi-row inserts, positioned updates, and positioned deletes have the potential of expanding the unit of work. This can affect the concurrency of other users accessing the data. Minimize
contention by adjusting the size of the host-variable-array, committing between inserts and updates, and preventing lock escalation.

Use global transactions: The Resource Recovery Services attachment facility (RRSAF) relies on a z/OS component called Resource Recovery Services (RRS). RRS provides system-wide services for coordinating two-phase commit operations across z/OS products. For RRSAF applications and IMS transactions that run under RRS, you can group together a number of DB2 agents into a single global transaction. A global transaction allows multiple DB2 agents to participate in a single global transaction and thus share the same locks and access the same data. When two agents that are in a global transaction access the same DB2 object within a unit of work, those agents will not deadlock with each other. The following restrictions apply:
v There is no Parallel Sysplex support for global transactions.
v Because each of the branches of a global transaction are sharing locks, uncommitted updates issued by one branch of the transaction are visible to other branches of the transaction.
v Claim/drain processing is not supported across the branches of a global transaction, which means that attempts to issue CREATE, DROP, ALTER, GRANT, or REVOKE may deadlock or timeout if they are requested from different branches of the same global transaction.
v LOCK TABLE may deadlock or timeout across the branches of a global transaction.
For information on how to make an agent part of a global transaction for RRSAF applications, see Section 7 of DB2 Application Programming and SQL Guide.
As Figure 84 suggests, row locks and page locks occupy an equal place in the hierarchy of lock sizes.
[Figure 84 shows three lock hierarchies: in a segmented table space, a table space lock is above table locks, which are above row locks and page locks; in a simple table space, a table space lock is above row locks and page locks; and in a LOB table space, a LOB table space lock is above LOB locks.]
v In a simple table space, a single page can contain rows from every table. A lock on a page locks every row in the page, no matter what tables the data belongs to. Thus, a lock needed to access data from one table can make data from other tables temporarily unavailable. That effect can be partly undone by using row locks instead of page locks.
v In a segmented table space, rows from different tables are contained in different pages. Locking a page does not lock data from more than one table. Also, DB2 can acquire a table lock, which locks only the data from one specific table. Because a single row, of course, contains data from only one table, the effect of a row lock is the same as for a simple or partitioned table space: it locks one row of data from one table.
v In a LOB table space, pages are not locked. Because there is no concept of a row in a LOB table space, rows are not locked. Instead, LOBs are locked. See LOB locks on page 864 for more information.
[Figure 85 contrasts locking in simple and segmented table spaces: in a simple table space, a table space lock applies to every table in the table space, and a page lock applies to data from every table on the page; in a segmented table space, a table lock applies to only one table in the table space, and a page lock applies to data from only one table.]

Figure 85. Page locking for simple and segmented table spaces
For information about controlling the size of locks, see:
v LOCKSIZE clause of CREATE and ALTER TABLESPACE on page 842
v The LOCK TABLE statement on page 863
Effects
For maximum concurrency, locks on a small amount of data held for a short duration are better than locks on a large amount of data held for a long duration.
However, acquiring a lock requires processor time, and holding a lock requires storage; thus, acquiring and holding one table space lock is more economical than acquiring and holding many page locks. Consider that trade-off to meet your performance and concurrency objectives.

Duration of partition, table, and table space locks: Partition, table, and table space locks can be acquired when a plan is first allocated, or you can delay acquiring them until the resource they lock is first used. They can be released at the next commit point or be held until the program terminates. On the other hand, LOB table space locks are always acquired when needed and released at a commit or held until the program terminates. See LOB locks on page 864 for information about locking LOBs and LOB table spaces.

Duration of page and row locks: If a page or row is locked, DB2 acquires the lock only when it is needed. When the lock is released depends on many factors, but it is rarely held beyond the next commit point. For information about controlling the duration of locks, see the descriptions of the ACQUIRE and RELEASE, ISOLATION, and CURRENTDATA bind options under Bind options on page 847.
U (UPDATE)
U locks reduce the chance of deadlocks when the lock owner is reading a page or row to determine whether to change it, because the owner can start with the U lock and then promote the lock to an X lock to change the page or row.

X (EXCLUSIVE)
The lock owner can read or change the locked page or row. A concurrent process cannot acquire S, U, or X locks on the page or row. However, a concurrent process, such as those bound with the CURRENTDATA(NO) or ISO(UR) options or running with YES specified for the EVALUNC subsystem parameter, can read the data without acquiring a page or row lock.
IX (INTENT EXCLUSIVE)
S (SHARE)
U (UPDATE)
process can access the data if the process runs with UR isolation or if data in a partitioned table space is running with CS isolation and CURRENTDATA(NO). The lock owner does not need page or row locks.
Compatibility for table space locks is slightly more complex. Table 136 shows whether or not table space locks of any two modes are compatible.
Table 136. Compatibility of table and table space (or partition) lock modes

Lock mode   IS    IX    S     U     SIX   X
IS          Yes   Yes   Yes   Yes   Yes   No
IX          Yes   Yes   No    No    No    No
S           Yes   No    Yes   Yes   No    No
U           Yes   No    Yes   No    No    No
SIX         Yes   No    No    No    No    No
X           No    No    No    No    No    No
v User data in related tables. Operations subject to referential constraints can require locks on related tables. For example, if you delete from a parent table, DB2 might delete rows from the dependent table as well. In that case, DB2 locks data in the dependent table as well as in the parent table. Similarly, operations on rows that contain LOB values might require locks on the LOB table space and possibly on LOB values within that table space. See LOB locks on page 864 for more information. If your application uses triggers, any triggered SQL statements can cause additional locks to be acquired.
v DB2 internal objects. Most of these you are never aware of, but you might notice the following locks on internal objects:
  - Portions of the DB2 catalog. For more information, see Locks on the DB2 catalog.
  - The skeleton cursor table (SKCT) representing an application plan.
  - The skeleton package table (SKPT) representing a package. For more information on skeleton tables, see Locks on the skeleton tables (SKCT and SKPT) on page 829.
  - The database descriptor (DBD) representing a DB2 database. For more information, see Locks on the database descriptors (DBDs) on page 829.
v CREATE VIEW, SYNONYM, and ALIAS
v DROP VIEW, SYNONYM, and ALIAS
v COMMENT ON and LABEL ON
v GRANT and REVOKE of table privileges
v RENAME TABLE
v ALTER VIEW

Recommendation: Reduce the concurrent use of statements that update SYSDBASE for the same table space. When you alter a table or table space, quiesce other work on that object.

Contention independent of databases: The following limitations on concurrency are independent of the referenced database:
v CREATE and DROP statements for a table space or index that uses a storage group contend significantly with other such statements.
v CREATE, ALTER, and DROP DATABASE, and GRANT and REVOKE database privileges all contend with each other and with any other function that requires a database privilege.
v CREATE, ALTER, and DROP STOGROUP contend with any SQL statements that refer to a storage group and with extensions to table spaces and indexes that use a storage group.
v GRANT and REVOKE for plan, package, system, or use privileges contend with other GRANT and REVOKE statements for the same type of privilege and with data definition statements that require the same type of privilege.
Table 137. Contention for locks on a DBD in the EDM DBD cache (continued)
Process type   Process     Lock acquired   Conflicts with process type
4              Utilities   S               3

Notes:
1. Static DML statements can conflict with other processes because of locks on data.
2. If caching of dynamic SQL is turned on, no lock is taken on the DBD when a statement is prepared for insertion in the cache or for a statement in the cache.
Use the following sample steps to understand the table:
1. Find the portion of the table that describes DELETE operations using a cursor.
2. Find the row for the appropriate values of LOCKSIZE and ISOLATION. Table space DSN8810 is defined with LOCKSIZE ANY. If the value of ISOLATION was not specifically chosen, it is RR by default.
3. Find the subrow for the expected access method. The operation probably uses the index on employee number. Because the operation deletes a row, it must update the index. Hence, you can read the locks acquired in the subrow for Index, updated:
v An IX lock on the table space
v An IX lock on the table (but see the step that follows)
v An X lock on the page containing the row that is deleted
4. Check the notes to the entries you use, at the end of the table. For this sample operation, see:
v Note 2, on the column heading for Table. If the table is not segmented, there is no separate lock on the table.
v Note 3, on the column heading for Data Page or Row. Because LOCKSIZE for the table space is ANY, DB2 can choose whether to use page locks, table locks, or table space locks. Typically it chooses page locks.
Table 138. Modes of locks acquired for SQL statements

                                                          Lock mode
LOCKSIZE            ISOLATION       Access method(1)      Table space(9)   Table(2)   Data page or row(3)

Processing statement: SELECT with read-only or ambiguous cursor, or with no cursor. UR isolation is allowed and requires none of these locks.
TABLESPACE          CS, RS, or RR   Any                   S                n/a        n/a
TABLE(2)            CS, RS, or RR   Any                   IS               S          n/a
PAGE, ROW, or ANY   CS              Any                   IS(4,10)         IS(4)      S(5)
                    RS              Any                   IS(4,10)         IS(4)      S(5), U(11), or X(11)
                    RR              Any                   IS(4,10)         IS(4)      S(5), U(11), X(11), or n/a
Processing statement: INSERT ... VALUES(...) or INSERT ... fullselect(7)
TABLESPACE          CS, RS, or RR   Any                   X                n/a        n/a
TABLE(2)            CS, RS, or RR   Any                   IX               X          n/a
PAGE, ROW, or ANY   CS, RS, or RR   Any                   IX               IX         X
Processing statement: UPDATE or DELETE, without cursor. Data page and row locks apply only to selected data.
TABLESPACE          CS, RS, or RR   Any                    X               n/a        n/a
TABLE(2)            CS, RS, or RR   Any                    IX              X          n/a
PAGE, ROW, or ANY   CS              Index selection        IX              IX         For delete: X; for update: U→X
                    CS              Index/data selection   IX              IX         U→X
                    CS              Table space scan       IX              IX         U→X
                    RS              Index selection        IX              IX         For update: S or U(8)→X; for delete: [S→X] or X
                    RS              Index/data selection   IX              IX         S or U(8)→X
                    RS              Table space scan       IX              IX         S or U(8)→X
                    RR              Index selection        IX              IX         For update: [S or U(8)→X] or X; for delete: [S→X] or X
                    RR              Index/data selection   IX              IX         S or U(8)→X
                    RR              Table space scan       IX(2) or X      X or n/a   S or U(8)→X or n/a
Processing statement: SELECT with FOR UPDATE OF. Data page and row locks apply only to selected data.
TABLESPACE          CS, RS, or RR   Any                    U               n/a          n/a
TABLE(2)            CS, RS, or RR   Any                    IS or IX        U            n/a
PAGE, ROW, or ANY   CS              Any                    IX              IX           U
                    RS              Any                    IX              IX           U
                    RR              Any                    IX or X         IX(2) or X   U or n/a

Processing statement: UPDATE or DELETE with cursor.
TABLESPACE          CS, RS, or RR   Any                    X               n/a        n/a
TABLE(2)            CS, RS, or RR   Any                    IX              X          n/a
PAGE, ROW, or ANY   CS, RS, or RR   Any                    IX              IX         X
Notes for Table 138 on page 831:
1. All access methods are either scan-based or probe-based. Scan-based means the index or table space is scanned for successive entries or rows. Probe-based means the index is searched for an entry as opposed to a range of entries, which a scan does. ROWIDs provide data probes to look for a single data row directly. The type of lock used depends on the backup access method. Access methods may be index-only, data-only, or index-to-data.
   Index-only: The index alone identifies qualifying rows and the return data.
   Data-only: The data alone identifies qualifying rows and the return data, such as a table space scan or the use of ROWID for a probe.
   Index-to-data: The index is used or the index plus data are used to evaluate the predicate:
   v Index selection: index is used to evaluate predicate and data is used to return values.
   v Index/data selection: index and data are used to evaluate predicate and data is used to return values.
2. Used for segmented table spaces only.
3. These locks are taken on pages if LOCKSIZE is PAGE or on rows if LOCKSIZE is ROW. When the maximum number of locks per table space (LOCKMAX) is reached, locks escalate to a table lock for tables in a segmented table space, or to a table space lock for tables in a non-segmented table space. Using LOCKMAX 0 in CREATE or ALTER TABLESPACE disables lock escalation.
4. If the table or table space is started for read-only access, DB2 attempts to acquire an S lock. If an incompatible lock already exists, DB2 acquires the IS lock.
5. SELECT statements that do not use a cursor, or that use read-only or ambiguous cursors and are bound with CURRENTDATA(NO), might not require any lock if DB2 can determine that the data to be read is committed. This is known as lock avoidance.
6. Even if LOCKMAX is 0, the bind process can promote the lock size to TABLE or TABLESPACE. If that occurs, SQLCODE +806 is issued.
7. The locks listed are acquired on the object into which the insert is made. A subselect acquires additional locks on the objects it reads, as if for SELECT with read-only cursor or ambiguous cursor, or with no cursor.
8. An installation option determines whether the lock is S, U, or X. For a full description, see The option U LOCK FOR RR/RS on page 844. If you use the WITH clause to specify the isolation as RR or RS, you can use the USE AND KEEP UPDATE LOCKS option to obtain and hold a U lock instead of an S lock, or you can use the USE AND KEEP EXCLUSIVE LOCKS option to obtain and hold an X lock instead of an S lock.
9. Includes partition locks. Does not include LOB table space locks. See LOB locks on page 864 for information about locking LOB table spaces.
10. If the table space is partitioned, locks can be avoided on the partitions.
11. If you use the WITH clause to specify the isolation as RR or RS, you can use the USE AND KEEP UPDATE LOCKS option to obtain and hold a U lock instead of an S lock, or you can use the USE AND KEEP EXCLUSIVE LOCKS option to obtain and hold an X lock instead of an S lock.
12. When a unique index exists on the table that receives the INSERT and the value of a pseudo deleted key is the same as the value of the key to be inserted, an S lock is requested on the data that is referenced by the pseudo deleted key entry to determine whether the deletion is committed.
Lock promotion
Definition: Lock promotion is the action of exchanging one lock on a resource for a more restrictive lock on the same resource, held by the same application process. Example: An application reads data, which requires an IS lock on a table space. Based on further calculation, the application updates the same data, which requires an IX lock on the table space. The application is said to promote the table space lock from mode IS to mode IX. Effects: When promoting the lock, DB2 first waits until any incompatible locks held by other processes are released. When locks are promoted, they are promoted in the direction of increasing control over resources: from IS to IX, S, or X; from IX to SIX or X; from S to X; from U to X; and from SIX to X.
Lock escalation
Definition: Lock escalation is the act of releasing a large number of page, row or LOB locks, held by an application process on a single table or table space, to
acquire a table or table space lock, or a set of partition locks, of mode S or X instead. When it occurs, DB2 issues message DSNI031I, which identifies the table space for which lock escalation occurred, and some information to help you identify what plan or package was running when the escalation occurred.

Lock counts are always kept on a table or table space level. For an application process that is accessing LOBs, the LOB lock count on the LOB table space is maintained separately from the base table space, and lock escalation occurs separately from the base table space.

When escalation occurs for a partitioned table space, only partitions that are currently locked are escalated. Unlocked partitions remain unlocked. After lock escalation occurs, any unlocked partitions that are subsequently accessed are locked with a gross lock.

For an application process that is using Sysplex query parallelism, the lock count is maintained on a member basis, not globally across the group for the process. Thus, escalation on a table space or table by one member does not cause escalation on other members.

Example: Assume that a segmented table space is defined with LOCKSIZE ANY and LOCKMAX 2000. DB2 can use page locks for a process that accesses a table in the table space and can escalate those locks. If the process attempts to lock more than 2000 pages in the table at one time, DB2 promotes its intent locks on the table to mode S or X and then releases its page locks. If the process is using Sysplex query parallelism and a table space that it accesses has a LOCKMAX value of 2000, lock escalation occurs for a member only if more than 2000 locks are acquired for that member.

When it occurs: Lock escalation balances concurrency with performance by using page or row locks while a process accesses relatively few pages or rows, and then changing to table space, table, or partition locks when the process accesses many. When it occurs, lock escalation varies by table space, depending on the values of LOCKSIZE and LOCKMAX, as described in:
v LOCKSIZE clause of CREATE and ALTER TABLESPACE on page 842
v LOCKMAX clause of CREATE and ALTER TABLESPACE on page 843
Lock escalation is suspended during the execution of SQL statements for ALTER, CREATE, DROP, GRANT, and REVOKE. See Controlling LOB lock escalation on page 867 for information about lock escalation for LOBs.

Recommendations: The DB2 statistics and performance traces can tell you how often lock escalation has occurred and whether it has caused timeouts or deadlocks. As a rough estimate, if one quarter of your lock escalations cause timeouts or deadlocks, then escalation is not effective for you. You might alter the table space to increase LOCKMAX and thus decrease the number of escalations. Alternatively, if lock escalation is a problem, use LOCKMAX 0 to disable lock escalation.

Example: Assume that a table space is used by transactions that require high concurrency and that a batch job updates almost every page in the table space. For
high concurrency, you should probably create the table space with LOCKSIZE PAGE and make the batch job commit every few seconds. LOCKSIZE ANY is a possible choice, if you take other steps to avoid lock escalation. If you use LOCKSIZE ANY, specify a LOCKMAX value large enough so that locks held by transactions are not normally escalated. Also, LOCKS PER USER must be large enough so that transactions do not reach that limit.
If the batch job is:
v Concurrent with transactions, then it must use page or row locks and commit frequently: for example, every 100 updates. Review LOCKS PER USER to avoid exceeding the limit. The page or row locking uses significant processing time. Binding with ISOLATION(CS) may discourage lock escalation to an X table space lock for those applications that read a lot and update occasionally. However, this may not prevent lock escalation for those applications that are update intensive.
v Non-concurrent with transactions, then it need not use page or row locks. The application could explicitly lock the table in exclusive mode, described under The LOCK TABLE statement on page 863.
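The DDL behind the earlier escalation example (LOCKSIZE ANY with LOCKMAX 2000) might look like the following sketch; the database and table space names are hypothetical:

CREATE TABLESPACE DSN8S81X IN DSN8D81X
   SEGSIZE 4
   LOCKSIZE ANY
   LOCKMAX 2000;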
Table 139 (whose tabular layout is not fully reproduced here) lists the locks that each of the following processes acquires: a transaction with static SQL, a query with dynamic SQL, the BIND process, and the SQL CREATE TABLE, ALTER TABLE, ALTER TABLESPACE, DROP TABLESPACE, GRANT, and REVOKE statements.
Notes for Table 139:
1. In a lock trace, these locks usually appear as locks on the DBD.
2. The target table space is one of the following table spaces:
   v Accessed and locked by an application process
   v Processed by a utility
   v Designated in the data definition statement
3. The lock is held briefly to check EXECUTE authority.
4. If the required DBD is not already in the EDM DBD cache, locks are acquired on table space DBD01, which effectively locks the DBD.
5. For details, see Table 138 on page 831.
6. Except while checking EXECUTE authority, IS locks on catalog tables are held until a commit point.
7. The plan or package using the SKCT or SKPT is marked invalid if a referential constraint (such as a new primary key or foreign key) is added or changed, or the AUDIT attribute is added or changed for a table.
8. The plan or package using the SKCT or SKPT is marked invalid as a result of this operation.
9. These locks are not held when ALTER TABLESPACE is changing the following options: PRIQTY, SECQTY, PCTFREE, FREEPAGE, CLOSE, and ERASE.
The maximum amount of storage available for IRLM locks is limited to 90% of the total space given to the IRLM private address space during the startup procedure. The other 10% is reserved for IRLM system services, z/OS system services, and "must complete" processes, to prevent the IRLM address space from abending, which would bring down your DB2 system. When the storage limit is reached, lock requests are rejected with an out-of-storage reason code.
You can use the F irlmproc,STATUS,STOR command to monitor the amount of storage that is available for locks and the MODIFY irlmproc,SET command to dynamically change the maximum amount of IRLM private storage to use for locks.
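For example, if your IRLM started task is named IRLMPROC (substitute your own procedure name), you can display the storage that is allocated and available for locks from the z/OS console:

F IRLMPROC,STATUS,STOR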
The timeout period: From the value of RESOURCE TIMEOUT and DEADLOCK TIME, DB2 calculates a timeout period. Assume that DEADLOCK TIME is 5 and RESOURCE TIMEOUT is 18.
1. Divide RESOURCE TIMEOUT by DEADLOCK TIME (18/5 = 3.6). IRLM limits the result of this division to 255.
2. Round the result to the next largest integer (round up 3.6 to 4).
3. Multiply the DEADLOCK TIME by that integer (4 * 5 = 20).
The result, the timeout period (20 seconds), is always at least as large as the value of RESOURCE TIMEOUT (18 seconds), except when the RESOURCE TIMEOUT divided by DEADLOCK TIME exceeds 255.
The timeout multiplier: Requests from different types of processes wait for different multiples of the timeout period. In a data sharing environment, you can add another multiplier to those processes to wait for retained locks. In some cases, you can modify the multiplier value. Table 140 indicates the multiplier by type of process, and whether you can change it.
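The calculation above can be summarized in one formula (a restatement of steps 1 through 3, where the quotient is capped at 255):

timeout period = DEADLOCK TIME * ceiling(RESOURCE TIMEOUT / DEADLOCK TIME)
               = 5 * ceiling(18 / 5) = 5 * 4 = 20 seconds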
Table 140. Timeout multiplier by type
Type                                                     Multiplier   Modifiable?
IMS MPP, IMS Fast Path Message Processing, CICS,
QMF, CAF, TSO batch and online, RRSAF,
global transactions                                      1            No
IMS BMPs                                                 4            Yes
IMS DL/I batch                                           6            Yes
IMS Fast Path Non-message processing                     6            No
BIND subcommand processing                               3            No
STOP DATABASE command processing                         10           No
Utilities                                                6            Yes
Retained locks for all types                             0            Yes
See UTILITY TIMEOUT on installation panel DSNTIPI on page 840 for information about modifying the utility timeout multiplier.
Changing the multiplier for IMS BMP and DL/I batch: You can modify the multipliers for IMS BMP and DL/I batch by modifying the following subsystem parameters on installation panel DSNTIPI:
IMS BMP TIMEOUT
   The timeout multiplier for IMS BMP connections. A value from 1 to 254 is acceptable. The default is 4.
DL/I BATCH TIMEOUT
   The timeout multiplier for IMS DL/I batch connections. A value from 1 to 254 is acceptable. The default is 6.
Additional multiplier for retained locks: For data sharing, you can specify an additional timeout multiplier to be applied to the connection's normal timeout multiplier. This multiplier is used when the connection is waiting for a retained lock, which is a lock held by a failed member of a data sharing group. A value of zero means do not wait for retained locks. See DB2 Data Sharing: Planning and Administration for more information about retained locks.
The scanning schedule: Figure 86 illustrates the following example of scanning to detect a timeout:
v DEADLOCK TIME has the default value of 5 seconds.
v RESOURCE TIMEOUT was chosen to be 18 seconds. Therefore, the timeout period is 20 seconds.
v A bind operation starts 4 seconds before the next scan. The operation multiplier for a bind operation is 3.
Figure 86. A scanning schedule to detect a timeout (the time-line graphic, showing scans at 4, 9, 14, and so on through 69 seconds, is not reproduced here)
The scans proceed through the following steps:
1. A scan starts 4 seconds after the bind operation requests a lock. As determined by the DEADLOCK TIME, scans occur every 5 seconds. The first scan in the example detects that the operation is inactive.
2. IRLM allows at least one full interval of DEADLOCK TIME as a grace period for an inactive process. After that, its lock request is judged to be waiting. At 9 seconds, the second scan detects that the bind operation is waiting.
3. The bind operation continues to wait for a multiple of the timeout period. In the example, the multiplier is 3 and the timeout period is 20 seconds. The bind operation continues to wait for 60 seconds longer.
4. The scan that starts 69 seconds after the bind operation detects that the process has timed out.
Effects: An operation can remain inactive for longer than the value of RESOURCE TIMEOUT. If you are in a data sharing environment, the deadlock and timeout detection process is longer than that for non-data-sharing systems. See DB2 Data Sharing: Planning and Administration for more information about global detection processing and elongation of the timeout period.
Recommendation: Consider the length of inaction time when choosing your own values of DEADLOCK TIME and RESOURCE TIMEOUT.
The cancellation applies only to active threads. If your installation permits distributed threads to be inactive and hold no resources, those threads are allowed to remain idle indefinitely. Default: 0. That value disables the scan to time out idle threads. The threads can then remain idle indefinitely. Recommendation: If you have experienced distributed users leaving an application idle while it holds locks, pick an appropriate value other than 0 for this period. Because the scan occurs only at 3-minute intervals, your idle threads will generally remain idle for somewhat longer than the value you specify.
Maximum wait time: Because the maximum wait time for a drain lock is the same as the maximum wait time for releasing claims, you can calculate the total maximum wait time as follows: For utilities:
2 * (timeout period) * (UTILITY TIMEOUT) * (number of claim classes)
Example: Suppose that LOAD must drain 3 claim classes, that the timeout period is 20 seconds, and that the value of UTILITY TIMEOUT is 6. Use the following calculation to determine how long the LOAD utility might be suspended before being timed out:
Maximum wait time = 2 * 20 * 6 * 3 = 720 seconds
Wait times less than maximum: The maximum drain wait time is the longest possible time a drainer can wait for a drain, not the length of time it always waits. Example: Table 141 lists the steps LOAD takes to drain the table space and the maximum amount of wait time for each step. A timeout can occur at any step. At step 1, the utility can wait 120 seconds for the repeatable read drain lock. If that lock is not available by then, the utility times out after 120 seconds. It does not wait 720 seconds.
Table 141. Maximum drain wait times: LOAD utility
Step                                            Maximum Wait Time (seconds)
1. Get repeatable read drain lock               120
2. Wait for all RR claims to be released        120
3. Get cursor stability read drain lock         120
4. Wait for all CS claims to be released        120
5. Get write drain lock                         120
6. Wait for all write claims to be released     120
Total                                           720
Recommendation: The default should be adequate for 90 percent of the work load when using page locks. If you use row locks on very large tables, you might want a higher value. If you use LOBs, you might need a higher value. Review application processes that require higher values to see if they can use table space locks rather than page, row, or LOB locks. The accounting trace shows the maximum number of page, row, or LOB locks a process held while running. Remember that the value specified is for a single application. Each concurrent application can potentially hold up to the maximum number of locks specified. Do not specify zero or a very large number unless it is required to run your applications.
DB2 attempts to acquire an S lock on table spaces that are started with read-only access. If the LOCKSIZE is PAGE or ROW, and DB2 cannot get the S lock, it requests an IS lock. If a partition is started with read-only access, DB2 attempts to get an S lock on the partition that is started RO. For a complete description of how the LOCKSIZE clause affects lock attributes, see DB2 choices of lock types on page 830.
Default: LOCKSIZE ANY
Catalog record: Column LOCKRULE of table SYSIBM.SYSTABLESPACE.
Recommendation: If you do not use the default, base your choice upon the results of monitoring applications that use the table space. When considering changing the lock size for a DB2 catalog table space, be aware that, in addition to user queries, DB2 internal processes such as bind and authorization checking and utility processing can access the DB2 catalog.
Row locks or page locks? The question of whether to use row or page locks depends on your data and your applications. If you are experiencing contention on data pages of a table space now defined with LOCKSIZE PAGE, consider LOCKSIZE ROW. But consider also the trade-offs. The resource required to acquire, maintain, and release a row lock is about the same as that required for a page lock. If your data has 10 rows per page, a table space scan or an index scan can require nearly 10 times as much resource for row locks as for page locks. But locking only a row at a time, rather than a page, might reduce the chance of contention with some other process by 90%, especially if access is random. (Row locking is not recommended for sequential processing.)
Lock avoidance is very important when row locking is used. Therefore, use ISOLATION(CS) CURRENTDATA(NO) or ISOLATION(UR) whenever possible. In many cases, DB2 can avoid acquiring a lock when reading data that is known to be committed. Thus, if only 2 of 10 rows on a page contain uncommitted data, DB2 must lock the entire page when using page locks, but might ask for locks on only the 2 rows when using row locks. Then, the resource required for row locks would be only twice as much, not 10 times as much, as that required for page locks. On the other hand, if two applications update the same rows of a page, and not in the same sequence, then row locking might even increase contention. With page locks, the second application to access the page must wait for the first to finish and might time out. With row locks, the two applications can access the same page simultaneously, and might deadlock while trying to access the same set of rows. In short, no single answer fits all cases.
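If monitoring does point to page contention under random access, the change itself is a single statement; for example, assuming the hypothetical table space DSN8D81X.DSN8S81X:

ALTER TABLESPACE DSN8D81X.DSN8S81X
   LOCKSIZE ROW;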
specifies the number of LOB locks that the application process can hold before escalating. For an application that uses Sysplex query parallelism, a lock count is maintained on each member.
LOCKMAX SYSTEM
   Specifies that n is effectively equal to the system default set by the field LOCKS PER TABLE(SPACE) of installation panel DSNTIPJ.
LOCKMAX 0
   Disables lock escalation entirely.
Default: The default depends on the value of LOCKSIZE, as shown in Table 142.
Table 142. How the default for LOCKMAX is determined
LOCKSIZE    Default for LOCKMAX
ANY         SYSTEM
other       0
Catalog record: Column LOCKMAX of table SYSIBM.SYSTABLESPACE. Recommendations: If you do not use the default, base your choice upon the results of monitoring applications that use the table space. Aim to set the value of LOCKMAX high enough that, when lock escalation occurs, one application already holds so many locks that it significantly interferes with others. For example, if an application holds half a million locks on a table with a million rows, it probably already locks out most other applications. Yet lock escalation can prevent it from potentially acquiring another half million locks. If you alter a table space from LOCKSIZE PAGE or LOCKSIZE ANY to LOCKSIZE ROW, consider increasing LOCKMAX to allow for the increased number of locks that applications might require.
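For example, you might check the current values in the catalog and then raise LOCKMAX when switching to row locks; this is a sketch, and the names and the value 10000 are illustrative only:

SELECT DBNAME, NAME, LOCKRULE, LOCKMAX
   FROM SYSIBM.SYSTABLESPACE
   WHERE DBNAME = 'DSN8D81X' AND NAME = 'DSN8S81X';

ALTER TABLESPACE DSN8D81X.DSN8S81X
   LOCKMAX 10000;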
Table 143. Which mode of lock is held on rows or pages when you specify the SELECT using the WITH RS or WITH RR isolation clause
Option Value                       Lock Mode
USE AND KEEP EXCLUSIVE LOCKS       X
USE AND KEEP UPDATE LOCKS          U
USE AND KEEP SHARE LOCKS           S
v UPDATE and DELETE, without a cursor: Table 144 shows which mode of lock is held on rows or pages when you specify an update or a delete without a cursor.
Table 144. Which mode of lock is held on rows or pages when you specify an update or a delete without a cursor
Option Value       Lock Mode
NO (default)       S
YES                U or X
ISOLATION(RR) or ISOLATION(RS), DB2 acquires an X lock on all rows that fall within the range of the selection expression. Thus, a lock upgrade request is not needed for qualifying rows though the lock duration is changed from manual to commit. The lock duration change is not as costly as a lock upgrade.
Example: Suppose that an initial transaction produces a second transaction. The initial transaction passes information to the second transaction by inserting data into a table that the second transaction reads. In this case, NO should be used. Example: Suppose that you frequently modify data by deleting the data and inserting the new image of the data. In such cases that avoid UPDATE statements, the default should be used.
Bind options
The information under this heading, up to Isolation overriding with SQL statements on page 861, is General-use Programming Interface and Associated Guidance Information, as defined in Notices on page 1437.
These options determine when an application process acquires and releases its locks and to what extent it isolates its actions from possible effects of other processes acting concurrently. These options of bind operations are relevant to transaction locks:
v The ACQUIRE and RELEASE options
v The ISOLATION option on page 850
v The CURRENTDATA option on page 857
v The alternative to ACQUIRE(USE), ACQUIRE(ALLOCATE), gets a lock of mode IX on the table space as soon as the application starts, because that is needed if an update occurs. But most uses of the application do not update the table and so need only the less restrictive IS lock. ACQUIRE(USE) gets the IS lock when the table is first accessed, and DB2 promotes the lock to mode IX if that is needed later.
v Most uses of this application do not update and do not commit. For those uses, there is little difference between RELEASE(COMMIT) and RELEASE(DEALLOCATE). But administrators might update several phone numbers in one session with the application, and the application commits after each update. In that case, RELEASE(COMMIT) releases a lock that DB2 must acquire again immediately. RELEASE(DEALLOCATE) holds the lock until the application ends, avoiding the processing needed to release and acquire the lock several times.
Partition locks: Partition locks follow the same rules as table space locks, and all partitions are held for the same duration. Thus, if one package is using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all partitions use RELEASE(DEALLOCATE).
The RELEASE option and dynamic statement caching: Generally, the RELEASE option has no effect on dynamic SQL statements, with one exception. When you use the bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES), and your subsystem is installed with YES for field CACHE DYNAMIC SQL on installation panel DSNTIP4, DB2 retains prepared SELECT, INSERT, UPDATE, and DELETE statements in memory past commit points. For this reason, DB2 can honor the RELEASE(DEALLOCATE) option for these dynamic statements. The locks are held until deallocation, or until the commit after the prepared statement is freed from memory, in the following situations:
v The application issues a PREPARE statement with the same statement identifier.
v The statement is removed from memory because it has not been used.
v An object that the statement is dependent on is dropped or altered, or a privilege needed by the statement is revoked.
v RUNSTATS is run against an object that the statement is dependent on.
If a lock is to be held past commit and it is an S, SIX, or X lock on a table space or a table in a segmented table space, DB2 sometimes demotes that lock to an intent lock (IX or IS) at commit. DB2 demotes a gross lock if it was acquired for one of the following reasons:
v DB2 acquired the gross lock because of lock escalation.
v The application issued a LOCK TABLE.
v The application issued a mass delete (DELETE FROM ... without a WHERE clause).
For partitioned table spaces, lock demotion occurs for each partition for which there is a lock.
Defaults: The defaults differ for different types of bind operations, as shown in Table 145.
Table 145. Default ACQUIRE and RELEASE values for different bind options
Operation                 Default values
BIND PLAN                 ACQUIRE(USE) and RELEASE(COMMIT).
BIND PACKAGE              There is no option for ACQUIRE; ACQUIRE(USE) is always used. At the local server, the default for RELEASE is the value used by the plan that includes the package in its package list. At a remote server, the default is COMMIT.
REBIND PLAN or PACKAGE    The existing values for the plan or package that is being rebound.
Recommendation: Choose a combination of values for ACQUIRE and RELEASE based on the characteristics of the particular application. The RELEASE option and DDL operations for remote requesters: When you perform DDL operations on behalf of remote requesters and RELEASE(DEALLOCATE) is in effect, be aware of the following condition. When a package that is bound with RELEASE(DEALLOCATE) accesses data at a server, it might prevent other remote requesters from performing CREATE, ALTER, DROP, GRANT, or REVOKE operations at the server. To allow those operations to complete, you can use the command STOP DDF MODE(SUSPEND). The command suspends server threads and terminates their locks so that DDL operations from remote requesters can complete. When these operations complete, you can use the command START DDF to resume the suspended server threads. However, even after the command STOP DDF MODE(SUSPEND) completes successfully, database resources might be held if DB2 is performing any activity other than inbound DB2 processing. You might have to use the command CANCEL THREAD to terminate other processing and thereby free the database resources.
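The command sequence might look like the following sketch, issued from the console (the wait for the remote DDL operations to complete is left to the operator):

-STOP DDF MODE(SUSPEND)
   (remote requesters can now complete their CREATE, ALTER, DROP, GRANT, or REVOKE operations)
-START DDF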
Disadvantages: This combination reduces concurrency. It can lock resources in high demand for longer than needed. Also, the option ACQUIRE(ALLOCATE) turns off selective partition locking; if you are accessing a partitioned table space, all partitions are locked.
Restriction: This combination is not allowed for BIND PACKAGE.
Use this combination if processing efficiency is more important than concurrency. It is a good choice for batch jobs that would release table and table space locks only to reacquire them almost immediately. It might even improve concurrency, by allowing batch jobs to finish sooner. Generally, do not use this combination if your application contains many SQL statements that are often not executed.
ACQUIRE(USE) / RELEASE(DEALLOCATE): This combination results in the most efficient use of processing time in most cases.
v A table, partition, or table space used by the plan or package is locked only if it is needed while running.
v All tables or table spaces are unlocked only when the plan terminates.
v The least restrictive lock needed to execute each SQL statement is used, with the exception that if a more restrictive lock remains from a previous statement, that lock is used without change.
Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.
ACQUIRE(USE) / RELEASE(COMMIT): This combination is the default combination and provides the greatest concurrency, but it requires more processing time if the application commits frequently.
v A table or table space is locked only when needed. That locking is important if the process contains many SQL statements that are rarely used or statements that are intended to access data only in certain circumstances.
v Table, partition, or table space locks are released at the next commit point unless the cursor is defined WITH HOLD. See The effect of WITH HOLD for a cursor on page 861 for more information.
v The least restrictive lock needed to execute each SQL statement is used except when a more restrictive lock remains from a previous statement. In that case, that lock is used without change.
Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.
ACQUIRE(ALLOCATE) / RELEASE(COMMIT): This combination is not allowed; it results in an error message from BIND.
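For example, a batch-oriented plan that favors processing efficiency over concurrency might be bound as in the following sketch, which uses the TSO DSN command processor; the subsystem, plan, and member names are hypothetical:

DSN SYSTEM(DB2A)
BIND PLAN(BATCHPLN) MEMBER(BATCHPGM) -
   ACQUIRE(ALLOCATE) RELEASE(DEALLOCATE)
END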
condition of the application, the lock is held until the application locks the next row or page. For data that does not satisfy the search condition, the lock is immediately released.
The data returned to an application that uses ISOLATION(CS) is committed, but if the application process returns to the same page, another application might have since updated or deleted the data, or might have inserted additional qualifying rows. This is especially true if DB2 returns data from a result table in a work file. For example, if DB2 has to put an answer set in a result table (such as for a sort), DB2 releases the lock immediately after it puts the row or page in the result table in the work file. Using cursor stability, the base table can change while your application is processing the result of the sort output.
In some cases, DB2 can avoid taking the lock altogether, depending on the value of the CURRENTDATA bind option or the value of the EVALUATE UNCOMMITTED field on installation panel DSNTIP4.
v Lock avoidance on committed data: If DB2 can determine that the data it is reading has already been committed, it can avoid taking the lock altogether. For rows that do not satisfy the search condition, this lock avoidance is possible with CURRENTDATA(YES) or CURRENTDATA(NO). For a row that satisfies the search condition on a singleton SELECT, lock avoidance is possible with CURRENTDATA(YES) or CURRENTDATA(NO). For other rows that satisfy the search condition, lock avoidance is possible only when you use the option CURRENTDATA(NO). For more details, see The CURRENTDATA option on page 857.
v Lock avoidance on uncommitted data: For rows that do not satisfy the search condition, lock avoidance is possible when the value of EVALUATE UNCOMMITTED is YES. For details, see Option to avoid locks during predicate evaluation on page 846.
ISOLATION(UR)
   Uncommitted read: The application acquires no page or row locks and can run concurrently with most other operations.6 But the application is in danger of reading data that was changed by another operation but not yet committed. A UR application can acquire LOB locks, as described in LOB locks on page 864. For restrictions on isolation UR, see Restrictions on page 855 for more information.
ISOLATION(RS)
   Read stability: A row or page lock is held for rows or pages that are returned to an application at least until the next commit point. If a row or page is rejected during stage 2 processing, its lock is still held, even though it is not returned to the application. For multiple-row fetch, the lock is released if the stage 2 predicate fails.
   If the application process returns to the same page and reads the same row again, another application cannot have changed the rows, although additional qualifying rows might have been inserted by another application process. A similar situation can also occur if a row or page that is not returned to the application is updated by another application process. If the row now satisfies the search condition, it appears.
6. The exceptions are mass delete operations and utility jobs that drain all claim classes.
When determining whether a row satisfies the search condition, DB2 can avoid taking the lock altogether if the row contains uncommitted data. If the row does not satisfy the predicate, lock avoidance is possible when the value of the EVALUATE UNCOMMITTED field of installation panel DSNTIP4 is YES. For details, see Option to avoid locks during predicate evaluation on page 846.
ISOLATION(RR)
   Repeatable read: A row or page lock is held for all accessed rows, qualifying or not, at least until the next commit point. If the application process returns to the same page and reads the same row again, another application cannot have changed the rows nor have inserted any new qualifying rows. The repeatability of the read is guaranteed only until the application commits. Even if a cursor is held on a specific row or page, the result set can change after a commit.
Default: The default differs for different types of bind operations, as shown in Table 146.
Table 146. The default ISOLATION values for different bind operations
Operation                 Default value
BIND PLAN                 ISOLATION(RR)
BIND PACKAGE              The value used by the plan that includes the package in its package list
REBIND PLAN or PACKAGE    The existing value for the plan or package being rebound
For more detailed examples, see DB2 Application Programming and SQL Guide. Recommendation: Choose an ISOLATION value based on the characteristics of the particular application.
v For table spaces created with LOCKSIZE ROW, PAGE, or ANY, a change can occur even while executing a single SQL statement, if the statement reads the same row more than once. In the following example:
SELECT * FROM T1 WHERE COL1 = (SELECT MAX(COL1) FROM T1);
data read by the inner SELECT can be changed by another transaction before it is read by the outer SELECT. Therefore, the information returned by this query might be from a row that is no longer the one with the maximum value for COL1.
v In another case, if your process reads a row and returns later to update it, that row might no longer exist or might not exist in the state that it did when your application process originally read it. That is, another application might have deleted or updated the row. If your application is doing non-cursor operations on a row under the cursor, make sure the application can tolerate "not found" conditions. Similarly, assume another application updates a row after you read it. If your process returns later to update it based on the value you originally read, you are, in effect, erasing the update made by the other process. If you use ISOLATION(CS) with update, your process might need to lock out concurrent updates. One method is to declare a cursor with the FOR UPDATE clause.
Product-sensitive Programming Interface
For packages and plans that contain updatable static scrollable cursors, ISOLATION(CS) lets DB2 use optimistic concurrency control. DB2 can use optimistic concurrency control to shorten the amount of time that locks are held in the following situations:
v Between consecutive fetch operations
v Between fetch operations and subsequent positioned update or delete operations
DB2 cannot use optimistic concurrency control for dynamic scrollable cursors. With dynamic scrollable cursors, the most recently fetched row or page from the base table remains locked to maintain position for a positioned update or delete.
Figure 87 on page 854 and Figure 88 on page 854 show processing of positioned update and delete operations with static scrollable cursors without optimistic concurrency control and with optimistic concurrency control.
Figure 87. Positioned updates and deletes without optimistic concurrency control
Figure 88. Positioned updates and deletes with optimistic concurrency control
Optimistic concurrency control consists of the following steps:
1. When the application requests a fetch operation to position the cursor on a row, DB2 locks that row, executes the FETCH, and releases the lock.
2. When the application requests a positioned update or delete operation on the row, DB2 performs the following steps:
   a. Locks the row.
   b. Reevaluates the predicate to ensure that the row still qualifies for the result table.
   c. For columns that are in the result table, compares current values in the row to the values of the row when step 1 was executed. Performs the positioned update or delete operation only if the values match.
End of Product-sensitive Programming Interface
ISOLATION(UR)
   Allows the application to read while acquiring few locks, at the risk of reading uncommitted data. UR isolation applies only to read-only operations: SELECT, SELECT INTO, or FETCH from a read-only result table.
   Reading uncommitted data introduces an element of uncertainty. Example: An application tracks the movement of work from station to station along an assembly line. As items move from one station to another, the application subtracts from the count of items at the first station and
adds to the count of items at the second. Assume you want to query the count of items at all the stations, while the application is running concurrently. What can happen if your query reads data that the application has changed but has not committed? If the application subtracts an amount from one record before adding it to another, the query could miss the amount entirely. If the application adds first and then subtracts, the query could add the amount twice.
When an application uses ISO(UR) and runs concurrently with applications that update variable-length records such that the update creates a double-overflow record, the ISO(UR) application might miss rows that are being updated.
If those situations can occur and are unacceptable, do not use UR isolation.
Restrictions: You cannot use UR isolation for the following types of statements:
v INSERT, UPDATE, and DELETE
v Any cursor defined with a FOR UPDATE clause
If you bind with ISOLATION(UR) and the statement does not specify WITH RR or WITH RS, DB2 uses CS isolation for these types of statements.
When can you use uncommitted read (UR)? You can probably use UR isolation in cases like the following ones:
v When errors cannot occur.
   Example: A reference table, like a table of descriptions of parts by part number. It is rarely updated, and reading an uncommitted update is probably no more damaging than reading the table 5 seconds earlier. Go ahead and read it with ISOLATION(UR).
   Example: The employee table of Spiffy Computer, our hypothetical user. For security reasons, updates can be made to the table only by members of a single department. And that department is also the only one that can query the entire table. It is easy to restrict queries to times when no updates are being made and then run with UR isolation.
v When an error is acceptable.
   Example: Spiffy wants to do some statistical analysis on employee data. A typical question is, "What is the average salary by sex within education level?" Because reading an occasional uncommitted record cannot affect the averages much, UR isolation can be used.
v When the data already contains inconsistent information.
   Example: Spiffy gets sales leads from various sources. The data is often inconsistent or wrong, and end users of the data are accustomed to dealing with that. Inconsistent access to a table of data on sales leads does not add to the problem.
Do not use uncommitted read (UR) in the following cases:
v When the computations must balance
v When the answer must be accurate
v When you are not sure it can do no damage
ISOLATION(RS)
   Allows the application to read the same pages or rows more than once without allowing qualifying rows to be updated or deleted by another process. It offers possibly greater concurrency than repeatable read,
because although other applications cannot change rows that are returned to the original application, they can insert new rows or update rows that did not satisfy the original application's search condition. Only those rows or pages that satisfy the stage 1 predicate (and all rows or pages evaluated during stage 2 processing) are locked until the application commits. Figure 89 illustrates this. In the example, the rows held by locks L2 and L4 satisfy the predicate.
Figure 89. How an application using RS isolation acquires locks when no lock avoidance techniques are used. Locks L2 and L4 are held until the application commits. The other locks aren't held.
Applications using read stability can leave rows or pages locked for long periods, especially in a distributed environment. If you do use read stability, plan for frequent commit points.
An installation option determines the mode of lock chosen for a cursor defined with the FOR UPDATE OF clause and bound with read stability. For details, see The option U LOCK FOR RR/RS on page 844.
ISOLATION(RR)
   Allows the application to read the same pages or rows more than once without allowing any UPDATE, INSERT, or DELETE by another process. All accessed rows or pages are locked, even if they do not satisfy the predicate. Figure 90 shows that all locks are held until the application commits. In the following example, the rows held by locks L2 and L4 satisfy the predicate.
Figure 90. How an application using RR isolation acquires locks. All locks are held until the application commits.
Applications that use repeatable read can leave rows or pages locked for longer periods, especially in a distributed environment, and they can claim more logical partitions than similar applications using cursor stability.
Applications that use repeatable read and access a nonpartitioned index cannot run concurrently with utility operations that drain all claim classes of the nonpartitioned index, even if they are accessing different logical partitions. For example, an application bound with ISOLATION(RR) cannot update partition 1 while the LOAD utility loads data into partition 2. Concurrency is restricted because the utility needs to drain all the repeatable-read applications from the nonpartitioned index to protect the repeatability of the reads by the application.
Because so many locks can be taken, lock escalation might take place. Frequent commits release the locks and can help avoid lock escalation.
With repeatable read, lock promotion occurs for table space scan to prevent the insertion of rows that might qualify for the predicate. (If access is via index, DB2 locks the key range. If access is via table space scan, DB2 locks the table, partition, or table space.)
An installation option determines the mode of lock chosen for a cursor defined with the FOR UPDATE OF clause and bound with repeatable read. For details, see The option U LOCK FOR RR/RS on page 844.
Plans and packages that use UR isolation: Auditors and others might need to determine what plans or packages are bound with UR isolation. For queries that select that information from the catalog, see Ensuring that concurrent users access consistent data on page 295.
Restrictions on concurrent access: An application using UR isolation cannot run concurrently with a utility that drains all claim classes. Also, the application must acquire the following locks:
v A special mass delete lock acquired in S mode on the target table or table space. A mass delete is a DELETE statement without a WHERE clause; that operation must acquire the lock in X mode and thus cannot run concurrently.
v An IX lock on any table space used in the work file database. That lock prevents dropping the table space while the application is running.
v If LOB values are read, LOB locks and a lock on the LOB table space. If the LOB lock is not available because it is held by another application in an incompatible lock state, the UR reader skips the LOB and moves on to the next LOB that satisfies the query.
read-only when it is actually targeted by dynamic SQL for modification, you will receive an error. See Problems with ambiguous cursors on page 860 for more information about ambiguous cursors.
v For a request to a remote system, CURRENTDATA has an effect for ambiguous cursors using isolation levels RR, RS, or CS. For ambiguous cursors, it turns block fetching on or off. (Read-only cursors and UR isolation always use block fetch.) Turning on block fetch offers best performance, but it means the cursor is not current with the base table at the remote site.
Local access: Locally, CURRENTDATA(YES) means that the data upon which the cursor is positioned cannot change while the cursor is positioned on it. If the cursor is positioned on data in a local base table or index, then the data returned with the cursor is current with the contents of that table or index. If the cursor is positioned on data in a work file, the data returned with the cursor is current only with the contents of the work file; it is not necessarily current with the contents of the underlying table or index. Figure 91 shows locking with CURRENTDATA(YES).
Figure 91. How an application using CS isolation with CURRENTDATA(YES) acquires locks. This figure shows access to the base table. The L2 and L4 locks are released after DB2 moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current with the contents of the table or index, regardless of whether a work file is used. Therefore, for work file access or for parallelism on read-only queries, the CURRENTDATA option has no effect. If you are using parallelism but want to maintain currency with the data, you have the following options:
v Disable parallelism (use SET CURRENT DEGREE = '1' or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).
For local access, CURRENTDATA(NO) is similar to CURRENTDATA(YES) except for the case where a cursor is accessing a base table rather than a result table in a work file. In those cases, although CURRENTDATA(YES) can guarantee that the cursor and the base table are current, CURRENTDATA(NO) makes no such guarantee.
Remote access: For access to a remote table or index, CURRENTDATA(YES) turns off block fetching for ambiguous cursors. The data returned with the cursor
is current with the contents of the remote table or index for ambiguous cursors. See Ensuring block fetch on page 1011 for information about the effect of CURRENTDATA on block fetch. Lock avoidance: With CURRENTDATA(NO), you have much greater opportunity for avoiding locks. DB2 can test to see if a row or page has committed data on it. If it has, DB2 does not have to obtain a lock on the data at all. Unlocked data is returned to the application, and the data can be changed while the cursor is positioned on the row. (For SELECT statements in which no cursor is used, such as those that return a single row, a lock is not held on the row unless you specify WITH RS or WITH RR on the statement.) To take the best advantage of this method of avoiding locks, make sure all applications that are accessing data concurrently issue COMMITs frequently. Figure 92 shows how DB2 can avoid taking locks and Table 152 summarizes the factors that influence lock avoidance.
Figure 92. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This figure shows access to the base table. If DB2 must take a lock, then locks are released when DB2 moves to the next row or page, or when the application commits (the same as CURRENTDATA(YES)).

Table 152. Lock avoidance factors. Returned data means data that satisfies the predicate. Rejected data is that which does not satisfy the predicate.
Isolation   CURRENTDATA   Avoid locks on returned data?   Avoid locks on rejected data?
UR          N/A           N/A                             N/A
CS          YES           No                              Yes (1)
CS          NO            Yes                             Yes (1)
RS          N/A           No                              Yes (1, 2)
RR          N/A           No                              No
Notes:
1. Locks are avoided when the row is disqualified after stage 1 processing.
2. When using ISO(RS) and multi-row fetch, DB2 releases locks that were acquired on stage 1 qualified rows, but which subsequently failed to qualify for stage 2 predicates, at the next fetch of the cursor.
Problems with ambiguous cursors: As shown in Table 152 on page 859, ambiguous cursors can sometimes prevent DB2 from using lock avoidance techniques. Misuse of an ambiguous cursor can also cause your program to receive a -510 SQLCODE when the following conditions are true:
v The plan or package is bound with CURRENTDATA(NO).
v An OPEN CURSOR statement is performed before a dynamic DELETE WHERE CURRENT OF statement against that cursor is prepared.
v One of the following conditions is true for the open cursor:
   - Lock avoidance is successfully used on that statement.
   - Query parallelism is used.
   - The cursor is distributed, and block fetching is used.
In all cases, it is a good programming technique to eliminate the ambiguity by declaring the cursor with either the FOR FETCH ONLY or the FOR UPDATE clause.
Table 153 shows how conflicts between isolation levels are resolved. The first column is the existing isolation level, and the remaining columns show what happens when another isolation level is requested by a new application process.
Table 153. Resolving isolation conflicts
            UR     CS     RS     RR
UR          n/a    CS     RS     RR
CS          CS     n/a    RS     RR
RS          RS     RS     n/a    RR
RR          RR     RR     RR     n/a
SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS) INTO :MAX, :MIN, :AVG FROM DSN8810.EMP WITH UR;
finds the maximum, minimum, and average bonus in the sample employee table. The statement is executed with uncommitted read isolation, regardless of the value of ISOLATION with which the plan or package containing the statement is bound.
Rules for the WITH clause: The WITH clause:
v Can be used on these statements:
   - Select-statement
   - SELECT INTO
   - Searched delete
   - INSERT from fullselect
   - Searched update
v Cannot be used on subqueries.
v Can specify the isolation levels that specifically apply to its statement. (For example, because WITH UR applies only to read-only operations, you cannot use it on an INSERT statement.)
v Overrides the isolation level for the plan or package only for the statement in which it appears.
USE AND KEEP ... LOCKS options of the WITH clause: If you use the WITH RR or WITH RS clause, you can use the USE AND KEEP EXCLUSIVE LOCKS, USE AND KEEP UPDATE LOCKS, and USE AND KEEP SHARE LOCKS options in SELECT and SELECT INTO statements.
Example: To use these options, specify them as shown in the following example:
SELECT ... WITH RS USE AND KEEP UPDATE LOCKS;
By using one of these options, you tell DB2 to acquire and hold a specific mode of lock on all the qualified pages or rows. Table 154 shows which mode of lock is held on rows or pages when you specify the SELECT using the WITH RS or WITH RR isolation clause.
Table 154. Which mode of lock is held on rows or pages when you specify the SELECT using the WITH RS or WITH RR isolation clause
Option Value                       Lock Mode
USE AND KEEP EXCLUSIVE LOCKS       X
USE AND KEEP UPDATE LOCKS          U
USE AND KEEP SHARE LOCKS           S
With read stability (RS) isolation, a row or page that is rejected during stage 2 processing might still have a lock held on it, even though it is not returned to the application. With repeatable read (RR) isolation, DB2 acquires locks on all pages or rows that fall within the range of the selection expression. All locks are held until the application commits. Although this option can reduce concurrency, it can prevent some types of deadlocks and can better serialize access to data.
Executing the statement requests a lock immediately, unless a suitable lock exists already. The bind option RELEASE determines when locks acquired by LOCK TABLE or LOCK TABLE with the PART option are released. You can use LOCK TABLE on any table, including auxiliary tables of LOB table spaces. See The LOCK TABLE statement for LOBs on page 868 for information about locking auxiliary tables. LOCK TABLE has no effect on locks acquired at a remote server.
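For example, to lock an entire table in share mode before a long series of reads, an application might issue the following statement (EMPLOYEE_DATA is the same sample table that is used in the partition example that follows):

LOCK TABLE PERSADM1.EMPLOYEE_DATA IN SHARE MODE;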
If EMPLOYEE_DATA is a partitioned table space, you could choose to lock individual partitions as you update them. An example is:
LOCK TABLE PERSADM1.EMPLOYEE_DATA PART 1 IN EXCLUSIVE MODE;
When the statement is executed, DB2 locks partition 1 with an X lock. The lock has no effect on locks that already exist on other partitions in the table space.
Table 155. Modes of locks acquired by LOCK TABLE. LOCK TABLE on partitions behaves the same as on nonsegmented table spaces.
LOCK TABLE clause     Nonsegmented table space   Segmented table space (table)   Segmented table space (table space)
IN EXCLUSIVE MODE     X                          X                               IX
IN SHARE MODE         S or SIX                   S or SIX                        IS
Note: The SIX lock is acquired if the process already holds an IX lock. SHARE MODE has no effect if the process already has a lock of mode SIX, U, or X.
LOB locks
The locking activity for LOBs is described separately from transaction locks because the purpose of LOB locks is different than that of regular transaction locks. A lock that is taken on a LOB value in a LOB table space is called a LOB lock.
In this section, the following topics are described:
v Relationship between transaction locks and LOB locks
v Hierarchy of LOB locks on page 865
v LOB and LOB table space lock modes on page 865
v LOB lock and LOB table space lock duration on page 866
v Instances when LOB table space locks are not taken on page 867
v Control of the number of LOB locks on page 867
v The LOCK TABLE statement for LOBs on page 868
v LOCKSIZE clause for LOB table spaces on page 868
In summary, the main purpose of LOB locks is for managing the space used by LOBs and to ensure that LOB readers do not read partially updated LOBs. Applications need to free held locators so that the space can be reused. Table 156 shows the relationship between the action that is occurring on the LOB value and the associated LOB table space and LOB locks that are acquired.
Table 156. Locks that are acquired for operations on LOBs. This table does not account for gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE statement, or lock escalation.
v Read (including UR): LOB table space lock IS; LOB lock S. Comment: Prevents storage from being reused while the LOB is being read or while locators are referencing the LOB.
v Insert: LOB table space lock IX; LOB lock X. Comment: Prevents other processes from seeing a partial LOB.
v Delete: LOB table space lock IS; LOB lock S. Comment: To hold space in case the delete is rolled back. (The X is on the base table row or page.) Storage is not reusable until the delete is committed and no other readers of the LOB exist.
v Update: LOB table space lock IS->IX; LOB locks: two LOB locks, an S-lock for the delete and an X-lock for the insert. Comment: Operation is a delete followed by an insert.
v Update the LOB to null or zero-length: LOB table space lock IS; LOB lock S.
v Update a null or zero-length LOB to a value: LOB table space lock IX; LOB lock X.
ISOLATION(UR) or ISOLATION(CS): When an application is reading rows using uncommitted read or lock avoidance, no page or row locks are taken on the base table. Therefore, these readers must take an S LOB lock to ensure that they are not reading a partial LOB or a LOB value that is inconsistent with the base row.
more locks than a similar statement that does not involve LOB columns. To prevent system problems caused by too many locks, you can:
v Ensure that you have lock escalation enabled for the LOB table spaces that are involved in the INSERT. In other words, make sure that LOCKMAX is non-zero for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces, as shown in the sketch after this list. (Locking the auxiliary table that is contained in the LOB table space locks the LOB table space.)
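For example, assuming a hypothetical auxiliary table EMP_PHOTO_AUX that is contained in the LOB table space, the last approach might look like this:

LOCK TABLE EMP_PHOTO_AUX IN EXCLUSIVE MODE;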
Controlling the number of LOB locks that are acquired for a user
LOB locks are counted toward the total number of locks allowed per user. Control this number by the value you specify on the LOCKS PER USER field of installation panel DSNTIPJ. The number of LOB locks that are acquired during a unit of work is reported in IFCID 0020.
When the number of LOB locks reaches the maximum you specify in the LOCKMAX clause, the LOB locks escalate to a gross lock on the LOB table space, and the LOB locks are released. Information about LOB locks and lock escalation is reported in IFCID 0020.
v Logical partitions of nonpartitioned indexes
The effects of those takeovers are described in the following sections:
v Claims
v Drains
v Usage of drain locks on page 870
v Utility locks on the catalog and directory on page 870
v Compatibility of utilities on page 871
v Concurrency during REORG on page 872
v Utility operations with nonpartitioned indexes on page 873
Claims
Definition: A claim is a notification to DB2 that an object is being accessed.
Example: When an application first accesses an object, within a unit of work, it makes a claim on the object. It releases the claim at the next commit point.
Effects of a claim: Claims have the following effects:
v Unlike a transaction lock, a claim normally does not persist past the commit point. To access the object in the next unit of work, the application must make a new claim. However, there is an exception. If a cursor defined with the clause WITH HOLD is positioned on the claimed object, the claim is not released at a commit point. For more about cursors defined as WITH HOLD, see The effect of WITH HOLD for a cursor on page 861.
v A claim indicates to DB2 that there is activity on or interest in a particular page set or partition. Claims prevent drains from occurring until the claim is released.
v Agents get a claim on data before they get a claim on an index.
v For partitioned table spaces, agents get a claim at the table space level before they claim partitions or indexes.
Three classes of claims: Table 157 shows the three classes of claims and the actions that they allow.
Table 157. Three classes of claims and the actions that they allow
Claim class              Actions allowed
Write                    Reading, updating, inserting, and deleting
Repeatable read          Reading only, with repeatable read (RR) isolation
Cursor stability read    Reading only, with read stability (RS), cursor stability (CS), or uncommitted read (UR) isolation
Detecting long-running read claims: DB2 issues a warning message and generates a trace record for each time period that a task holds an uncommitted read claim. You can set the length of the period in minutes by using the LRDRTHLD subsystem parameter.
Drains
Definition: A drain is the action of taking over access to an object by preventing new claims and waiting for existing claims to be released. Example: A utility can drain a partition when applications are accessing it.
Effects of a drain: The drain quiesces the applications by allowing each one to reach a commit point, but preventing any of them, or any other applications, from making a new claim. When no more claims exist, the process that drains (the drainer) controls access to the drained object. The applications that were drained can still hold transaction locks on the drained object, but they cannot make new claims until the drainer has finished.
Claim classes drained: A drainer does not always need complete control. It could drain the following combinations of claim classes:
v Only the write claim class
v Only the repeatable read claim class
v All claim classes
Example: The CHECK INDEX utility needs to drain only writers from an index space and its associated table space. RECOVER, however, must drain all claim classes from its table space. The REORG utility can drain either writers (with DRAIN WRITERS) or all claim classes (with DRAIN ALL).
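For example, a REORG control statement that drains only the write claim class, so that readers can continue to access the data during most of the reorganization, might look like the following sketch (the table space name is hypothetical):

REORG TABLESPACE DSN8D81X.DSN8S81X
   SHRLEVEL CHANGE
   DRAIN WRITERS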
When the target is a user-defined object, the utility claims or drains it but also uses the directory and, perhaps, the catalog; for example, to check authorization. In those cases, the utility uses transaction locks on catalog and directory tables. It acquires those locks in the same way as an SQL transaction does. For information about the SQL statements that require locks on the catalog, see Locks on the DB2 catalog on page 828. The UTSERIAL lock: Access to the SYSUTILX table space in the directory is controlled by a unique lock called UTSERIAL. A utility must acquire the UTSERIAL lock to read or write in SYSUTILX, whether SYSUTILX is the target of the utility or is used only incidentally.
Compatibility of utilities
Definition: Two utilities are considered compatible if they do not need access to the same object at the same time in incompatible modes.
Compatibility rules: The concurrent operation of two utilities is not typically controlled by either drain locks or transaction locks, but merely by a set of compatibility rules. Before a utility starts, it is checked against all other utilities running on the same target object. The utility starts only if all the others are compatible. The check for compatibility obeys the following rules:
v The check is made for each target object, but only for target objects. Typical utilities access one or more table spaces or indexes, but if two utility jobs use none of the same target objects, the jobs are always compatible. An exception is a case in which one utility must update a catalog or directory table space that is not the direct target of the utility. For example, the LOAD utility on a user table space updates DSNDB06.SYSCOPY. Therefore, other utilities that have DSNDB06.SYSCOPY as a target might not be compatible.
v Individual data and index partitions are treated as distinct target objects. Utilities operating on different partitions in the same table or index space are compatible.
v When two utilities access the same target object, their most restrictive access modes determine whether they are compatible. For example, if utility job 1 reads a table space during one phase and writes during the next, it is considered a writer. It cannot start concurrently with utility 2, which allows only readers on the table space. (Without this restriction, utility 1 might start and run concurrently with utility 2 for one phase; but then it would fail in the second phase, because it could not become a writer concurrently with utility 2.)
For details on which utilities are compatible, refer to each utility's description in DB2 Utility Guide and Reference. Figure 93 on page 872 illustrates how SQL applications and DB2 utilities can operate concurrently on separate partitions of the same table space.
Time  Event
t1    An SQL application obtains a transaction lock on every partition in the table space. The duration of the locks extends until the table space is deallocated.
t2    The SQL application makes a write claim on data partition 1 and index partition 1.
t3    The LOAD jobs begin draining all claim classes on data partitions 1 and 2 and index partitions 1 and 2. LOAD on partition 2 operates concurrently with the SQL application on partition 1. LOAD on partition 1 waits.
t4    The SQL application commits, releasing its write claims on partition 1. LOAD on partition 1 can begin.
t6    LOAD on partition 2 completes.
t7    LOAD on partition 1 completes, releasing its drain locks. The SQL application (if it has not timed out) makes another write claim on data partition 1.
t10   The SQL application deallocates the table space and releases its transaction locks.
Figure 93. SQL and utility concurrency. Two LOAD jobs execute concurrently on two partitions of a table space
Use the statistics trace to monitor the system-wide use of locks and the accounting trace to monitor locks used by a particular application process; see Using the statistics and accounting traces to monitor locking on page 875. For an example of resolving a particular locking problem, see Scenario for analyzing concurrency on page 876.
Note: For partitioned table spaces, the lock mode applies only to those partitions that are locked. Lock modes for LOB table spaces are not reported with EXPLAIN.

For segmented table spaces with LOCKSIZE ANY, ROW, or PAGE:

Lock mode      Table space lock   Table lock    Page or row
from EXPLAIN   acquired is:       acquired is:  locks acquired?
IS             IS                 IS            Yes
S              IS                 S             No
IX             IX                 IX            Yes
X              IX                 X             No
Table 158. Which locks DB2 chooses (continued). N/A = Not applicable; Yes = Page or row locks are acquired; No = No page or row locks are acquired.

Lock mode    LOCKSIZE TABLE:                            LOCKSIZE TABLESPACE:
from         Table space  Table    Page or row          Table space  Table    Page or row
EXPLAIN      lock         lock     locks acquired?      lock         lock     locks acquired?
IS           n/a          n/a      n/a                  n/a          n/a      n/a
S            IS           S        No                   S            n/a      No
IX           n/a          n/a      n/a                  n/a          n/a      n/a
U            IX           U        No                   U            n/a      No
X            IX           X        No                   X            n/a      No
Figure 95 on page 876 shows a portion of the accounting trace, which gives the same information for a particular application (suspensions A, timeouts B, deadlocks C, lock escalations D). It also shows the maximum number of concurrent page locks held and acquired during the trace (E and F). Review applications that hold a large number of concurrent page locks to see whether this value can be lowered. This number
is the basis for the proper setting of LOCKS PER USER and, indirectly, LOCKS PER TABLE(SPACE).
LOCKING                       TOTAL      DRAIN/CLAIM       TOTAL
-------------------           -----      ------------      -----
TIMEOUTS             B            0      DRAIN REQST           0
DEADLOCKS            C            0      DRAIN FAILED          0
ESCAL.(SHAR)         D            0      CLAIM REQST           4
ESCAL.(EXCL)                      0      CLAIM FAILED          0
MAX PG/ROW LCK HELD  E            2
LOCK REQUEST         F            8
UNLOCK REQST                      2
QUERY REQST                       0
CHANGE REQST                      5
OTHER REQST                       0
LOCK SUSPENS.                     1
IRLM LATCH SUSPENS.               0
OTHER SUSPENS.                    0
TOTAL SUSPENS.       A            1
To determine the effect of lock suspensions on your applications, examine the class 3 LOCK/LATCH time in the Accounting Report (C in Figure 96 on page 878).
Scenario description
An application, which has recently been moved into production, is experiencing timeouts. Other applications have not been significantly affected in this example. To investigate the problem, determine a period when the transaction is likely to time out. When that period begins:
1. Start the GTF.
2. Start the DB2 accounting classes 1, 2, and 3 to GTF to allow for the production of OMEGAMON accounting reports. (The commands are sketched after this procedure.)
3. Stop GTF and the traces after about 15 minutes.
4. Produce and analyze the OMEGAMON accounting report - long.
5. Use the DB2 performance trace selectively for detailed problem analysis.
In some cases, the initial and detailed stages of tracing and analysis presented in this chapter can be consolidated into one. In other cases, the detailed analysis might not be required at all. To analyze the problem, generally start with the accounting report - long. (If you have enough information from program and system messages, you can skip this first step.)
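For steps 1 through 3, the trace is typically started and stopped with DB2 commands such as the following. This is a minimal sketch; the destination and any filtering options (for example, by PLAN or AUTHID) depend on your installation:

  -START TRACE(ACCTG) CLASS(1,2,3) DEST(GTF)

  (let the workload run for about 15 minutes)

  -STOP TRACE(ACCTG) DEST(GTF)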
Accounting report
Figure 96 on page 878 shows a portion of the accounting report - long.
[Figure 96, OMEGAMON accounting report - long, reproduced here only in summary. Its panels show, per plan execution: average elapsed times by class (A, class 1 APPL 0.072929 seconds; B, class 2 DB2 0.029443 seconds), CPU and suspend times (total class 2 suspension time 0.010444 seconds); CLASS 3 SUSPENSIONS detail (C, LOCK/LATCH(DB2+IRLM) 0.000011 seconds over 0.04 events; SYNCHRON. I/O 0.010170 seconds over 9.16 events; TOTAL CLASS 3 0.010444 seconds over 9.28 events); a HIGHLIGHTS panel (D, upper right) with #OCCURRENCES 193 and #COMMITS 193; SQL DML, DCL, and DDL counts; LOCKING totals (no timeouts or deadlocks; MAX PG/ROW LOCKS HELD 43.34 average, 47 total; TOTAL SUSPENSIONS 0.03 average, 5 total); and global contention panels showing no L-lock or P-lock contention.]
The accounting report - long shows the average elapsed times and the average number of suspensions per plan execution. In Figure 96:
v The class 1 average elapsed time A (AET) is 0.072929 seconds. The class 2 times show that 0.029443 seconds B of that are spent in DB2; the rest is spent in the application.
v Of the class 2 AET, suspensions account for 0.010444 seconds, almost all of it synchronous I/O; lock and latch suspensions (LOCK/LATCH C) account for only 0.000011 seconds.
v The HIGHLIGHTS section D of the report (upper right) shows #OCCURRENCES as 193; that is the number of accounting (IFCID 3) records.
The lock suspension report shows:
v Which plans are suspended, by plan name within primary authorization ID. For statements bound to a package, see the information about the plan that executes the package.
v What IRLM requests and which lock types are causing suspensions.
v Whether suspensions are normally resumed or end in timeouts or deadlocks.
v What the average elapsed time (AET) per suspension is.
The report also shows the reason for the suspensions, as described in Table 159.
Table 159. Reasons for suspensions
Reason   Includes
LOCAL    Contention for a local resource
LATCH    Contention for latches within IRLM (with brief suspension)
GLOB.    Contention for a global resource
IRLMQ    An IRLM queued request
S.NFY    Intersystem message sending
OTHER    Page latch or drain suspensions, suspensions because of incompatible retained locks in data sharing, or a value for service use
The preceding list shows only the first reason for a suspension. When the original reason is resolved, the request could remain suspended for a second reason. Each suspension results in either a normal resume, a timeout, or a deadlock. The report shows that the suspension causing the delay involves access to partition 1 of table space PARADABA.TAB1TS by plan PARALLEL. Two LOCAL suspensions time out after an average of 5 minutes, 3.278 seconds (303.278 seconds).
Lockout report
Figure 98 on page 880 shows the OMEGAMON lockout report. This report shows that plan PARALLEL contends with the plan DSNESPRR. It also shows that contention is occurring on partition 1 of table space PARADABA.TAB1TS.
--- L O C K   R E S O U R C E ---               --------------- A G E N T S --------------
TYPE       NAME           TIMEOUTS  DEADLOCKS   MEMBER    PLANNAME   CONNECT   CORRID
---------  -------------  --------  ---------   --------  ---------  --------  ------------
PARTITION  DB  =PARADABA         2          0   N/P       DSNESPRR   TSO       EOA
           OB  =TAB1TS
           PART=       1
** LOCKOUTS FOR PARALLEL **
Lockout trace
Figure 99 shows the OMEGAMON lockout trace. For each contender, this report shows the database object, lock state (mode), and duration for each contention for a transaction lock.
[Figure 99, OMEGAMON lockout trace, in summary: two lock events for plan PARALLEL (connection BATCH, primary authorization IDs FPB and KARL) are shown. Each is an unconditional LOCK request with DURATION=COMMIT and a ZPARM timeout interval of 300 seconds; the requested states are S and IS. In both events the holder is plan DSNESPRR (CORRID EOA, connection TSO), holding the lock in state X with DURATION=COMMIT. KARL's request ends in a TIMEOUT at 15:30:32 on PARTITION DB=PARADABA OB=TAB1TS PART=1.]
At this point in the investigation, the following information is known:
v The applications that contend for resources
v The page sets for which there is contention
v The impact, frequency, and type of the contentions
The application or data design must be reviewed to reduce the contention.
Corrective decisions
The preceding discussion is a general approach when lock suspensions are unacceptably long or timeouts occur. In such cases, the DB2 performance trace for locking and the OMEGAMON reports can be used to isolate the resource causing the suspensions. The lockout report identifies the resources involved. The lockout trace tells what contending process (agent) caused the timeout. In Figure 97 on page 879, the number of suspensions is low (only 2) and both have ended in a timeout. Rather than use the DB2 performance trace for locking, use the
preferred option, DB2 statistics class 3 and DB2 performance trace class 1. Then produce the OMEGAMON locking timeout report to obtain the information necessary to reduce overheads. For specific information about OMEGAMON reports and their usage, see OMEGAMON Report Reference and Using IBM Tivoli OMEGAMON XE on z/OS.
Events take place in the following sequence:
1. LOC2A obtains a U lock on page 2 in table DEPT, to open its cursor for update.
2. LOC2B obtains a U lock on page 8 in table PROJ, to open its cursor for update.
3. LOC2A attempts to access page 8 to open its cursor, but cannot proceed because of the lock held by LOC2B.
4. LOC2B attempts to access page 2 to open its cursor, but cannot proceed because of the lock held by LOC2A.
DB2 selects one of the transactions and rolls it back, releasing its locks. That allows the other transaction to proceed to completion and release its locks also. Figure 100 shows the OMEGAMON Locking Trace - Deadlock report that is produced for this situation. The report shows that the only transactions involved came from plans LOC2A and LOC2B. Both transactions came in from BATCH.
[Figure 100, OMEGAMON Locking Trace - Deadlock, in summary: a DEADLOCK event at 20:32:30.68 with COUNTER = 2 and WAITERS = 2. The agent identification at A is plan LOC2A (primary authorization ID SYSADM, CORRID RUNLOC2A, connection BATCH). On resource DATAPAGE DB=DSN8D42A OB=DEPT PAGE=X'000002', the blocker and holder is LOC2A, holding the lock in state U with DURATION=MANUAL; the waiter is plan LOC2B (PRIMAUTH KATHY, CORRID RUNLOC2B), requesting an unconditional lock in state U (WORTH = 18). On the second resource the roles reverse: LOC2B is the blocker and holder in state U, and the waiter, marked *VICTIM*, is LOC2A, requesting state U (WORTH = 17).]
The lock held by transaction 1 (LOC2A) is a data page lock on the DEPT table and is held in U state. (The value of MANUAL for duration means that, if the plan was bound with isolation level CS and the page was not updated, then DB2 is free to release the lock before the next commit point.) Transaction 2 (LOC2B) was requesting a lock on the same resource, also of mode U and hence incompatible. The specifications of the lock held by transaction 2 (LOC2B) are the same. Transaction 1 was requesting an incompatible lock on the same resource. Hence, the deadlock.
Finally, note that the entry in the trace, identified at A, is LOC2A. That is the selected thread (the victim) whose work is rolled back to let the other proceed.
Events take place in the following sequence:
1. LOC3A obtains a U lock on page 2 in DEPT, to open its cursor for update.
2. LOC3B obtains a U lock on page 8 in PROJ, to open its cursor for update.
3. LOC3C obtains a U lock on page 6 in ACT, to open its cursor for update.
4. LOC3A attempts to access page 8 in PROJ, but cannot proceed because of the lock held by LOC3B.
5. LOC3B attempts to access page 6 in ACT, but cannot proceed because of the lock held by LOC3C.
6. LOC3C attempts to access page 2 in DEPT, but cannot proceed because of the lock held by LOC3A.
DB2 rolls back LOC3C and releases its locks. That allows LOC3B to complete and release the lock on PROJ so that LOC3A can complete. LOC3C can then retry. Figure 101 on page 884 shows the OMEGAMON Locking Trace - Deadlock report produced for this situation.
[Figure 101, OMEGAMON Locking Trace - Deadlock, in summary: a DEADLOCK event at 15:10:39.33 for plan LOC3C (primary authorization ID SYSADM, CORRID RUNLOC3C, connection BATCH), with COUNTER = 3 and WAITERS = 3. On resource DATAPAGE DB=DSN8D42A OB=PROJ PAGE=X'000008', the blocker and holder is plan LOC3B (PRIMAUTH JULIE), holding state U with DURATION=MANUAL, and the waiter is plan LOC3A (PRIMAUTH BOB), requesting state U (WORTH = 18). The chain continues through the other resources: LOC3C, marked *VICTIM*, holds a U lock for which LOC3B waits, and LOC3A holds a U lock for which LOC3C, again marked *VICTIM*, waits (WORTH = 18).]
This query might be very expensive to run, particularly if the TRANS table is a very large table with millions of rows and many columns. Now suppose that you define a materialized query table named STRANS by using the following CREATE TABLE statement:
CREATE TABLE STRANS AS
  (SELECT YEAR AS SYEAR, MONTH AS SMONTH, DAY AS SDAY,
          SUM(AMOUNT) AS SSUM
   FROM TRANS
   GROUP BY YEAR, MONTH, DAY)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED;
After you populate STRANS with a REFRESH TABLE statement, the table contains one row for each day of each month and year in the TRANS table. Using the automatic query rewrite process, DB2 can rewrite the original query into a new query. The new query uses the materialized query table STRANS instead of the original base table TRANS:
SELECT SYEAR, SUM(SSUM)
  FROM STRANS
  WHERE SYEAR >= '1995' AND SYEAR <= '2000'
  GROUP BY SYEAR
  ORDER BY SYEAR
If you maintain data currency in the materialized query table STRANS, the rewritten query provides the same results as the original query. The rewritten query offers better response time and requires less CPU time.
2. Populate materialized query tables. Refresh materialized query tables periodically to maintain data currency with base tables. However, realize that refreshing materialized query tables can be an expensive process. (An example follows this list.)
3. Enable automatic query rewrite, and exploit its functions by submitting read-only dynamic queries.
4. Evaluate the effectiveness of the materialized query tables. Drop under-utilized tables, and create new tables as necessary.
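For the STRANS table shown earlier, step 2 is a single statement each time you want to bring the materialized query table current with TRANS:

  REFRESH TABLE STRANS;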
The fullselect, together with the DATA INITIALLY DEFERRED clause and the REFRESH DEFERRED clause, defines the table as a materialized query table. You can explicitly specify the column names of the materialized query table or allow DB2 to derive the column names from the fullselect. The column definitions of a materialized query table are the same as those for a declared global temporary table that is defined with the same fullselect. You must include the DATA INITIALLY DEFERRED and REFRESH DEFERRED clauses when you define a materialized query table.
v DATA INITIALLY DEFERRED means that DB2 does not populate the materialized query table when you create the table. You must explicitly populate the materialized query table. For system-maintained materialized query tables, populate the tables for the first time by using the REFRESH TABLE statement. For user-maintained materialized query tables, populate the table by using the LOAD utility, INSERT statement, or REFRESH TABLE statement.
v REFRESH DEFERRED means that DB2 does not immediately update the data in the materialized query table when its base tables are updated. You can use the REFRESH TABLE statement at any time to update materialized query tables and maintain data currency with underlying base tables.

The MAINTAINED BY SYSTEM clause, which is the default, specifies that the materialized query table is a system-maintained materialized query table. You cannot update a system-maintained materialized query table by using the LOAD utility or the INSERT, UPDATE, or DELETE statements. You can update a system-maintained materialized query table only by using the REFRESH TABLE statement. Use the MAINTAINED BY USER clause to specify that the table is a user-maintained materialized query table. You can update a user-maintained materialized query table by using the LOAD utility, the INSERT, UPDATE, and DELETE statements, as well as the REFRESH TABLE statement.

The ENABLE QUERY OPTIMIZATION clause, which is the default, specifies that DB2 can consider the materialized query table in automatic query rewrite. Alternatively, you can specify DISABLE QUERY OPTIMIZATION to indicate that DB2 cannot consider the materialized query table in automatic query rewrite. When you enable query optimization, DB2 is more restrictive of what you can select in the fullselect for a materialized query table.

Recommendation: When creating a user-maintained materialized query table, initially disable query optimization. Otherwise, DB2 might automatically rewrite queries to use the empty materialized query table. After you populate the user-maintained materialized query table, you can alter the table to enable query optimization. (A sketch of this sequence follows.)

The isolation level of the materialized query table is the isolation level at which the CREATE TABLE statement is executed. After you create a materialized query table, it looks and behaves like other tables in the database system, with a few exceptions. DB2 allows materialized query tables in database operations wherever it allows other tables, with a few restrictions. The restrictions are listed in the description of the CREATE TABLE statement in DB2 SQL Reference. As with any other table, you can create indexes on the materialized query table; however, the indexes that you create must not be unique. Instead, DB2 uses the materialized query table's definition to determine whether it can treat the index as a unique index for query optimization. For information about using the CREATE TABLE statement to create a materialized query table, see DB2 SQL Reference.
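The following sketch puts the recommendation into practice. The table and column names are hypothetical, and the exact form of the final ALTER TABLE clause is described with the ALTER TABLE statement in DB2 SQL Reference; it is shown here only to illustrate the sequence:

  CREATE TABLE SALESMQT (SYEAR, SSUM) AS
    (SELECT YEAR, SUM(AMOUNT)
     FROM TRANS
     GROUP BY YEAR)
    DATA INITIALLY DEFERRED
    REFRESH DEFERRED
    MAINTAINED BY USER
    DISABLE QUERY OPTIMIZATION;

  INSERT INTO SALESMQT
    SELECT YEAR, SUM(AMOUNT)
    FROM TRANS
    GROUP BY YEAR;

  ALTER TABLE SALESMQT
    ALTER MATERIALIZED QUERY SET ENABLE QUERY OPTIMIZATION;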
Existing tables that hold precomputed, aggregated data are often referred to as summary tables. To take advantage of automatic query rewrite for existing summary tables, you must use the ALTER TABLE statement to register them as materialized query tables.

Example: Assume that you have an existing summary table named TRANSCOUNT. TRANSCOUNT has four columns to track the number of transactions by account, location, and year. Assume that TRANSCOUNT was created with this CREATE TABLE statement:
CREATE TABLE TRANSCOUNT
  (ACCTID INTEGER NOT NULL,
   LOCID  INTEGER NOT NULL,
   YEAR   INTEGER NOT NULL,
   CNT    INTEGER NOT NULL);
The following SELECT statement then populated TRANSCOUNT with data that was derived from aggregating values in the TRANS table:
SELECT ACCTID, LOCID, YEAR, COUNT(*)
  FROM TRANS
  GROUP BY ACCTID, LOCID, YEAR;
You could use the following ALTER TABLE statement to register TRANSCOUNT as a materialized query table. The statement specifies the ADD MATERIALIZED QUERY clause:
ALTER TABLE TRANSCOUNT
  ADD MATERIALIZED QUERY
    (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
     FROM TRANS
     GROUP BY ACCTID, LOCID, YEAR)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY USER;
The fullselect must specify the same number of columns as the table you register as a materialized query table. The columns must have the same definitions and the same column names in the same ordinal positions. The DATA INITIALLY DEFERRED clause indicates that the table data is to remain the same when the ALTER statement completes. The MAINTAINED BY USER clause indicates that the table is user-maintained. You can continue to update the data in the table by using the LOAD utility or the INSERT, UPDATE, or DELETE statements. You can also use the REFRESH TABLE statement to update the data in the table. The table becomes immediately eligible for use in automatic query rewrite.

To ensure the accuracy of data that is used in automatic query rewrite, ensure that the summary table is current before registering it as a materialized query table. Alternatively, you can follow these steps (a sketch in SQL follows):
v Register the summary table as a materialized query table with automatic query rewrite disabled.
v Update the newly registered materialized query table to refresh the data.
v Use the ALTER TABLE statement on the materialized query table to enable automatic query rewrite.

The isolation level of the materialized query table is the isolation level at which the ALTER TABLE statement is executed. For more information about using the ALTER TABLE statement to register a base table as a materialized query table, see DB2 SQL Reference.
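In SQL, those steps might look like the following for TRANSCOUNT. As before, the exact form of the final ALTER TABLE clause that enables query optimization is described in DB2 SQL Reference:

  ALTER TABLE TRANSCOUNT
    ADD MATERIALIZED QUERY
      (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
       FROM TRANS
       GROUP BY ACCTID, LOCID, YEAR)
    DATA INITIALLY DEFERRED
    REFRESH DEFERRED
    MAINTAINED BY USER
    DISABLE QUERY OPTIMIZATION;

  REFRESH TABLE TRANSCOUNT;

  ALTER TABLE TRANSCOUNT
    ALTER MATERIALIZED QUERY SET ENABLE QUERY OPTIMIZATION;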
The column definitions and the data in the table do not change. However, DB2 can no longer use the table in automatic query rewrite, and you can no longer update the table with the REFRESH TABLE statement. One reason you might want to change a materialized query table into a base table is to perform table operations that are restricted for a materialized query table.

Example: You might want to rotate the partitions on your partitioned materialized query table. In order to rotate the partitions, you must change your materialized query table into a base table. While the table is a base table, you can rotate the partitions. After you rotate the partitions, you can change the table back to a materialized query table. (A sketch follows.)

In addition to using the ALTER TABLE statement, you can change a materialized query table by dropping the table and re-creating the materialized query table with a different definition.
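A sketch of the partition-rotation example, using a hypothetical partitioned materialized query table SALESMQT; the ROTATE PARTITION clause and the limit key shown are illustrative only (see the ALTER TABLE statement in DB2 SQL Reference for the exact syntax):

  ALTER TABLE SALESMQT
    DROP MATERIALIZED QUERY;

  ALTER TABLE SALESMQT
    ROTATE PARTITION FIRST TO LAST
    ENDING AT (2005) RESET;

  ALTER TABLE SALESMQT
    ADD MATERIALIZED QUERY
      (SELECT YEAR, SUM(AMOUNT)
       FROM TRANS
       GROUP BY YEAR)
    DATA INITIALLY DEFERRED
    REFRESH DEFERRED
    MAINTAINED BY USER;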
You can also use the REFRESH TABLE statement to refresh the data in any materialized query table at any time. The REFRESH TABLE statement performs the following actions:
v Deletes all the rows in the materialized query table
v Executes the fullselect in the materialized query table definition to recalculate the data from the tables that are specified in the fullselect, with the isolation level for the materialized query table
v Inserts the calculated result into the materialized query table
v Updates the DB2 catalog with a refresh timestamp and the cardinality of the materialized query table

Although the REFRESH TABLE statement involves both deleting and inserting data, DB2 completes these operations in a single commit scope. Therefore, if a failure occurs during execution of the REFRESH TABLE statement, DB2 rolls back all changes that the statement made. The REFRESH TABLE statement is an explainable statement. The explain output contains rows for INSERT with the fullselect in the materialized query table definition.

Recommendation: Create materialized query tables in segmented table spaces. Because the REFRESH TABLE statement triggers a mass delete, the statement performs better if you put the materialized query table in a segmented table space. (See Using INSERT, UPDATE, DELETE, and LOAD for materialized query tables for information about how to refresh user-maintained materialized query tables in partitioned table spaces more efficiently.)
Using INSERT, UPDATE, DELETE, and LOAD for materialized query tables
For a user-maintained materialized query table, you can alter the data by using the INSERT, UPDATE, and DELETE statements, and the LOAD utility. You cannot use the INSERT, UPDATE, or DELETE statements, or the LOAD utility, to change system-maintained materialized query tables.

Recommendation: Avoid the REFRESH TABLE statement. Because the REFRESH TABLE statement uses a fullselect to refresh a materialized query table, the statement can result in a long-running query. Therefore, using insert, update, delete, or load operations might be more efficient than using the REFRESH TABLE statement. For example, you might find it faster to generate the data for your materialized query table and execute the LOAD utility to populate the data.
Depending on the size and frequency of changes in base tables, you might use different strategies to refresh your materialized query tables. For example, for infrequent, minor changes to dimension tables, you could immediately propagate the changes to the materialized query tables by using triggers. For larger or more frequent changes, you might consider refreshing your user-maintained materialized query tables incrementally to improve performance.

Example: Assume that you need to add a large amount of data to a fact table. Then, you need to refresh your materialized query table to reflect the new data in the fact table. To do this, perform these steps (a sketch in SQL follows):
v Collect and stage the new data in a separate table.
v Evaluate the new data and apply it to the materialized query table as necessary.
v Merge the new data into the fact table.
For an example of such code, see member DSNTEJ3M in DSN810.SDSNSAMP, which is shipped with DB2.
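A sketch of that flow, using a hypothetical staging table STAGE_TRANS whose rows all belong to a period that is not yet reflected in the user-maintained materialized query table SALESMQT, and whose columns match TRANS:

  INSERT INTO SALESMQT
    SELECT YEAR, SUM(AMOUNT)
    FROM STAGE_TRANS
    GROUP BY YEAR;

  INSERT INTO TRANS
    SELECT * FROM STAGE_TRANS;

Because the staged rows cover only new periods, a plain INSERT keeps SALESMQT consistent; if staged periods could overlap rows already in the table, you would have to update the existing summary rows instead.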
v If only one of the source tables contains a security label column, the materialized query table inherits the values in the security label column from the source table. However, the inherited column is not a security label column.
v If more than one source table contains a security label column, DB2 returns an error code and the materialized query table is not created.
The value in special register CURRENT REFRESH AGE represents a refresh age. The refresh age of a materialized query table is the time since the REFRESH TABLE statement last refreshed the table. (When you run the REFRESH TABLE statement, you update the timestamp in the REFRESH_TIME column in catalog table SYSVIEWS.) The special register CURRENT REFRESH AGE specifies the maximum refresh age that a materialized query table can have. Specifying the maximum age ensures that automatic query rewrite does not use materialized query tables with old data. In Version 8, the CURRENT REFRESH AGE has only two values: 0 or ANY. A value of 0 means that DB2 will consider no materialized query tables in automatic query rewrite. A value of ANY means that DB2 will consider all materialized query tables in automatic query rewrite.

The refresh age of a user-maintained materialized query table might not truly represent the freshness of the data in the table. In addition to the REFRESH TABLE statement, user-maintained query tables can be updated with the INSERT, UPDATE, and DELETE statements and the LOAD utility. Therefore, you can use the CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special register to determine which type of materialized query tables, system-maintained or user-maintained, are considered in automatic query rewrite. The special register has four possible values that indicate which materialized query tables DB2 considers for automatic query rewrite:
SYSTEM  DB2 considers only system-maintained materialized query tables.
USER    DB2 considers only user-maintained materialized query tables.
ALL     DB2 considers both types of materialized query tables.
NONE    DB2 considers no materialized query tables.
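An application can set both registers for its own connection. A minimal sketch that makes all populated materialized query tables eligible for automatic query rewrite:

  SET CURRENT REFRESH AGE = ANY;
  SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = ALL;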
The CURRENT REFRESH AGE and CURRENT MAINT TYPES fields on installation panel DSNTIP4 determine the initial values of the registers. The default value for the CURRENT REFRESH AGE field is 0, and the default value for CURRENT MAINT TYPES is SYSTEM. If your DB2 subsystem is used exclusively for data warehousing, your applications might tolerate data that is not current. If so, set the parameters on the installation panel to enable automatic query rewrite for both system-maintained and user-maintained materialized query tables by default. Table 160 summarizes how to use the CURRENT REFRESH AGE and CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special registers together. The table shows which materialized query tables DB2 considers in automatic query rewrite.
Table 160. The relationship between CURRENT REFRESH AGE and CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special registers
v CURRENT REFRESH AGE = ANY: with SYSTEM, DB2 considers all system-maintained materialized query tables; with USER, all user-maintained materialized query tables; with ALL, all materialized query tables (both system-maintained and user-maintained); with NONE, none.
v CURRENT REFRESH AGE = 0: DB2 considers no materialized query tables, regardless of the value of CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION.
You must populate a system-maintained materialized query table before DB2 considers it in automatic query rewrite.
However, the materialized query table fullselect must not have any local predicates that reference this extra table. For an example of a lossless predicate, see Example 2 under Examples of automatic query rewrite. Referential constraints on the source tables are very important in determining whether automatic query rewrite uses a materialized query table.

Predicates are much more likely to match if you code the predicate in the user query so that it is the same as, or very similar to, the predicate in the materialized query table fullselect. Otherwise, the matching process might fail on some complex predicates. For example, the matching process between the simple equal predicates COL1 = COL2 and COL2 = COL1 succeeds. Furthermore, the matching process between simple equal predicates such as COL1 * (COL2 + COL3) = COL5 and COL5 = (COL3 + COL2) * COL1 succeeds. However, the matching process between equal predicates such as (COL1 + 3) * 10 = COL2 and COL1 * 10 + 30 = COL2 fails. The items in an IN-list predicate do not need to be in exactly the same order for predicate matching to succeed.
v DB2 compares GROUP BY clauses in the user query to GROUP BY clauses in the materialized query table fullselect. If the user query requests data at the same or higher grouping level as the data in the materialized query table fullselect, the materialized query table remains a candidate for query rewrite. DB2 uses functional dependency information and column equivalence in this analysis.
v DB2 compares the columns that are requested by the user query with the columns in the materialized query table. If DB2 can derive the result columns from one or more columns in the materialized query table, the materialized query table remains a candidate for query rewrite. DB2 uses functional dependency information and column equivalence in this analysis.
v DB2 examines predicates in the user query that are not identical to predicates in the materialized query table fullselect. Then, DB2 determines whether it can derive references to columns in the base table from columns in the materialized query table instead. If DB2 can derive the result columns from the materialized query table, the materialized query table remains a candidate for query rewrite.

If all of the preceding analyses succeed, DB2 rewrites the user query. DB2 replaces all or some of the references to base tables with references to the materialized query table. If DB2 finds several materialized query tables that it can use to rewrite the query, it might use multiple tables simultaneously. If DB2 cannot use the tables simultaneously, it uses heuristic rules to choose which one to use. After DB2 writes the new query, DB2 determines the cost and the access path of that query. DB2 uses the rewritten query if the estimated cost of the rewritten query is less than the estimated cost of the original query. The rewritten query might give only approximate results if the data in the materialized query table is not up to date.
[Figure 102 diagram, in summary: the fact tables TRANSITEM and TRANS form the hub, joined N:1 from TRANSITEM to TRANS. The product dimension comprises PGROUP (ID, LINEID, NAME) and PLINE (ID, NAME), with 1:N relationships; the location dimension is LOC (ID, CITY, STATE, COUNTRY); the account dimension and the time dimension attach through TRANS (ID, LOCID, YEAR, MONTH, DAY, ACCTID).]
Figure 102. Multi-fact star schema. In this simplified credit card application, the fact tables TRANSITEM and TRANS form the hub of the star schema. The schema also contains four dimensions: product, location, account, and time.
The data warehouse records transactions that are made with credit cards. Each transaction consists of a set of items that are purchased together. At the center of the data warehouse are two large fact tables. TRANS records the set of credit card purchase transactions. TRANSITEM records the information about the items that are purchased. Together, these two fact tables are the hub of the star schema. The star schema is a multi-fact star schema because it contains these two fact tables. The fact tables are continuously updated for each new credit card transaction. In addition to the two fact tables, the schema contains four dimensions that describe transactions: product, location, account, and time. v The product dimension consists of two normalized tables, PGROUP and PLINE, that represent the product group and product line. v The location dimension consists of a single, denormalized table, LOC, that contains city, state, and country. v The account dimension consists of two normalized tables, ACCT and CUST, that represent the account and the customer. v The time dimension consists of the TRANS table that contains day, month, and year.
Analysts of such a credit card application are often interested in the aggregation of the sales data. Their queries typically perform joins of one or more dimension tables with fact tables. The fact tables contain significantly more rows than the dimension tables, and complicated queries that involve large fact tables can be very costly. In many cases, you can use materialized query tables to summarize and store information from the fact tables. Using materialized query tables can help you avoid costly aggregations and joins against large fact tables.

Example 1: An analyst submits the following query to count the number of transactions that are made in the United States for each credit card. The analyst requests the results grouped by credit card account, state, and year:
UserQ1
------
SELECT T.ACCTID, L.STATE, T.YEAR, COUNT(*) AS CNT
  FROM TRANS T, LOC L
  WHERE T.LOCID = L.ID AND
        L.COUNTRY = 'USA'
  GROUP BY T.ACCTID, L.STATE, T.YEAR;
Assume that the following CREATE TABLE statement created a materialized query table named TRANSCNT:
CREATE TABLE TRANSCNT AS
  (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
   FROM TRANS
   GROUP BY ACCTID, LOCID, YEAR)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED;
If you enable automatic query rewrite, DB2 can rewrite UserQ1 as NewQ1. NewQ1 accesses the TRANSCNT materialized query table instead of the TRANS fact table.
NewQ1
-----
SELECT A.ACCTID, L.STATE, A.YEAR, SUM(A.CNT) AS CNT
  FROM TRANSCNT A, LOC L
  WHERE A.LOCID = L.ID AND
        L.COUNTRY = 'USA'
  GROUP BY A.ACCTID, L.STATE, A.YEAR;
DB2 can use query rewrite in this case for the following reasons:
v The TRANS table is common to both UserQ1 and TRANSCNT.
v DB2 can derive the columns of the query result from TRANSCNT.
v The GROUP BY in the query requests data that are grouped at a higher level than the level in the definition of TRANSCNT.
Because customers typically make several hundred transactions per year, with most of them in the same city, TRANSCNT is about one hundred times smaller than TRANS. Therefore, rewriting UserQ1 into a query that uses TRANSCNT instead of TRANS improves response time significantly.

Example 2: Assume that an analyst wants to find the number of televisions, with a price over 100 and a discount greater than 0.1, that were purchased by each credit card account. The analyst submits the following query:
UserQ2
------
SELECT T.ID, TI.QUANTITY * TI.PRICE * (1 - TI.DISCOUNT) AS AMT
  FROM TRANSITEM TI, TRANS T, PGROUP PG
  WHERE TI.TRANSID = T.ID AND
        TI.PGID = PG.ID AND
        TI.PRICE > 100 AND
        TI.DISCOUNT > 0.1 AND
        PG.NAME = 'TV';
If you define the following materialized query table TRANSIAB, DB2 can rewrite UserQ2 as NewQ2:
TRANSIAB
--------
CREATE TABLE TRANSIAB AS
  (SELECT TI.TRANSID, TI.PRICE, TI.DISCOUNT, TI.PGID,
          L.COUNTRY, TI.PRICE * TI.QUANTITY AS VALUE
   FROM TRANSITEM TI, TRANS T, LOC L
   WHERE TI.TRANSID = T.ID AND
         T.LOCID = L.ID AND
         TI.PRICE > 1 AND
         TI.DISCOUNT > 0.1)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED;

NewQ2
-----
SELECT A.TRANSID, A.VALUE * (1 - A.DISCOUNT) AS AMT
  FROM TRANSIAB A, PGROUP PG
  WHERE A.PGID = PG.ID AND
        A.PRICE > 100 AND
        PG.NAME = 'TV';
DB2 can rewrite UserQ2 as a new query that uses materialized query table TRANSIAB for the following reasons:
v Although the predicate T.LOCID = L.ID appears only in the materialized query table, it does not result in rows that DB2 might discard. The referential constraint between the TRANS.LOCID and LOC.ID columns makes the join between TRANS and LOC in the materialized query table definition lossless. The join is lossless only if the foreign key in the constraint is NOT NULL.
v The predicates TI.TRANSID = T.ID and TI.DISCOUNT > 0.1 appear in both the user query and the TRANSIAB fullselect.
v The fullselect predicate TI.PRICE > 1 in TRANSIAB subsumes the user query predicate TI.PRICE > 100 in UserQ2. Because the fullselect predicate is more inclusive than the user query predicate, DB2 can compute the user query predicate from TRANSIAB.
v The user query predicate PG.NAME = 'TV' refers to a table that is not in the TRANSIAB fullselect. However, DB2 can compute the predicate from the PGROUP table. A predicate like PG.NAME = 'TV' does not disqualify other predicates in a query from qualifying for automatic query rewrite. In this case PGROUP is a relatively small dimension table, so a predicate that refers to the table is not overly costly.
v DB2 can derive the query result from the materialized query table definition, even when the derivation is not readily apparent:
  - DB2 derives T.ID in the query from TI.TRANSID in the TRANSIAB fullselect. Although these two columns originate from different tables, they are equivalent because of the predicate TI.TRANSID = T.ID. DB2 recognizes such column equivalency through join predicates. Thus, DB2 derives T.ID from TI.TRANSID, and the query qualifies for automatic query rewrite.
  - DB2 derives AMT in the query UserQ2 from DISCOUNT and VALUE in the TRANSIAB fullselect.
Example 3: This example shows how DB2 matches GROUP BY items and aggregate functions between the user query and the materialized query table fullselect. Assume that an analyst submits the following query to find the average value of the transaction items for each year:
UserQ3
------
SELECT YEAR, AVG(QUANTITY * PRICE) AS AVGVAL
  FROM TRANSITEM TI, TRANS T
  WHERE TI.TRANSID = T.ID
  GROUP BY YEAR;
If you define the following materialized query table TRANSAVG, DB2 can rewrite UserQ3 as NewQ3:
TRANSAVG
--------
CREATE TABLE TRANSAVG AS
  (SELECT T.YEAR, T.MONTH,
          SUM(QUANTITY * PRICE) AS TOTVAL,
          COUNT(*) AS CNT
   FROM TRANSITEM TI, TRANS T
   WHERE TI.TRANSID = T.ID
   GROUP BY T.YEAR, T.MONTH)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED;

NewQ3
-----
SELECT YEAR,
       CASE WHEN SUM(CNT) = 0 THEN NULL
            ELSE SUM(TOTVAL) / SUM(CNT)
       END AS AVGVAL
  FROM TRANSAVG
  GROUP BY YEAR;
DB2 can rewrite UserQ3 as a new query that uses materialized query table TRANSAVG for the following reasons:
v DB2 considers YEAR in the user query and YEAR in the materialized query table fullselect to match exactly.
v DB2 can derive the AVG function in the user query from the SUM function and the COUNT function in the materialized query table fullselect.
v The GROUP BY in the query NewQ3 requests data at a higher level than the level in the definition of TRANSAVG.
v DB2 can compute the yearly average in the user query by using the monthly sums and counts of transaction items in TRANSAVG. DB2 derives the yearly averages from the CNT and TOTVAL columns of the materialized query table by using a case expression.
If DB2 rewrites the query to use a materialized query table, a portion of the plan table output might look like Table 161 on page 901.
Table 161. Plan table output for an example with a materialized query table

PLANNO  METHOD  TNAME     JOIN_TYPE  TABLE_TYPE
1       0       TRANSAVG             M
2       3                            ?
The value M in TABLE_TYPE indicates that DB2 used a materialized query table. TNAME shows that DB2 used the materialized query table named TRANSAVG. You can also obtain this information from a performance trace (IFCID 0022).
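To check whether a particular query is rewritten, you can run EXPLAIN and inspect the plan table. A minimal sketch; the QUERYNO value is arbitrary, and your plan table must include the TABLE_TYPE column:

  EXPLAIN PLAN SET QUERYNO = 100 FOR
    SELECT YEAR, AVG(QUANTITY * PRICE) AS AVGVAL
    FROM TRANSITEM TI, TRANS T
    WHERE TI.TRANSID = T.ID
    GROUP BY YEAR;

  SELECT PLANNO, METHOD, TNAME, TABLE_TYPE
    FROM PLAN_TABLE
    WHERE QUERYNO = 100
    ORDER BY QBLOCKNO, PLANNO;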
v Modeling your production system on page 927

Other considerations for access path selection: In addition to considering catalog statistics, database design, and the SQL statement, the optimizer also considers the central processor model, number of central processors, buffer pool size, and RID pool size. The number of processors is used only when determining degrees of parallelism. Access path selection uses buffer pool statistics for several calculations. Access path selection also considers the central processor model. These two factors can change your queries' access paths from one system to another, even if all the catalog statistics are identical. You should keep this in mind when migrating from a test system to a production system, or when modeling a new application. Mixed central processor models in a data sharing group can also affect access path selection. For more information on data sharing, see DB2 Data Sharing: Planning and Administration.
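The statistics in Table 162 are ordinary catalog columns that you can inspect with SQL before deciding whether to update them. A sketch that retrieves the distribution statistics that RUNSTATS collected for one table (the owner and table names are illustrative):

  SELECT NAME, TYPE, CARDF, COLVALUE, FREQUENCYF
    FROM SYSIBM.SYSCOLDIST
    WHERE TBOWNER = 'DSN8810'
      AND TBNAME = 'TRANS'
    ORDER BY NAME;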
Table 162. Catalog data used for access path selection or collected by RUNSTATS. Each entry shows, in parentheses, whether the column is set by RUNSTATS, whether the user can update it, and whether it is used for access paths; the description follows.

STATSTIME (Yes / Yes / No): If updated most recently by RUNSTATS, the date and time of that update; not updatable in SYSINDEXPART and SYSTABLEPART. Used for access path selection for SYSCOLDIST if duplicate column values exist for the same column (by user insertion).

SYSIBM.SYSCOLDIST
CARDF (Yes / Yes / Yes): The number of distinct values for the column group; -1 if TYPE is F.
COLGROUPCOLNO (Yes / Yes / Yes): The set of columns associated with the statistics. Contains an empty string if NUMCOLUMNS = 1.
COLVALUE (Yes / Yes / Yes): Frequently occurring value in the distribution.
FREQUENCYF (Yes / Yes / Yes): A number which, multiplied by 100, gives the percentage of rows that contain the value of COLVALUE. For example, 1 means 100% of the rows contain the value, and .15 indicates that 15% of the rows contain the value.
NUMCOLUMNS (Yes / Yes / Yes): The number of columns associated with the statistics. The default value is 1.
TYPE (Yes / Yes / Yes): The type of statistics gathered, either cardinality (C) or frequent value (F).
SYSIBM.SYSCOLDISTSTATS: contains statistics by partition
CARDF (Yes / Yes / No): The number of distinct values for the column group; -1 if TYPE is F.
COLGROUPCOLNO (Yes / Yes / No): The set of columns associated with the statistics.
COLVALUE (Yes / Yes / No): Frequently occurring value in the distribution.
FREQUENCYF (Yes / Yes / No): A number which, multiplied by 100, gives the percentage of rows that contain the value of COLVALUE. For example, 1 means 100% of the rows contain the value, and .15 indicates that 15% of the rows contain the value.
KEYCARDDATA (Yes / Yes / No): The internal representation of the estimate of the number of distinct values in the partition.
NUMCOLUMNS (Yes / Yes / No): The number of columns associated with the statistics. The default value is 1.
TYPE (Yes / Yes / No): The type of statistics gathered, either cardinality (C) or frequent value (F).
SYSIBM.SYSCOLSTATS: contains statistics by partition
COLCARD (Yes / Yes / No): The number of distinct values in the partition. Do not update this column manually without first updating COLCARDDATA to a value of length 0.
COLCARDDATA (Yes / Yes / No): The internal representation of the estimate of the number of distinct values in the partition. A value appears here only if RUNSTATS TABLESPACE is run on the partition. Otherwise, this column contains a string of length 0, indicating that the actual value is in COLCARD.
HIGHKEY (Yes / Yes / No): First 2000 bytes of the highest value of the column within the partition. Blank if LOB column.
HIGH2KEY (Yes / Yes / No): First 2000 bytes of the second highest value of the column within the partition. Blank if LOB column.
LOWKEY (Yes / Yes / No): First 2000 bytes of the lowest value of the column within the partition. Blank if LOB column.
LOW2KEY (Yes / Yes / No): First 2000 bytes of the second lowest value of the column within the partition. Blank if LOB column.

SYSIBM.SYSCOLUMNS
COLCARDF (Yes / Yes / Yes): Estimated number of distinct values in the column; -1 to trigger DB2's use of the default value (25), and -2 for the first column of an index of an auxiliary table.
HIGH2KEY (Yes / Yes / Yes): First 2000 bytes of the second highest value in this column. Blank for auxiliary index.
LOW2KEY (Yes / Yes / Yes): First 2000 bytes of the second lowest value in this column. Blank for auxiliary index.

SYSIBM.SYSINDEXES
AVGKEYLEN (Yes / Yes / No): Average key length.
CLUSTERED (Yes / Yes / No): Whether the table is actually clustered by the index. Blank for auxiliary index.
CLUSTERING (No / No / Yes): Whether the index was created using CLUSTER.
CLUSTERRATIOF (Yes / Yes / Yes): A number which, when multiplied by 100, gives the percentage of rows in clustering order. For example, 1 indicates that all rows are in clustering order, and .87825 indicates that 87.825% of the rows are in clustering order. For a partitioned index, it is the weighted average of all index partitions in terms of the number of rows in the partition. For an auxiliary index it is -2. If this column contains the default, 0, DB2 uses the value in CLUSTERRATIO, a percentage, for access path selection.
FIRSTKEYCARDF (Yes / Yes / Yes): Number of distinct values of the first key column, or an estimate if updated while collecting statistics on a single partition; -1 to trigger DB2's use of the default value (25).
FULLKEYCARDF (Yes / Yes / Yes): Number of distinct values of the full key; -1 to trigger DB2's use of the default value (25).
NLEAF (Yes / Yes / Yes): Number of active leaf pages in the index; -1 to trigger DB2's use of the default value (SYSTABLES.CARD/300).
NLEVELS (Yes / Yes / Yes): Number of levels in the index tree; -1 to trigger DB2's use of the default value (2).
SPACEF (Yes / Yes / No): Disk storage in KB.

SYSIBM.SYSINDEXPART
AVGKEYLEN (Yes / No / No): Average key length.
CARDF (Yes / No / No): Number of rows or LOBs referenced by the index or partition.
DSNUM (Yes / Yes / No): Number of data sets.
EXTENTS (Yes / Yes / No): Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FAROFFPOSF (Yes / No / No): Number of referenced rows far from the optimal position because of an insert into a full page.
LEAFDIST (Yes / No / No): 100 times the number of pages between successive leaf pages.
LEAFFAR (Yes / Yes / No): Number of leaf pages located physically far away from previous leaf pages for successive active leaf pages accessed in an index scan. See LEAFNEAR and LEAFFAR columns on page 925 for more information.
LEAFNEAR (Yes / Yes / No): Number of leaf pages located physically near previous leaf pages for successive active leaf pages. See LEAFNEAR and LEAFFAR columns on page 925 for more information.
LIMITKEY (No / No / Yes): The limit key of the partition in an internal format; 0 if the index is not partitioned.
NEAROFFPOSF (Yes / No / No): Number of referenced rows near but not at the optimal position because of an insert into a full page.
PQTY (Yes / No / No): The primary space allocation in 4 KB blocks for the data set.
PSEUDO_DEL_ENTRIES (Yes / No / No): Number of pseudo deleted keys.
SECQTYI (Yes / No / No): Secondary space allocation in units of 4 KB, stored in integer format instead of the small integer format supported by SQTY. If a storage group is not used, the value is 0.
SPACE (Yes / No / No): The number of KB of space currently allocated for all extents (contains the accumulated space used by all pieces if a page set contains multiple pieces).
SQTY (Yes / No / No): The secondary space allocation in 4 KB blocks for the data set.
SPACEF (Yes / Yes / No): Disk storage in KB.
SYSIBM.SYSINDEXSTATS: contains statistics by partition
CLUSTERRATIOF (Yes / Yes / No): A number which, when multiplied by 100, gives the percentage of rows in clustering order. For example, 1 indicates that all rows are in clustering order, and .87825 indicates that 87.825% of the rows are in clustering order.
FIRSTKEYCARDF (Yes / Yes / No): Number of distinct values of the first key column, or an estimate if updated while collecting statistics on a single partition.
FULLKEYCARDDATA (Yes / Yes / No): The internal representation of the number of distinct values of the full key.
FULLKEYCARDF (Yes / Yes / No): Number of distinct values of the full key.
KEYCOUNTF (Yes / Yes / No): Number of rows in the partition; -1 to trigger DB2's use of the value in KEYCOUNT.
NLEAF (Yes / Yes / No): Number of leaf pages in the index.
NLEVELS (Yes / Yes / No): Number of levels in the index tree.

SYSIBM.SYSLOBSTATS: contains LOB table space statistics
AVGSIZE (Yes / Yes / No): Average size of a LOB in bytes.
FREESPACE (Yes / Yes / No): The number of KB of available space in the LOB table space.
ORGRATIO (Yes / Yes / No): The percentage of organization in the LOB table space. A value of 100 indicates perfect organization of the LOB table space. A value of 1 indicates that the LOB table space is disorganized. A value of 0.00 indicates that the LOB table space is totally disorganized. An empty table space has an ORGRATIO value of 100.00.

SYSIBM.SYSROUTINES: contains statistics for table functions. See Updating catalog statistics on page 804 for more information about using these statistics.
CARDINALITY (No / Yes / Yes): The predicted cardinality of a table function; -1 to trigger DB2's use of the default value (10 000).
INITIAL_INSTS (No / Yes / Yes): Estimated number of instructions executed the first and last time the function is invoked; -1 to trigger DB2's use of the default value (40 000).
INITIAL_IOS (No / Yes / Yes): Estimated number of I/Os performed the first and last time the function is invoked; -1 to trigger DB2's use of the default value (0).
INSTS_PER_INVOC (No / Yes / Yes): Estimated number of instructions per invocation; -1 to trigger DB2's use of the default value (4 000).
IOS_PER_INVOC (No / Yes / Yes): Estimated number of I/Os per invocation; -1 to trigger DB2's use of the default value (0).
SYSIBM.SYSTABLEPART: contains statistics for space utilization

AVGROWLEN (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Average row length.
CARDF (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Total number of rows in the table space or partition. For LOB table spaces, the number of LOBs in the table space.
DSNUM (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Number of data sets.
EXTENTS (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FARINDREF (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Number of rows relocated far from their original page.
NEARINDREF (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Number of rows relocated near their original page.
PAGESAVE (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Percentage of pages, times 100, saved in the table space or partition as a result of using data compression.
PERCACTIVE (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Percentage of space occupied by active rows, containing actual data from active tables; -2 for LOB table spaces.
PERCDROP (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  For nonsegmented table spaces, the percentage of space occupied by rows of data from dropped tables; for segmented table spaces, 0.
PQTY (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  The primary space allocation in 4-KB blocks for the data set.
SECQTYI (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Secondary space allocation in units of 4 KB, stored in integer format instead of the small integer format supported by SQTY. If a storage group is not used, the value is 0.
SPACE (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  The number of KB of space currently allocated for all extents (contains the accumulated space used by all pieces if a page set contains multiple pieces).
SPACEF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Disk storage in KB.
SQTY (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  The secondary space allocation in 4-KB blocks for the data set.
SYSIBM.SYSTABLES:

AVGROWLEN (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Average row length of the table specified in the table space.
CARDF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Total number of rows in the table or total number of LOBs in an auxiliary table; -1 triggers DB2's use of the default value (10 000).
EDPROC (set by RUNSTATS: No; user can update: No; used for access paths: Yes)
  Nonblank value if an edit exit routine is used.
NPAGES (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Total number of pages on which rows of this table appear; -1 triggers DB2's use of the default value (CEILING(1 + CARD/20)).
NPAGESF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Number of pages used by the table.
PCTPAGES (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  For nonsegmented table spaces, the percentage of total pages of the table space that contain rows of the table; for segmented table spaces, the percentage of total pages in the set of segments assigned to the table that contain rows of the table.
PCTROWCOMP (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Percentage of rows compressed within the total number of active rows in the table.
SPACEF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Disk storage in KB.
SYSIBM.SYSTABLESPACE:

AVGROWLEN (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Average row length.
NACTIVEF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Number of active pages in the table space; the number of pages touched if a cursor is used to scan the entire file. 0 triggers DB2's use of the value in the NACTIVE column instead; if NACTIVE contains 0, DB2 uses the default value (CEILING(1 + CARD/20)).
SPACE (set by RUNSTATS: Yes; user can update: No; used for access paths: No)
  Disk storage in KB.
SPACEF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Disk storage in KB.
SYSIBM.SYSTABSTATS: contains statistics by partition

CARDF (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Total number of rows in the partition; -1 triggers DB2's use of the value in the CARD column. If CARD is -1, DB2 uses a default value (10 000).
NACTIVE (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Number of active pages in the partition.
NPAGES (set by RUNSTATS: Yes; user can update: Yes; used for access paths: Yes)
  Total number of pages on which rows of the partition appear; -1 triggers DB2's use of the default value (CEILING(1 + CARD/20)).
PCTPAGES (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Percentage of total active pages in the partition that contain rows of the table.
PCTROWCOMP (set by RUNSTATS: Yes; user can update: Yes; used for access paths: No)
  Percentage of rows compressed within the total number of active rows in the partition; -1 triggers DB2's use of the default value (0).
Note:
1. Statistics on LOB-related values are not used for access path selection. SYSCOLDISTSTATS and SYSINDEXSTATS are not used for parallelism access paths. SYSCOLSTATS information (CARD, HIGHKEY, LOWKEY, HIGH2KEY, and LOW2KEY) is used to determine the degree of parallelism.
When frequency statistics do not exist, DB2 assumes that the data is uniformly distributed and that all values in the column occur with the same frequency. This assumption can lead to an inaccurate estimate of the number of qualifying rows if the data is skewed, which can result in performance problems.

Example: Assume that a column (AGE_CATEGORY) contains five distinct values (COLCARDF), each of which occurs with the following frequency:
AGE_CATEGORY   FREQUENCY
------------   ---------
INFANT             5%
CHILD             15%
ADOLESCENT        25%
ADULT             40%
SENIOR            15%
Without this frequency information, DB2 would use a default filter factor of 1/5 (1/COLCARDF), or 20%, to estimate the number of rows that qualify for the predicate AGE_CATEGORY='ADULT'. However, the actual frequency of that age category is 40%. Thus, the number of qualifying rows is underestimated by 50%.

When collecting statistics about indexes, you can specify the KEYCARD option of RUNSTATS to collect cardinality statistics on the specified indexes. You can also specify the FREQVAL option with KEYCARD to control whether distribution statistics are collected, and for how many concatenated index columns. By default, distribution statistics are collected on the first column of each index for the 10 most frequently occurring values. FIRSTKEYCARDF and FULLKEYCARDF are also collected by default. The value of FULLKEYCARDF that RUNSTATS generates for a DPSI-type index is an estimate that is determined by a sampling method. If you know a more accurate number for FULLKEYCARDF, you can supply it by updating the catalog.

When collecting statistics at the table level, you can specify the COLUMN option of RUNSTATS to collect cardinality statistics on just the specified columns. You can also use the COLGROUP option to specify a group of columns for which to collect cardinality statistics. If you use the FREQVAL option with COLGROUP, you can also collect distribution statistics for the column group.

To limit the resources that are required to collect statistics, collect only the column cardinality and frequency statistics that have changed. For example, a column on GENDER is likely to have a COLCARDF of 2, with M and F as the possible values. It is unlikely that the cardinality for this column will ever change. The distribution of the values in the column might or might not change often, depending on the volatility of the data.

Recommendation: If query performance is not satisfactory, consider the following actions (a sample RUNSTATS control statement appears after the notes below):
v Collect cardinality statistics on all columns that are used as predicates in a WHERE clause.
v Collect frequencies for all columns with a low cardinality that are used as COL op literal predicates.
v Collect frequencies for a column when the column can contain default data, the default data is skewed, and the column is used as a COL op literal predicate.
v Collect KEYCARD on all candidate indexes.
v Collect column group statistics on all join columns.

Also note the following:
v LOW2KEY and HIGH2KEY columns are limited to storing the first 2000 bytes of a key value. If the column is nullable, values are limited to 1999 bytes.
v The closer SYSINDEXES.CLUSTERRATIOF is to 100% (a value of 1), the more closely the ordering of the index entries matches the physical ordering of the table rows. Refer to Figure 103 on page 922 to see how an index with a high cluster ratio differs from an index with a low cluster ratio.
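For example, the following RUNSTATS control statement collects column-group distribution statistics and index cardinality statistics in one pass. It is only a sketch of how the COLGROUP, FREQVAL, and KEYCARD options fit together; the database, table space, table, and index names are hypothetical placeholders, so substitute your own objects:

RUNSTATS TABLESPACE DSNDB04.EMPTS
  TABLE(USER1.EMP)
    COLGROUP(AGE_CATEGORY) FREQVAL COUNT 10
  INDEX(USER1.XEMP1
    KEYCARD
    FREQVAL NUMCOLS 2 COUNT 10)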
For information about using the RUNSTATS utility, see Part 2 of DB2 Utility Guide and Reference.
If you run RUNSTATS for separate partitions of a table space, DB2 uses the results to update the aggregate statistics for the entire table space. Either run RUNSTATS once on the entire object before collecting statistics on separate partitions, or use the appropriate option to ensure that the statistics are aggregated appropriately, especially if some partitions are not loaded with data. For recommendations about running RUNSTATS on separate partitions, see Gathering monitor statistics and update statistics on page 916.
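As a sketch of that approach (the object names are hypothetical), you might collect statistics once for the entire table space and later refresh a single partition:

RUNSTATS TABLESPACE DSNDB04.EMPTS TABLE(ALL) INDEX(ALL)

RUNSTATS TABLESPACE DSNDB04.EMPTS PART 3 TABLE(ALL)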
History statistics
Several catalog tables provide historical statistics for other catalog tables. These catalog history tables include:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
v SYSIBM.SYSINDEXES_HIST
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSINDEXSTATS_HIST
v SYSIBM.SYSLOBSTATS_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSTABLES_HIST
v SYSIBM.SYSTABSTATS_HIST
For example, SYSIBM.SYSTABLES_HIST provides statistics for activity in SYSIBM.SYSTABLES, SYSIBM.SYSTABLEPART_HIST provides statistics for activity in SYSIBM.SYSTABLEPART, and so on. When DB2 adds or changes rows in a catalog table, DB2 might also write information about the rows to the corresponding catalog history table. Although the catalog history tables are not identical to their counterpart tables, they do contain the same columns for access path information and space utilization information. The history statistics provide a way to study trends, to determine when utilities, such as REORG, should be run for maintenance, and to aid in space management. Table 164 lists the catalog data that is collected for historical statistics. For information on how to gather these statistics, see Gathering monitor statistics and update statistics on page 916.
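For example, assuming that the history tables carry the same identifying columns (CREATOR and NAME) and a STATSTIME timestamp like their counterpart catalog tables, a query such as the following sketch shows how the row and page counts of one table have trended over time; the creator and table names are placeholders:

SELECT NAME, CARDF, NPAGESF, STATSTIME
  FROM SYSIBM.SYSTABLES_HIST
  WHERE CREATOR = 'USER1'
    AND NAME = 'EMP'
  ORDER BY STATSTIME;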
Table 164. Catalog data collected for historical statistics. Each entry shows whether the column provides access path statistics (note 1) and whether it provides space statistics, followed by a description.

SYSIBM.SYSCOLDIST_HIST

CARDF (access path statistics: Yes; space statistics: No)
  Number of distinct values gathered.
COLGROUPCOLNO (access path statistics: Yes; space statistics: No)
  Identifies the columns involved in multi-column statistics.
COLVALUE (access path statistics: Yes; space statistics: No)
  Frequently occurring value in the key distribution.
FREQUENCYF (access path statistics: Yes; space statistics: No)
  A number which, when multiplied by 100, gives the percentage of rows that contain the value of COLVALUE.
NUMCOLUMNS (access path statistics: Yes; space statistics: No)
  Number of columns involved in multi-column statistics.
TYPE (access path statistics: Yes; space statistics: No)
  Type of statistics gathered, either cardinality (C) or frequent value (F).

SYSIBM.SYSCOLUMNS_HIST

COLCARDF (access path statistics: Yes; space statistics: No)
  Estimated number of distinct values in the column.
HIGH2KEY (access path statistics: Yes; space statistics: No)
  Second highest value of the column, or blank.
LOW2KEY (access path statistics: Yes; space statistics: No)
  Second lowest value of the column, or blank.
SYSIBM.SYSINDEXES_HIST

CLUSTERING (access path statistics: Yes; space statistics: No)
  Whether the index was created with CLUSTER.
CLUSTERRATIOF (access path statistics: Yes; space statistics: No)
  A number which, when multiplied by 100, gives the percentage of rows in clustering order.
FIRSTKEYCARDF (access path statistics: Yes; space statistics: No)
  Number of distinct values in the first key column.
FULLKEYCARDF (access path statistics: Yes; space statistics: No)
  Number of distinct values in the full key.
NLEAF (access path statistics: Yes; space statistics: No)
  Number of active leaf pages.
NLEVELS (access path statistics: Yes; space statistics: No)
  Number of levels in the index tree.

SYSIBM.SYSINDEXPART_HIST

CARDF (access path statistics: No; space statistics: Yes)
  Number of rows or LOBs referenced.
DSNUM (access path statistics: No; space statistics: Yes)
  Number of data sets.
EXTENTS (access path statistics: No; space statistics: Yes)
  Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FAROFFPOSF (access path statistics: No; space statistics: Yes)
  Number of rows referenced far from the optimal position.
LEAFDIST (access path statistics: No; space statistics: Yes)
  100 times the number of pages between successive leaf pages.
LEAFFAR (access path statistics: No; space statistics: Yes)
  Number of leaf pages located physically far away from previous leaf pages for successive active leaf pages accessed in an index scan.
LEAFNEAR (access path statistics: No; space statistics: Yes)
  Number of leaf pages located physically near previous leaf pages for successive active leaf pages.
NEAROFFPOSF (access path statistics: No; space statistics: Yes)
  Number of rows referenced near but not at the optimal position.
PQTY (access path statistics: No; space statistics: Yes)
  Primary space allocation in 4-KB blocks for the data set.
PSEUDO_DEL_ENTRIES (access path statistics: No; space statistics: Yes)
  Number of pseudo-deleted keys.
SECQTYI (access path statistics: No; space statistics: Yes)
  Secondary space allocation in 4-KB blocks for the data set.
SPACEF (access path statistics: No; space statistics: Yes)
  Disk storage in KB.

SYSIBM.SYSINDEXSTATS_HIST

CLUSTERRATIO (access path statistics: Yes; space statistics: No)
  A number which, when multiplied by 100, gives the percentage of rows in clustering order.
FIRSTKEYCARDF (access path statistics: Yes; space statistics: No)
  Number of distinct values of the first key column.
FULLKEYCARDF (access path statistics: Yes; space statistics: No)
  Number of distinct values of the full key.
KEYCOUNTF (access path statistics: Yes; space statistics: No)
  Total number of rows in the partition.
NLEAF (access path statistics: Yes; space statistics: No)
  Number of leaf pages.
NLEVELS (access path statistics: Yes; space statistics: No)
  Number of levels in the index tree.

SYSIBM.SYSLOBSTATS_HIST

FREESPACE (access path statistics: No; space statistics: Yes)
  The number of KB of available space in the LOB table space.
ORGRATIO (access path statistics: No; space statistics: Yes)
  The percentage of organization in the LOB table space. A value of 100 indicates perfect organization of the LOB table space. A value of 1 indicates that the LOB table space is disorganized. A value of 0.00 indicates that the LOB table space is totally disorganized. An empty table space has an ORGRATIO value of 100.00.

SYSIBM.SYSTABLEPART_HIST

CARDF (access path statistics: No; space statistics: Yes)
  Number of rows in the table space or partition.
DSNUM (access path statistics: No; space statistics: Yes)
  Number of data sets.
EXTENTS (access path statistics: No; space statistics: Yes)
  Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FARINDREF (access path statistics: No; space statistics: Yes)
  Number of rows relocated far from their original position.
NEARINDREF (access path statistics: No; space statistics: Yes)
  Number of rows relocated near their original position.
PAGESAVE (access path statistics: No; space statistics: Yes)
  Percentage of pages saved by data compression.
PERCACTIVE (access path statistics: No; space statistics: Yes)
  Percentage of space occupied by active pages.
PERCDROP (access path statistics: No; space statistics: Yes)
  Percentage of space occupied by pages from dropped tables.
PQTY (access path statistics: No; space statistics: Yes)
  Primary space allocation in 4-KB blocks for the data set.
SECQTYI (access path statistics: No; space statistics: Yes)
  Secondary space allocation in 4-KB blocks for the data set.
SPACEF (access path statistics: No; space statistics: Yes)
  The number of KB of space currently used.

SYSIBM.SYSTABLES_HIST

AVGROWLEN (access path statistics: No; space statistics: Yes)
  Average row length of the table specified in the table space.
CARDF (access path statistics: Yes; space statistics: No)
  Number of rows in the table or number of LOBs in an auxiliary table.
NPAGESF (access path statistics: Yes; space statistics: No)
  Number of pages used by the table.
PCTPAGES (access path statistics: Yes; space statistics: Yes)
  Percentage of pages that contain rows.
PCTROWCOMP (access path statistics: Yes; space statistics: No)
  Percentage of active rows compressed.

SYSIBM.SYSTABSTATS_HIST

CARDF (access path statistics: Yes; space statistics: No)
  Number of rows in the partition.
NPAGES (access path statistics: Yes; space statistics: No)
  Number of pages on which rows of the partition appear.
Note:
1. The access path statistics in the history tables are collected for historical purposes and are not used for access path selection.
ROLLUP field on installation panel DSNTIPO is YES). If you do not use the keyword or installation panel field setting to force the roll-up of the aggregate statistics, you must run utilities once on the entire object before running utilities on separate partitions.

Collecting history statistics: When you collect statistics with RUNSTATS or gather them inline with the LOAD, REBUILD, or REORG utilities, you can use the HISTORY option to collect history statistics. With the HISTORY option, the utility stores the statistics that were updated in the catalog tables in history records in the corresponding catalog history tables. (For information on the catalog data that is collected for history statistics, see Table 164 on page 913.) To remove old statistics that are no longer needed in the catalog history tables, use the MODIFY STATISTICS utility or the SQL DELETE statement. Deleting outdated information from the catalog history tables can help improve the performance of processes that access the data in these tables.

Recommendations for performance (sample statements follow this list):
v To reduce processor consumption when collecting column statistics, use the SAMPLE option. The SAMPLE option allows you to specify a percentage of the rows to examine for column statistics. Consider the effect on access path selection before choosing sampling. There is likely to be little or no effect on access path selection if the access path has a matching index scan and very few predicates. However, if the access path involves joins of many tables with matching index scans and many predicates, the amount of sampling can affect the access path. In these cases, start with 25 percent sampling and see whether there is a negative effect on access path selection. If not, consider reducing the sampling percentage until you find the percentage that gives you the best reduction in processing time without negatively affecting the access path.
v To reduce the elapsed time of gathering statistics immediately after a LOAD, REBUILD INDEX, or REORG, gather statistics inline with those utilities by using the STATISTICS option.
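The following statements sketch these options together. The object names and the 180-day cutoff are hypothetical choices, and the DELETE statement assumes that the history rows carry a STATSTIME timestamp, as described for the history tables above:

RUNSTATS TABLESPACE DSNDB04.EMPTS
  TABLE(ALL) SAMPLE 25
  INDEX(ALL)
  UPDATE ALL
  HISTORY ALL

DELETE FROM SYSIBM.SYSTABLES_HIST
  WHERE STATSTIME < CURRENT TIMESTAMP - 180 DAYS;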
v COLCARDF and FIRSTKEYCARDF. For a column that is the first column of an index, those two values are equal. If the index has only that one column, the two values are also equal to the value of FULLKEYCARDF.
v COLCARDF, LOW2KEY, and HIGH2KEY. If the COLCARDF value is not -1, DB2 assumes that statistics exist for the column. In particular, it uses the values of LOW2KEY and HIGH2KEY in calculating filter factors. If COLCARDF = 1 or if COLCARDF = 2, DB2 uses HIGH2KEY and LOW2KEY as domain statistics, and generates frequencies on HIGH2KEY and LOW2KEY.
v CARDF in SYSCOLDIST. CARDF is related to COLCARDF in SYSIBM.SYSCOLUMNS and to FIRSTKEYCARDF and FULLKEYCARDF in SYSIBM.SYSINDEXES. CARDF must fall within both of the following ranges:
  - Between FIRSTKEYCARDF and FULLKEYCARDF, if the index contains the same set of columns
  - Between MAX(COLCARDF of each column in the column group) and the product of multiplying together the COLCARDF of each column in the column group
Example: Assume the following set of statistics:
CARDF = 1000
NUMCOLUMNS = 3
COLGROUPCOLNO = 2,3,5

INDEX1 on columns 2,3,5,7,8
FIRSTKEYCARDF = 100
FULLKEYCARDF = 10000

column 2 COLCARDF = 100
column 3 COLCARDF = 50
column 5 COLCARDF = 10

CARDF must be between 100 and 10000.
The range between FIRSTKEYCARDF and FULLKEYCARDF is 100 to 10 000. The maximum COLCARDF value is 100, and the product of multiplying the COLCARDF values together is 50 000 (100 x 50 x 10). Thus, the allowable range is between 100 and 10 000.
v CARDF in SYSTABLES. CARDF must be equal to or larger than any of the other cardinalities, such as COLCARDF, FIRSTKEYCARDF, FULLKEYCARDF, and CARDF in SYSIBM.SYSCOLDIST.
v FREQUENCYF and COLCARDF or CARDF. The number of frequencies collected must be less than or equal to COLCARDF for the column or CARDF for the column group.
v FREQUENCYF. The sum of frequencies collected for a column or column group must be less than or equal to 1. (A query that checks this rule follows.)
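As an illustration of the last rule, a query such as the following flags column groups whose collected frequencies sum to more than 1. It is a sketch that assumes frequent-value rows in SYSIBM.SYSCOLDIST are marked with TYPE = 'F', as described earlier in this chapter:

SELECT TBOWNER, TBNAME, NAME, SUM(FREQUENCYF) AS FREQ_SUM
  FROM SYSIBM.SYSCOLDIST
  WHERE TYPE = 'F'
  GROUP BY TBOWNER, TBNAME, NAME
  HAVING SUM(FREQUENCYF) > 1;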
End of Product-sensitive Programming Interface

If the statistics in the DB2 catalog no longer correspond to the true organization of your data, you should reorganize the necessary tables, run RUNSTATS, and rebind the plans or packages that contain any affected queries. This includes the DB2 catalog table spaces as well as user table spaces. See When to reorganize indexes and table spaces on page 924 and the description of REORG in Part 2 of DB2 Utility Guide and Reference for information on how to determine which table spaces and indexes qualify for reorganization. After these steps, DB2 has accurate information to choose appropriate access paths for your queries. Use the EXPLAIN statement to verify the chosen access paths for your queries.
Space utilization statistics can also help you ensure that access paths that use the index or table space are as efficient as possible. By reducing gaps between leaf pages in an index, and by ensuring that data pages are close together, you can reduce sequential I/Os.

Recommendation: To provide the most accurate data, gather statistics routinely to provide data about table spaces and indexes over a period of time. Run RUNSTATS some time after reorganizing the data or indexes; by gathering the statistics after you reorganize, you ensure that access paths reflect a more average state of the data.

This section describes the following topics:
v How clustering affects access path selection
v What other statistics provide index costs on page 923
v When to reorganize indexes and table spaces on page 924
v Whether to rebind after gathering statistics on page 927
Figure 103. A clustered index scan. This figure assumes that the index is 100% clustered. (The original figure shows a root page, intermediate pages, leaf pages, and data pages.)
Figure 104. A nonclustered index scan. In some cases, DB2 can access the data pages in order even when a nonclustered index is used. (The original figure shows a root page, intermediate pages, leaf pages, and data pages.)
NLEVELS: The number of levels in the index tree. NLEVELS is another portion of the cost of traversing the index. The same conditions as for NLEAF apply: the smaller the number, the lower the cost.
Reorganizing Indexes
To understand index organization, you must understand the LEAFNEAR and LEAFFAR columns of SYSIBM.SYSINDEXPART. This section describes how to interpret those values and then describes some rules of thumb for determining when to reorganize the index.
LEAFNEAR and LEAFFAR columns: The LEAFNEAR and LEAFFAR columns of SYSIBM.SYSINDEXPART measure the disorganization of physical leaf pages by indicating the number of pages that are not in an optimal position. Leaf pages can have page gaps whenever index pages are deleted or whenever an index leaf page split is caused by an insert that cannot fit onto a full page. If the key cannot fit on the page, DB2 moves half the index entries onto a new page, which might be far away from the home page. Figure 105 shows the logical and physical views of an index.
Figure 105. Logical and physical views of an index in which LEAFNEAR=1 and LEAFFAR=2. (The original figure shows leaf pages 13 (GARCIA), 16 (HANSON), 17 (DOYLE), 78 (FORESTER), and 79 (JACKSON), with the jumps between them measured against the prefetch quantity.)
The logical view at the top of the figure shows that for an index scan, four leaf pages need to be scanned to access the data for FORESTER through JACKSON. The physical view at the bottom of the figure shows how the pages are physically accessed. The first page is at physical leaf page 78, and the other leaf pages are at physical locations 79, 13, and 16. A jump forward or backward of more than one page represents non-optimal physical ordering. LEAFNEAR represents the number of jumps within the prefetch quantity, and LEAFFAR represents the number of jumps outside the prefetch quantity. In this example, assuming that the prefetch quantity is 32, there are two jumps outside the prefetch quantity: one from page 78 to page 13, and one from page 16 to page 79. Thus, LEAFFAR is 2. Because of the jump within the prefetch quantity from page 13 to page 16, LEAFNEAR is 1. LEAFNEAR has a smaller impact than LEAFFAR because the LEAFNEAR pages, which are located within the prefetch quantity, are typically read by prefetch without incurring extra I/Os.
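To examine these values for your own indexes, a catalog query along the following lines ranks index partitions by LEAFFAR; the creator name is a hypothetical placeholder:

SELECT IXCREATOR, IXNAME, PARTITION, LEAFNEAR, LEAFFAR
  FROM SYSIBM.SYSINDEXPART
  WHERE IXCREATOR = 'USER1'
  ORDER BY LEAFFAR DESC;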
The optimal value of the LEAFNEAR and LEAFFAR catalog columns is zero. However, immediately after you run REORG and gather statistics, LEAFNEAR for a large index might be greater than zero. A non-zero value could be caused by free pages that result from the FREEPAGE option on CREATE INDEX, non-leaf pages, or various system pages; the jumps over these pages are included in LEAFNEAR.

Recommendations: Fields in the SYSIBM.INDEXSPACESTATS table can help you determine when to reorganize an index. Consider running REORG INDEX when any of the following conditions, based on values in that table, is true (a sample query that applies these thresholds appears after the following lists):
v REORGPSEUDODELETES/TOTALENTRIES > 10% in a non-data-sharing environment, or REORGPSEUDODELETES/TOTALENTRIES > 5% in a data sharing environment
v REORGLEAFFAR/NACTIVE > 10%
v REORGINSERTS/TOTALENTRIES > 25%
v REORGDELETES/TOTALENTRIES > 25%
v REORGAPPENDINSERT/TOTALENTRIES > 20%
v EXTENTS > 254
You should also consider running REORG INDEX if either of the following conditions is true:
v The index is in the advisory REORG-pending state (AREO*) as a result of an ALTER statement.
v The index is in the advisory REBUILD-pending state (ARBDP) as a result of an ALTER statement.
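The following query is one way to express those rules of thumb in SQL against SYSIBM.INDEXSPACESTATS. It is a sketch: the ratio tests are rewritten as multiplications to avoid division by zero, and the non-data-sharing threshold of 10% is assumed for pseudo-deletes:

SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.INDEXSPACESTATS
  WHERE (TOTALENTRIES > 0
         AND (REORGPSEUDODELETES * 100 > TOTALENTRIES * 10
          OR REORGINSERTS * 100 > TOTALENTRIES * 25
          OR REORGDELETES * 100 > TOTALENTRIES * 25
          OR REORGAPPENDINSERT * 100 > TOTALENTRIES * 20))
     OR (NACTIVE > 0 AND REORGLEAFFAR * 100 > NACTIVE * 10)
     OR EXTENTS > 254;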
You should also consider running REORG on a table space if one of the following conditions is true:
v The table space is in the advisory REORG-pending state (AREO*) as a result of an ALTER TABLE statement.
v An index on a table in the table space is in the advisory REBUILD-pending state (ARBDP) as a result of an ALTER TABLE statement.

Consider rebinding when any of the following conditions is true:
v NACTIVEF changes more than 20% from the previous value.
v The range from HIGH2KEY to LOW2KEY changes more than 20% from the range previously recorded.
v Cardinality changes more than 20% from its previous value.
v Distribution statistics change the majority of the frequent column values.
  AND TBL.CREATOR IN (creator_list)
  AND TBL.NAME IN (table_list)
  AND (NACTIVEF >= 0 OR NACTIVE >= 0);

SELECT 'UPDATE SYSIBM.SYSTABLES SET CARDF='
  CONCAT STRIP(CHAR(CARDF))
  CONCAT ',NPAGES=' CONCAT STRIP(CHAR(NPAGES))
  CONCAT ',PCTROWCOMP=' CONCAT STRIP(CHAR(PCTROWCOMP))
  CONCAT ' WHERE NAME=''' CONCAT NAME
  CONCAT ''' AND CREATOR =''' CONCAT CREATOR CONCAT '''*'
FROM SYSIBM.SYSTABLES
  WHERE CREATOR IN (creator_list)
  AND NAME IN (table_list)
  AND CARDF >= 0;

SELECT 'UPDATE SYSIBM.SYSINDEXES SET FIRSTKEYCARDF='
  CONCAT STRIP(CHAR(FIRSTKEYCARDF))
  CONCAT ',FULLKEYCARDF=' CONCAT STRIP(CHAR(FULLKEYCARDF))
  CONCAT ',NLEAF=' CONCAT STRIP(CHAR(NLEAF))
  CONCAT ',NLEVELS=' CONCAT STRIP(CHAR(NLEVELS))
  CONCAT ',CLUSTERRATIO=' CONCAT STRIP(CHAR(CLUSTERRATIO))
  CONCAT ',CLUSTERRATIOF=' CONCAT STRIP(CHAR(CLUSTERRATIOF))
  CONCAT ' WHERE NAME=''' CONCAT NAME
  CONCAT ''' AND CREATOR =''' CONCAT CREATOR CONCAT '''*'
FROM SYSIBM.SYSINDEXES
  WHERE TBCREATOR IN (creator_list)
  AND TBNAME IN (table_list)
  AND FULLKEYCARDF >= 0;

SELECT 'UPDATE SYSIBM.SYSCOLUMNS SET COLCARDF='
  CONCAT STRIP(CHAR(COLCARDF))
  CONCAT ',HIGH2KEY= X''' CONCAT HEX(HIGH2KEY)
  CONCAT ''',LOW2KEY= X''' CONCAT HEX(LOW2KEY)
  CONCAT ''' WHERE TBNAME=''' CONCAT TBNAME
  CONCAT ''' AND COLNO=' CONCAT STRIP(CHAR(COLNO))
  CONCAT ' AND TBCREATOR =''' CONCAT TBCREATOR CONCAT '''*'
FROM SYSIBM.SYSCOLUMNS
  WHERE TBCREATOR IN (creator_list)
  AND TBNAME IN (table_list)
  AND COLCARDF >= 0;
SYSTABSTATS and SYSCOLDIST require deletes and inserts. Delete statistics from SYSTABSTATS on the test subsystem for the specified tables by using the following statement:
DELETE FROM (TEST_SUBSYSTEM).SYSTABSTATS
  WHERE OWNER IN (creator_list)
  AND NAME IN (table_list);
Use INSERT statements to repopulate SYSTABSTATS with production statistics that are generated from the following statement:
SELECT 'INSERT INTO SYSIBM.SYSTABSTATS'
  CONCAT '(CARD,NPAGES,PCTPAGES,NACTIVE,PCTROWCOMP'
  CONCAT ',STATSTIME,IBMREQD,DBNAME,TSNAME,PARTITION'
  CONCAT ',OWNER,NAME,CARDF) VALUES('
  CONCAT STRIP(CHAR(CARD)) CONCAT ' ,'
  CONCAT STRIP(CHAR(NPAGES)) CONCAT ' ,'
  CONCAT STRIP(CHAR(PCTPAGES)) CONCAT ' ,'
  CONCAT STRIP(CHAR(NACTIVE)) CONCAT ' ,'
  CONCAT STRIP(CHAR(PCTROWCOMP)) CONCAT ' ,'
  CONCAT '''' CONCAT CHAR(STATSTIME) CONCAT ''' ,'
  CONCAT '''' CONCAT IBMREQD CONCAT ''' ,'
  CONCAT '''' CONCAT STRIP(DBNAME) CONCAT ''' ,'
  CONCAT '''' CONCAT STRIP(TSNAME) CONCAT ''' ,'
  CONCAT STRIP(CHAR(PARTITION)) CONCAT ' ,'
  CONCAT '''' CONCAT STRIP(OWNER) CONCAT ''' ,'
  CONCAT '''' CONCAT STRIP(NAME) CONCAT ''' ,'
  CONCAT STRIP(CHAR(CARDF)) CONCAT ')*'
FROM SYSIBM.SYSTABSTATS
  WHERE OWNER IN (creator_list)
  AND NAME IN (table_list);
Delete statistics from SYSCOLDIST on the test subsystem for the specified tables by using the following statement:
DELETE FROM (TEST_SUBSYSTEM).SYSCOLDIST
  WHERE TBOWNER IN (creator_list)
  AND TBNAME IN (table_list);
Use INSERT statements to repopulate SYSCOLDIST with production statistics that are generated from the following statement:
SELECT 'INSERT INTO SYSIBM.SYSCOLDIST '
  CONCAT '(FREQUENCY,STATSTIME,IBMREQD,TBOWNER'
  CONCAT ',TBNAME,NAME,COLVALUE,TYPE,CARDF,COLGROUPCOLNO'
  CONCAT ',NUMCOLUMNS,FREQUENCYF) VALUES( '
  CONCAT STRIP(CHAR(FREQUENCY)) CONCAT ' ,'
  CONCAT '''' CONCAT CHAR(STATSTIME) CONCAT ''' ,'
  CONCAT '''' CONCAT IBMREQD CONCAT ''' ,'
  CONCAT '''' CONCAT STRIP(TBOWNER) CONCAT ''' ,'
  CONCAT '''' CONCAT STRIP(TBNAME) CONCAT ''','
  CONCAT '''' CONCAT STRIP(NAME) CONCAT ''' ,'
  CONCAT 'X''' CONCAT STRIP(HEX(COLVALUE)) CONCAT ''' ,'
  CONCAT '''' CONCAT TYPE CONCAT ''' ,'
  CONCAT STRIP(CHAR(CARDF)) CONCAT ' ,'
  CONCAT 'X''' CONCAT STRIP(HEX(COLGROUPCOLNO)) CONCAT ''' ,'
  CONCAT CHAR(NUMCOLUMNS) CONCAT ' ,'
  CONCAT STRIP(CHAR(FREQUENCYF)) CONCAT ')*'
FROM SYSIBM.SYSCOLDIST
  WHERE TBOWNER IN (creator_list)
  AND TBNAME IN (table_list);
Note about SPUFI:
v If you use SPUFI to execute the preceding SQL statements, you might need to increase the default maximum character column width to avoid truncation.
v Asterisks (*) appear in the examples to avoid having the semicolon interpreted as the end of the SQL statement. Edit the result to change each asterisk to a semicolon.

Access path differences from test to production: When you bind applications on the test system with production statistics, access paths should be similar to, but can still differ from, what you see when the same query is bound on your production system. The access paths from test to production could be different for the following reasons:
v The processor models are different.
v The numbers of processors are different. (Differences in the number of processors can affect the degree of parallelism that is obtained.)
v The buffer pool sizes are different.
v The RID pool sizes are different.
v Data in SYSIBM.SYSCOLDIST is mismatched. (This mismatch occurs only if some of the previously mentioned steps are not followed exactly.)
v The service levels are different.
v The values of optimization subsystem parameters, such as STARJOIN, NPGTHRSH, and PARAMDEG (MAX DEGREE on installation panel DSNTIP4), are different.
v The use of techniques such as optimization hints and volatile tables is different.

Tools to help: If your production system is accessible from your test system, you can use DB2 PM EXPLAIN on your test system to request EXPLAIN information from your production system. This request can reduce the need to simulate a production system by updating the catalog. You can also use the DB2 Visual Explain feature to display the current PLAN_TABLE output or the graphed access paths for statements within any particular subsystem from your workstation environment. For example, if you have your test system on one subsystem and your production system on another subsystem, you can visually compare the PLAN_TABLE outputs or access paths simultaneously with some window or view manipulation. You can then access the catalog statistics for certain referenced objects of an access path from either of the displayed PLAN_TABLEs or access path graphs. For information on using Visual Explain, see the DB2 Visual Explain online help.
triggers, and utilities. For instance, you can use DB2 Estimator to estimate the impact of adding or dropping an index from a table, to estimate the change in response time from adding processor resources, and to estimate the amount of time a utility job will take to run. DB2 Estimator for Windows can be downloaded from the Web.
v DB2-supplied EXPLAIN stored procedure. Users with authority to run EXPLAIN directly can obtain access path information by calling the DB2-supplied EXPLAIN stored procedure. For more information about the DB2-supplied EXPLAIN stored procedure, see Appendix J, DB2-supplied stored procedures, on page 1261.

Chapter overview: This chapter includes the following topics:
v Obtaining PLAN_TABLE information from EXPLAIN
v Asking questions about data access on page 943
v Interpreting access to a single table on page 951
v Interpreting access to two or more tables (join) on page 959
v Interpreting data prefetch on page 973
v Determining sort activity on page 977
v Processing for views and nested table expressions on page 979
v Estimating a statement's cost on page 985
See also Chapter 35, Parallel operations and query performance, on page 991.
Creating PLAN_TABLE
Before you can use EXPLAIN, a PLAN_TABLE must be created to hold the results of EXPLAIN. A copy of the statements that are needed to create the table is in the DB2 sample library, under the member name DSNTESC. (Unless you need the information that they provide, you do not need to create a function table or statement table to use EXPLAIN.)

Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables must be created with CCSID UNICODE. This includes, but is not limited to, mixed data strings that are used for tokens, SQL statements, application names, program names, correlation names, and collection IDs.

Figure 106 on page 934 shows the most current format of a plan table, which consists of 58 columns. Table 165 on page 936 shows the content of each column.
CREATE TABLE userid.PLAN_TABLE
  (QUERYNO            INTEGER      NOT NULL,
   QBLOCKNO           SMALLINT     NOT NULL,
   APPLNAME           CHAR(8)      NOT NULL,
   PROGNAME           VARCHAR(128) NOT NULL,
   PLANNO             SMALLINT     NOT NULL,
   METHOD             SMALLINT     NOT NULL,
   CREATOR            VARCHAR(128) NOT NULL,
   TNAME              VARCHAR(128) NOT NULL,
   TABNO              SMALLINT     NOT NULL,
   ACCESSTYPE         CHAR(2)      NOT NULL,
   MATCHCOLS          SMALLINT     NOT NULL,
   ACCESSCREATOR      VARCHAR(128) NOT NULL,
   ACCESSNAME         VARCHAR(128) NOT NULL,
   INDEXONLY          CHAR(1)      NOT NULL,
   SORTN_UNIQ         CHAR(1)      NOT NULL,
   SORTN_JOIN         CHAR(1)      NOT NULL,
   SORTN_ORDERBY      CHAR(1)      NOT NULL,
   SORTN_GROUPBY      CHAR(1)      NOT NULL,
   SORTC_UNIQ         CHAR(1)      NOT NULL,
   SORTC_JOIN         CHAR(1)      NOT NULL,
   SORTC_ORDERBY      CHAR(1)      NOT NULL,
   SORTC_GROUPBY      CHAR(1)      NOT NULL,
   TSLOCKMODE         CHAR(3)      NOT NULL,
   TIMESTAMP          CHAR(16)     NOT NULL,
   REMARKS            VARCHAR(762) NOT NULL,
   PREFETCH           CHAR(1)      NOT NULL WITH DEFAULT,
   COLUMN_FN_EVAL     CHAR(1)      NOT NULL WITH DEFAULT,
   MIXOPSEQ           SMALLINT     NOT NULL WITH DEFAULT,
   VERSION            VARCHAR(64)  NOT NULL WITH DEFAULT,
   COLLID             VARCHAR(128) NOT NULL WITH DEFAULT,
   ACCESS_DEGREE      SMALLINT,
   ACCESS_PGROUP_ID   SMALLINT,
   JOIN_DEGREE        SMALLINT,
   JOIN_PGROUP_ID     SMALLINT,
   SORTC_PGROUP_ID    SMALLINT,
   SORTN_PGROUP_ID    SMALLINT,
   PARALLELISM_MODE   CHAR(1),
   MERGE_JOIN_COLS    SMALLINT,
   CORRELATION_NAME   VARCHAR(128),
   PAGE_RANGE         CHAR(1)      NOT NULL WITH DEFAULT,
   JOIN_TYPE          CHAR(1)      NOT NULL WITH DEFAULT,
   GROUP_MEMBER       CHAR(8)      NOT NULL WITH DEFAULT,
   IBM_SERVICE_DATA   VARCHAR(254) FOR BIT DATA NOT NULL WITH DEFAULT,
   WHEN_OPTIMIZE      CHAR(1)      NOT NULL WITH DEFAULT,
   QBLOCK_TYPE        CHAR(6)      NOT NULL WITH DEFAULT,
   BIND_TIME          TIMESTAMP    NOT NULL WITH DEFAULT,
   OPTHINT            VARCHAR(128) NOT NULL WITH DEFAULT,
   HINT_USED          VARCHAR(128) NOT NULL WITH DEFAULT,
   PRIMARY_ACCESSTYPE CHAR(1)      NOT NULL WITH DEFAULT,
   PARENT_QBLOCKNO    SMALLINT     NOT NULL WITH DEFAULT,
   TABLE_TYPE         CHAR(1),
   TABLE_ENCODE       CHAR(1)      NOT NULL WITH DEFAULT,
   TABLE_SCCSID       SMALLINT     NOT NULL WITH DEFAULT,
   TABLE_MCCSID       SMALLINT     NOT NULL WITH DEFAULT,
   TABLE_DCCSID       SMALLINT     NOT NULL WITH DEFAULT,
   ROUTINE_ID         INTEGER      NOT NULL WITH DEFAULT,
   CTEREF             SMALLINT     NOT NULL WITH DEFAULT,
   STMTTOKEN          VARCHAR(240))
  IN database-name.table-space-name
  CCSID EBCDIC;

Figure 106. 58-column format of PLAN_TABLE
Your plan table can use many other formats with fewer columns, as shown in Figure 107. However, use the 58-column format because it gives you the most information. If you alter an existing plan table with fewer than 58 columns to the 58-column format:
v If they exist, change the data type of these columns: PROGNAME, CREATOR, TNAME, ACCESSTYPE, ACCESSNAME, REMARKS, COLLID, CORRELATION_NAME, IBM_SERVICE_DATA, OPTHINT, and HINT_USED. Use the values shown in Figure 106 on page 934.
v Add the missing columns to the table. Use the column definitions shown in Figure 106 on page 934. For most added columns, specify NOT NULL WITH DEFAULT so that default values are included for the rows in the table. However, as the figure shows, certain columns do allow nulls. Do not specify those columns as NOT NULL WITH DEFAULT.
QUERYNO INTEGER NOT NULL
QBLOCKNO SMALLINT NOT NULL
APPLNAME CHAR(8) NOT NULL
PROGNAME CHAR(8) NOT NULL
PLANNO SMALLINT NOT NULL
METHOD SMALLINT NOT NULL
CREATOR CHAR(8) NOT NULL
TNAME CHAR(18) NOT NULL
TABNO SMALLINT NOT NULL
ACCESSTYPE CHAR(2) NOT NULL
MATCHCOLS SMALLINT NOT NULL
ACCESSCREATOR CHAR(8) NOT NULL
ACCESSNAME CHAR(18) NOT NULL
INDEXONLY CHAR(1) NOT NULL
SORTN_UNIQ CHAR(1) NOT NULL
SORTN_JOIN CHAR(1) NOT NULL
SORTN_ORDERBY CHAR(1) NOT NULL
SORTN_GROUPBY CHAR(1) NOT NULL
SORTC_UNIQ CHAR(1) NOT NULL
SORTC_JOIN CHAR(1) NOT NULL
SORTC_ORDERBY CHAR(1) NOT NULL
SORTC_GROUPBY CHAR(1) NOT NULL
TSLOCKMODE CHAR(3) NOT NULL
TIMESTAMP CHAR(16) NOT NULL
REMARKS VARCHAR(254) NOT NULL
----------------25 column format---------------
PREFETCH CHAR(1) NOT NULL WITH DEFAULT
COLUMN_FN_EVAL CHAR(1) NOT NULL WITH DEFAULT
MIXOPSEQ SMALLINT NOT NULL WITH DEFAULT
----------------28 column format---------------
VERSION VARCHAR(64) NOT NULL WITH DEFAULT
COLLID CHAR(18) NOT NULL WITH DEFAULT
----------------30 column format---------------
ACCESS_DEGREE SMALLINT
ACCESS_PGROUP_ID SMALLINT
JOIN_DEGREE SMALLINT
JOIN_PGROUP_ID SMALLINT
----------------34 column format---------------
SORTC_PGROUP_ID SMALLINT
SORTN_PGROUP_ID SMALLINT
PARALLELISM_MODE CHAR(1)
MERGE_JOIN_COLS SMALLINT
CORRELATION_NAME CHAR(18)
PAGE_RANGE CHAR(1) NOT NULL WITH DEFAULT
JOIN_TYPE CHAR(1) NOT NULL WITH DEFAULT
GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT
IBM_SERVICE_DATA VARCHAR(254) NOT NULL WITH DEFAULT
----------------43 column format---------------
WHEN_OPTIMIZE CHAR(1) NOT NULL WITH DEFAULT
QBLOCK_TYPE CHAR(6) NOT NULL WITH DEFAULT
BIND_TIME TIMESTAMP NOT NULL WITH DEFAULT
----------------46 column format---------------
OPTHINT CHAR(8) NOT NULL WITH DEFAULT
HINT_USED CHAR(8) NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1) NOT NULL WITH DEFAULT
----------------49 column format---------------
PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT
TABLE_TYPE CHAR(1)
----------------51 column format---------------

Figure 107. Formats of a plan table with fewer than 58 columns
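For example, to bring a 49-column plan table up to the 51-column format, you could add the two missing columns with ALTER TABLE statements such as the following; the qualifier userid is a placeholder, and the column definitions come from the figures above:

ALTER TABLE userid.PLAN_TABLE
  ADD PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT;
ALTER TABLE userid.PLAN_TABLE
  ADD TABLE_TYPE CHAR(1);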
Table 165 on page 936 shows the descriptions of the columns in PLAN_TABLE.
Table 165. Descriptions of columns in PLAN_TABLE

QUERYNO
  A number intended to identify the statement being explained. For a row produced by an EXPLAIN statement, specify the number in the QUERYNO clause. For a row produced by non-EXPLAIN statements, specify the number using the QUERYNO clause, which is an optional part of the SELECT, INSERT, UPDATE, and DELETE statement syntax. Otherwise, DB2 assigns a number based on the line number of the SQL statement in the source program. When the values of QUERYNO are based on the statement number in the source program, values greater than 32767 are reported as 0. However, in a very long program, the value is not guaranteed to be unique. If QUERYNO is not unique, the value of TIMESTAMP is unique.

QBLOCKNO
  A number that identifies each query block within a query. The values of the numbers are not in any particular order, nor are they necessarily consecutive.

APPLNAME
  The name of the application plan for the row. Applies only to embedded EXPLAIN statements executed from a plan or to statements explained when binding a plan. Blank if not applicable.

PROGNAME
  The name of the program or package containing the statement being explained. Applies only to embedded EXPLAIN statements and to statements explained as the result of binding a plan or package. Blank if not applicable.

PLANNO
  The number of the step in which the query indicated in QBLOCKNO was processed. This column indicates the order in which the steps were executed.

METHOD
  A number (0, 1, 2, 3, or 4) that indicates the join method used for the step:
  0 First table accessed, continuation of previous table accessed, or not used.
  1 Nested loop join. For each row of the present composite table, matching rows of a new table are found and joined.
  2 Merge scan join. The present composite table and the new table are scanned in the order of the join columns, and matching rows are joined.
  3 Sorts needed by ORDER BY, GROUP BY, SELECT DISTINCT, UNION, a quantified predicate, or an IN predicate. This step does not access a new table.
  4 Hybrid join. The current composite table is scanned in the order of the join-column rows of the new table. The new table is accessed using list prefetch.

CREATOR
  The creator of the new table accessed in this step; blank if METHOD is 3.

TNAME
  The name of a table, materialized query table, created or declared temporary table, materialized view, or materialized table expression. The value is blank if METHOD is 3. The column can also contain the name of a table in the form DSNWFQB(qblockno). DSNWFQB(qblockno) is used to represent the intermediate result of a UNION ALL or an outer join that is materialized. If a view is merged, the name of the view does not appear.

TABNO
  Values are for IBM use only.
ACCESSTYPE
  The method of accessing the new table:
  I By an index (identified in ACCESSCREATOR and ACCESSNAME)
  I1 By a one-fetch index scan
  M By a multiple index scan (followed by MX, MI, or MU)
  MI By an intersection of multiple indexes
  MU By a union of multiple indexes
  MX By an index scan on the index named in ACCESSNAME
  N By an index scan when the matching predicate contains the IN keyword
  R By a table space scan
  RW By a work file scan of the result of a materialized user-defined table function
  T By a sparse index (star join work files)
  V By buffers for an INSERT statement within a SELECT
  blank Not applicable to the current row

MATCHCOLS
  For ACCESSTYPE I, I1, N, or MX, the number of index keys used in an index scan; otherwise, 0.

ACCESSCREATOR
  For ACCESSTYPE I, I1, N, or MX, the creator of the index; otherwise, blank.

ACCESSNAME
  For ACCESSTYPE I, I1, N, or MX, the name of the index; otherwise, blank.

INDEXONLY
  Whether access to an index alone is enough to carry out the step, or whether data too must be accessed. Y=Yes; N=No. For exceptions, see Is the query satisfied using only the index? (INDEXONLY=Y) on page 946.

SORTN_UNIQ
  Whether the new table is sorted to remove duplicate rows. Y=Yes; N=No.

SORTN_JOIN
  Whether the new table is sorted for join method 2 or 4. Y=Yes; N=No.

SORTN_ORDERBY
  Whether the new table is sorted for ORDER BY. Y=Yes; N=No.

SORTN_GROUPBY
  Whether the new table is sorted for GROUP BY. Y=Yes; N=No.

SORTC_UNIQ
  Whether the composite table is sorted to remove duplicate rows. Y=Yes; N=No.

SORTC_JOIN
  Whether the composite table is sorted for join method 1, 2, or 4. Y=Yes; N=No.

SORTC_ORDERBY
  Whether the composite table is sorted for an ORDER BY clause or a quantified predicate. Y=Yes; N=No.

SORTC_GROUPBY
  Whether the composite table is sorted for a GROUP BY clause. Y=Yes; N=No.

TSLOCKMODE
  An indication of the mode of lock to be acquired on either the new table, or its table space or table space partitions. If the isolation can be determined at bind time, the values are:
  IS Intent share lock
  IX Intent exclusive lock
  S Share lock
  U Update lock
  X Exclusive lock
  SIX Share with intent exclusive lock
  N UR isolation; no lock
  If the isolation cannot be determined at bind time, then the lock mode determined by the isolation at run time is shown by the following values:
  NS For UR isolation, no lock; for CS, RS, or RR, an S lock.
  NIS For UR isolation, no lock; for CS, RS, or RR, an IS lock.
  NSS For UR isolation, no lock; for CS or RS, an IS lock; for RR, an S lock.
  SS For UR, CS, or RS isolation, an IS lock; for RR, an S lock.
  The data in this column is right justified. For example, IX appears as a blank followed by I followed by X. If the column contains a blank, then no lock is acquired.

TIMESTAMP
  Usually, the time at which the row is processed, to the last .01 second. If necessary, DB2 adds .01 second to the value to ensure that rows for two successive queries have different values.
REMARKS
  A field into which you can insert any character string of 762 or fewer characters.

PREFETCH
  Whether data pages are to be read in advance by prefetch:
  S Pure sequential prefetch
  L Prefetch through a page list
  D Possible candidate for dynamic prefetch
  blank Unknown, or no prefetch

COLUMN_FN_EVAL
  When an SQL aggregate function is evaluated. R = while the data is being read from the table or index; S = while performing a sort to satisfy a GROUP BY clause; blank = after data retrieval and after any sorts.

MIXOPSEQ
  The sequence number of a step in a multiple index operation:
  1, 2, ... n For the steps of the multiple index procedure (ACCESSTYPE is MX, MI, or MU)
  0 For any other rows (ACCESSTYPE is I, I1, M, N, R, or blank)

VERSION
  The version identifier for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable.

COLLID
  The collection ID for the package. Applies only to an embedded EXPLAIN statement that is executed from a package or to a statement that is explained when binding a package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.
Note: The following nine columns, from ACCESS_DEGREE through CORRELATION_NAME, contain the null value if the plan or package was bound using a plan table with fewer than 43 columns. Otherwise, each of them can contain null if the method it refers to does not apply.

ACCESS_DEGREE
  The number of parallel tasks or operations activated by a query. This value is determined at bind time; the actual number of parallel operations used at execution time could be different. This column contains 0 if there is a host variable.

ACCESS_PGROUP_ID
  The identifier of the parallel group for accessing the new table. A parallel group is a set of consecutive operations, executed in parallel, that have the same number of parallel tasks. This value is determined at bind time; it could change at execution time.

JOIN_DEGREE
  The number of parallel operations or tasks used in joining the composite table with the new table. This value is determined at bind time and can be 0 if there is a host variable. The actual number of parallel operations or tasks used at execution time could be different.

JOIN_PGROUP_ID
  The identifier of the parallel group for joining the composite table with the new table. This value is determined at bind time; it could change at execution time.

SORTC_PGROUP_ID
  The parallel group identifier for the parallel sort of the composite table.

SORTN_PGROUP_ID
  The parallel group identifier for the parallel sort of the new table.

PARALLELISM_MODE
  The kind of parallelism, if any, that is used at bind time:
  I Query I/O parallelism
  C Query CP parallelism
  X Sysplex query parallelism

MERGE_JOIN_COLS
  The number of columns that are joined during a merge scan join (Method=2).

CORRELATION_NAME
  The correlation name of a table or view that is specified in the statement. If there is no correlation name, then the column is null.

PAGE_RANGE
  Whether the table qualifies for page range screening, so that plans scan only the partitions that are needed. Y = Yes; blank = No.
JOIN_TYPE
  The type of join:
  F FULL OUTER JOIN
  L LEFT OUTER JOIN
  S STAR JOIN
  blank INNER JOIN or no join
  RIGHT OUTER JOIN converts to a LEFT OUTER JOIN when you use it, so that JOIN_TYPE contains L.

GROUP_MEMBER
  The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

IBM_SERVICE_DATA
  Values are for IBM use only.

WHEN_OPTIMIZE
  When the access path was determined:
  blank At bind time, using a default filter factor for any host variables, parameter markers, or special registers.
  B At bind time, using a default filter factor for any host variables, parameter markers, or special registers; however, the statement is reoptimized at run time using input variable values for input host variables, parameter markers, or special registers. The bind option REOPT(ALWAYS) or REOPT(ONCE) must be specified for reoptimization to occur.
  R At run time, using input variables for any host variables, parameter markers, or special registers. The bind option REOPT(ALWAYS) or REOPT(ONCE) must be specified for this to occur.

QBLOCK_TYPE
  For each query block, an indication of the type of SQL operation performed. For the outermost query, this column identifies the statement type. Possible values:
  SELECT SELECT
  INSERT INSERT
  UPDATE UPDATE
  DELETE DELETE
  SELUPD SELECT with FOR UPDATE OF
  DELCUR DELETE WHERE CURRENT OF CURSOR
  UPDCUR UPDATE WHERE CURRENT OF CURSOR
  CORSUB Correlated subselect or fullselect
  NCOSUB Noncorrelated subselect or fullselect
  TABLEX Table expression
  TRIGGR WHEN clause on CREATE TRIGGER
  UNION UNION
  UNIONA UNION ALL

BIND_TIME
  For static SQL statements, the time at which the plan or package for this statement or query block was bound. For cached dynamic SQL statements, the time at which the statement entered the cache. For static and cached dynamic SQL statements, this is a full-precision timestamp value. For non-cached dynamic SQL statements, this is the value contained in the TIMESTAMP column of PLAN_TABLE appended by 4 zeroes.

OPTHINT
  A string that you use to identify this row as an optimization hint for DB2. DB2 uses this row as input when choosing an access path.

HINT_USED
  If DB2 used one of your optimization hints, it puts the identifier for that hint (the value in OPTHINT) in this column.
PRIMARY_ACCESSTYPE
  Indicates whether direct row access will be attempted first:
  D DB2 will try to use direct row access. If DB2 cannot use direct row access at run time, it uses the access path described in the ACCESSTYPE column of PLAN_TABLE. See Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 946 for more information.
  blank DB2 will not try to use direct row access.

PARENT_QBLOCKNO
  A number that indicates the QBLOCKNO of the parent query block.

TABLE_TYPE
  The type of new table:
  B Buffers for an INSERT statement within a SELECT
  C Common table expression
  F Table function
  M Materialized query table
  Q Temporary intermediate result table (not materialized). For the name of a view or nested table expression, a value of Q indicates that the materialization was virtual and not actual. Materialization can be virtual when the view or nested table expression definition contains a UNION ALL that is not distributed.
  R Recursive common table expression
  T Table
  W Work file
  The value of the column is null if the query uses GROUP BY, ORDER BY, or DISTINCT, which requires an implicit sort.

TABLE_ENCODE
  The encoding scheme of the table. If the table has a single CCSID set, possible values are:
  A ASCII
  E EBCDIC
  U Unicode
  If the table contains multiple CCSID sets, the value of the column is M.

TABLE_SCCSID
  The SBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.

TABLE_MCCSID
  The mixed CCSID value of the table. If column TABLE_ENCODE is M, the value is 0. If MIXED=NO in the DSNHDECP module, the value is -2.

TABLE_DCCSID
  The DBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0. If MIXED=NO in the DSNHDECP module, the value is -2.

ROUTINE_ID
  Values are for IBM use only.

CTEREF
  If the referenced table is a common table expression, the value is the top-level query block number.

STMTTOKEN
  User-specified statement token.
For tips on maintaining a growing plan table, see Maintaining a plan table on page 943.
Original static SQL:
  DECLARE C1 CURSOR FOR
  SELECT * FROM T1
  WHERE C1 > HOST VAR.

QMF query using parameter marker:
  EXPLAIN PLAN SET QUERYNO=1 FOR
  SELECT * FROM T1
  WHERE C1 > ?

QMF query using literal:
  EXPLAIN PLAN SET QUERYNO=1 FOR
  SELECT * FROM T1
  WHERE C1 > 10
Using the literal 10 would likely produce a different filter factor, and maybe a different access path, from the original static SQL. (A filter factor is the proportion of rows that remain after a predicate has filtered out the rows that do not satisfy it. For more information about filter factors, see Predicate filter factors on page 766.) The parameter marker behaves just like a host variable, in that the predicate is assigned a default filter factor.

When to use a literal: If you know that the static plan or package was bound with REOPT(ALWAYS) and you have some idea of what is returned in the host variable, it can be more accurate to include the literal in the DB2 QMF EXPLAIN. REOPT(ALWAYS) means that DB2 will replace the value of the host variable with the true value at run time and then determine the access path. For more information about REOPT(ALWAYS), see Changing the access path at run time on page 779.

Expect these differences: Even when using parameter markers, you could see different access paths for static and dynamic queries. DB2 assumes that the value that replaces a parameter marker has the same length and precision as the column it is compared to. That assumption determines whether the predicate is stage 1 indexable or stage 2, which is always nonindexable.

If the column definition and the host variable definition are both strings, the predicate becomes stage 1 but not indexable when any of the following conditions are true:
v The column definition is CHAR or VARCHAR, and the host variable definition is GRAPHIC or VARGRAPHIC.
v The column definition is GRAPHIC or VARGRAPHIC, the host variable definition is CHAR or VARCHAR, and the length of the column definition is less than the length of the host variable definition.
v Both the column definition and the host variable definition are CHAR or VARCHAR, the length of the column definition is less than the length of the host variable definition, and the comparison operator is any operator other than =.
v Both the column definition and the host variable definition are GRAPHIC or VARGRAPHIC, the length of the column definition is less than the length of the host variable definition, and the comparison operator is any operator other than =.
The predicate becomes stage 2 when any of the following conditions are true:
v The column definition is DECIMAL(p,s), where p>15, and the host variable definition is REAL or FLOAT.
v The column definition is CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC, and the host variable definition is DATE, TIME, or TIMESTAMP.
The result of the ORDER BY clause shows whether there are:
v Multiple QBLOCKNOs within a QUERYNO
v Multiple PLANNOs within a QBLOCKNO
v Multiple MIXOPSEQs within a PLANNO
All rows with the same non-zero value for QBLOCKNO and the same value for QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the PLANNO column gives the substeps in the order they execute. For each substep, the TNAME column identifies the table accessed. Sorts can be shown as part of a table access or as a separate step.

What if QUERYNO=0? For entries that contain QUERYNO=0, use the timestamp, which is guaranteed to be unique, to distinguish individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID. The following query to a plan table returns the rows for all the explainable statements in a package in their logical order:
SELECT * FROM JOE.PLAN_TABLE
  WHERE PROGNAME = 'PACK1'
  AND COLLID = 'COLL1'
  AND VERSION = 'PROD1'
  ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
v Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 946
v Is a view or nested table expression materialized? on page 948
v Was a scan limited to certain partitions? (PAGE_RANGE=Y) on page 948
v What kind of prefetching is expected? (PREFETCH = L, S, D, or blank) on page 949
v Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or X) on page 949
v Are sorts performed? on page 949
v Is a subquery transformed into a join? on page 950
v When are aggregate functions evaluated? (COLUMN_FN_EVAL) on page 950
v How many index screening columns are used? on page 950
v Is a complex trigger WHEN clause used? (QBLOCKTYPE=TRIGGR) on page 950
As explained in this section, these questions can be answered in terms of values in columns of a plan table.
DB2 processes the query by performing the following steps:
1. DB2 retrieves all the qualifying record identifiers (RIDs) where C1=1, by using index IX1.
2. DB2 retrieves all the qualifying RIDs where C2=1, by using index IX2. The intersection of these lists is the final set of RIDs.
3. DB2 accesses the data pages that are needed to retrieve the qualified rows by using the final RID list.
The plan table for this example is shown in Table 166.
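Such a plan could result from a query of the following general form. This is an illustrative reconstruction from the steps above; IX1 is assumed to be an index on C1, and IX2 an index on C2:

SELECT * FROM T
  WHERE C1 = 1
    AND C2 = 1;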
Table 166. PLAN_TABLE output for example with intersection (AND) operator

TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  PREFETCH  MIXOPSEQ
T      M           0                      N          L         0
T      MX          1          IX1         Y                    1
T      MX          1          IX2         Y                    2
T      MI          0                      N                    3
In this case, the same index can be used more than once in a multiple index access because more than one predicate could be matching. DB2 processes the query by performing the following steps:
1. DB2 retrieves all RIDs where C1 is between 100 and 199, using index IX1.
2. DB2 retrieves all RIDs where C1 is between 500 and 599, again using IX1. The union of those lists is the final set of RIDs.
3. DB2 retrieves the qualified rows by using the final RID list.
The plan table for this example is shown in Table 167.
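The corresponding query would take roughly this shape, again reconstructed from the steps rather than quoted from an original example:

SELECT * FROM T
  WHERE C1 BETWEEN 100 AND 199
     OR C1 BETWEEN 500 AND 599;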
Table 167. PLAN_TABLE output for example with union (OR) operator

TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  PREFETCH  MIXOPSEQ
T      M           0                      N          L         0
T      MX          1          IX1         Y                    1
T      MX          1          IX1         Y                    2
T      MU          0                      N                    3
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3. Two equal predicates are on the first two columns, and a range predicate is on the third column. Although the index has four columns, only three of them can be considered matching columns.
However, even when a query qualifies for direct row access, that access path is not always chosen. If DB2 determines that another access path is better, direct row access is not chosen. Examples: In the following predicate example, ID is a ROWID column in table T1. A unique index exists on that ID column. The host variables are of the ROWID type.
WHERE ID IN (:hv_rowid1,:hv_rowid2,:hv_rowid3)
Searching for propagated rows: If rows are propagated from one table to another, do not expect to use the same row ID value from the source table to search for the same row in the target table, or vice versa. This does not work when direct row access is the access path chosen. Example: Assume that the host variable in the following statement contains a row ID from SOURCE:
SELECT * FROM TARGET WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will probably not find that row. Search on another column to retrieve the row you want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2 to not use direct row access at run time. DB2 remembers the location of the row as of the time it is accessed. However, that row can change locations (such as after a REORG) between the first and second time it is accessed, which means that DB2 cannot use direct row access to find the row on the second access attempt. Instead of using direct row access, DB2 uses the access path that is shown in the ACCESSTYPE column of PLAN_TABLE. If the predicate you are using to do direct row access is not indexable and if DB2 is unable to use direct row access, then DB2 uses a table space scan to find the row. This can have a profound impact on the performance of applications that rely on direct row access. Write your applications to handle the possibility that direct row access might not be used. Some options are to:
v Ensure that your application does not try to remember ROWID columns across reorganizations of the table space. When your application commits, it releases its claim on the table space; it is possible that a REORG can run and move the row, which disables direct row access. Plan your commit processing accordingly; use the returned row ID value before committing, or re-select the row ID value after a commit is issued. If you are storing ROWID columns from another table, update those values after the table with the ROWID column is reorganized.
v Create an index on the ROWID column, so that DB2 can use the index if direct row access is disabled.
v Supplement the ROWID column predicate with another predicate that enables DB2 to use an existing index on the table. For example, after reading a row, an application might perform the following update:
EXEC SQL UPDATE EMP
  SET SALARY = :hv_salary + 1200
  WHERE EMP_ROWID = :hv_emp_rowid
    AND EMPNO = :hv_empno;
If an index exists on EMPNO, DB2 can use index access if direct row access fails. The additional predicate ensures that DB2 does not revert to a table space scan.
Assume that table T has a partitioned index on column C1 and that the values of C1 between 2002 and 3280 all appear in partitions 3 and 4, while the values between 6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index on column C2. DB2 could choose any of these access methods:
v A matching index scan on column C1. The scan reads index values and data only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
v A matching index scan on column C2. (DB2 might choose that if few rows have C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2 and corresponding data pages from partitions 3, 4, 8, and 9. (PAGE_RANGE=Y)
v A table space scan on T. DB2 avoids reading data pages from any partitions except 3, 4, 8, and 9. (PAGE_RANGE=Y)
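A query consistent with this scenario might look like the following sketch (the predicate values are assumed from the text above, not quoted from the manual):

SELECT * FROM T
  WHERE (C1 BETWEEN 2002 AND 3280
      OR C1 BETWEEN 6000 AND 8000)
    AND C2 = 6;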
Non-null values in columns ACCESS_DEGREE and JOIN_DEGREE indicate to what degree DB2 plans to use parallel operations. At execution time, however, DB2 might not actually use parallelism, or it might use fewer operations in parallel than were originally planned. For a more complete description, see Chapter 35, Parallel operations and query performance, on page 991. For more information about Sysplex query parallelism, see Chapter 6 of DB2 Data Sharing: Planning and Administration.
SORTC_UNIQ and SORTC_ORDERBY: SORTC_UNIQ indicates a sort to remove duplicates, as might be needed by a SELECT statement with DISTINCT or UNION. SORTC_ORDERBY usually indicates a sort for an ORDER BY clause. But SORTC_UNIQ and SORTC_ORDERBY also indicate when the results of a noncorrelated subquery are sorted, both to remove duplicates and to order the results. One sort does both the removal and the ordering.
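For example, a noncorrelated subquery like the following sketch (table and column names assumed) can cause DB2 to sort the subquery result once, removing duplicates and ordering the values at the same time:

SELECT * FROM T1
  WHERE C1 IN (SELECT C1 FROM T2);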
Generally, values of R and S are considered better for performance than a blank. Use variance and standard deviation with care: The VARIANCE and STDDEV functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This causes other functions in the same query block to be evaluated late as well. For example, in the following query, the SUM function is evaluated later than it would be if the VARIANCE function were not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
Complex trigger WHEN clauses are clauses that involve other base tables and transition tables. The QBLOCK_TYPE column of the top-level query block shows TRIGGR to indicate a complex trigger WHEN clause. Example: Consider the following trigger:
CREATE TRIGGER REORDER
  AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
  REFERENCING NEW TABLE AS NT OLD AS O
  FOR EACH STATEMENT MODE DB2SQL
  WHEN (O.ON_HAND < (SELECT MAX(ON_HAND) FROM NT))
  BEGIN ATOMIC
    INSERT INTO ORDER_LOG VALUES (O.PARTNO, O.ON_HAND);
  END
Table 168 shows the corresponding plan table for the WHEN clause.
Table 168. Plan table for the WHEN clause

QBLOCKNO  PLANNO  TNAME  ACCESSTYPE  QBLOCK_TYPE  PARENT_QBLOCKNO
1         1                          TRIGGR       0
2         1       NT     R           NCOSUB       1
In this case, every row in T must be examined to determine whether the value of C1 matches the given value.
An ascending index on C1 or an index on (C1,C2,C3) could eliminate a sort. For more information about OPTIMIZE FOR n ROWS, see Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS on page 795.

Backward index scan: In some cases, DB2 can use a backward index scan on a descending index to avoid a sort on ascending data. Similarly, an ascending index can be used to avoid a sort on descending data. For DB2 to use a backward index scan, the following conditions must be true:
v The index includes the columns in the ORDER BY clause, in the same order that they appear in the ORDER BY clause.
v Each column in the index must have the opposite sequence (ASC or DESC) from the corresponding column in the ORDER BY clause.
Example: Suppose that an index exists on the ACCT_STAT table. The index is defined by the following columns: ACCT_NUM, STATUS_DATE, STATUS_TIME. All of the columns in the index are in ascending order. Now, consider the following SELECT statements:
SELECT STATUS_DATE, STATUS
  FROM ACCT_STAT
  WHERE ACCT_NUM = :HV
  ORDER BY STATUS_DATE DESC, STATUS_TIME DESC;

SELECT STATUS_DATE, STATUS
  FROM ACCT_STAT
  WHERE ACCT_NUM = :HV
  ORDER BY STATUS_DATE ASC, STATUS_TIME ASC;
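For reference, the index that the example describes might be created with DDL like the following sketch (the index name is assumed):

CREATE INDEX ACCT_IX1
  ON ACCT_STAT
  (ACCT_NUM ASC, STATUS_DATE ASC, STATUS_TIME ASC);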
By using a backward index scan, DB2 can use the same index for both statements. Not all sorts are inefficient. For example, if the index that provides ordering is not an efficient one and many rows qualify, it is possible that using another access path to retrieve and then sort the data is more efficient than using the inefficient ordering index. Indexes that are created to avoid sorts can sometimes be nonselective. If these indexes require data access and the cluster ratio is poor, these indexes are unlikely to be chosen. Accessing many rows by using a poorly clustered index is often less efficient than accessing rows by using a table space scan and sort. Both the table space scan and the sort benefit from sequential access.
Costs of indexes
Before you begin creating indexes, carefully consider their costs:
v Indexes require storage space. Padded indexes require more space than nonpadded indexes for long index keys. For short index keys, nonpadded indexes can take more space.
v Each index requires an index space and a data set, or as many data sets as the number of data partitions if the index is partitioned, and operating system restrictions exist on the number of open data sets.
v Indexes must be changed to reflect every insert or delete operation on the base table. If an update operation updates a column that is in the index, then the index must also be changed. The time required by these operations increases accordingly.
v Indexes can be built automatically when data is loaded, but this takes time. They must be recovered or rebuilt if the underlying table space is recovered, which might also be time-consuming.
Recommendation: In reviewing the access paths described in Index access paths, consider indexes as part of your database design. See Part 2, Designing a database: advanced topics, on page 23 for details about database design in general. For a query with a performance problem, ask yourself:
v Would adding a column to an index allow the query to use index-only access (see the sketch that follows this list)?
v Do you need a new index?
v Is your choice of clustering index correct?
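For the first question, the following sketch shows how adding a column to an index key can enable index-only access; the table, index, and column names are assumed for illustration:

-- With an index on C1 only, DB2 must read data pages to return C2.
-- Extending the key to (C1, C2) lets DB2 answer the query from the
-- index alone, which EXPLAIN reports as INDEXONLY = Y.
CREATE INDEX TIX2 ON T (C1, C2);

SELECT C2 FROM T WHERE C1 = 5;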
v At most one IN-list predicate can be a matching predicate on an index.
v For MX accesses and index access with list prefetch, IN-list predicates cannot be used as matching predicates.
v Join predicates cannot qualify as matching predicates when doing a merge join (METHOD=2). For example, T1.C1=T2.C1 cannot be a matching predicate when doing a merge join, although any local predicates, such as C1=5, can be used. Join predicates can be used as matching predicates on the inner table of a nested loop join or hybrid join.

Matching index scan example: Assume that there is an index on T(C1,C2,C3,C4):
SELECT * FROM T WHERE C1=1 AND C2>1 AND C3=1;
Two matching columns occur in this example. The first one comes from the predicate C1=1, and the second one comes from C2>1. The range predicate on C2 prevents C3 from becoming a matching column.
Index screening
In index screening, predicates are specified on index key columns but are not part of the matching columns. Those predicates improve the index access by reducing the number of rows that qualify while searching the index. For example, with an index on T(C1,C2,C3,C4) in the following SQL statement, C3>0 and C4=2 are index screening predicates.
SELECT * FROM T WHERE C1 = 1 AND C3 > 0 AND C4 = 2 AND C5 = 8;
The predicates can be applied on the index, but they are not matching predicates. C5=8 is not an index screening predicate, and it must be evaluated when data is retrieved. The value of MATCHCOLS in the plan table is 1. EXPLAIN does not directly tell you when an index is screened; however, if MATCHCOLS is less than the number of index key columns, index screening is possible.
You can regard the IN-list index scan as a series of matching index scans with the values in the IN predicate being used for each matching index scan. The following example has an index on (C1,C2,C3,C4) and might use an IN-list index scan:
SELECT * FROM T WHERE C1=1 AND C2 IN (1,2,3) AND C3>0 AND C4<100;
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
Parallelism is supported for queries that involve IN-list index access. These queries used to run sequentially in previous releases of DB2, although parallelism could have been used when the IN-list access was for the inner table of a parallel group. Now, in environments in which parallelism is enabled, you can see a reduction in elapsed time for queries that involve IN-list index access for the outer table of a parallel group.
For this query:
v EMP is a table with columns EMPNO, EMPNAME, DEPT, JOB, AGE, and SAL.
v EMPX1 is an index on EMP with key column AGE.
v EMPX2 is an index on EMP with key column JOB.
The plan table contains a sequence of rows describing the access. For this query, ACCESSTYPE uses the following values:
M   Start of multiple index access processing
MX  Indexes are to be scanned for later union or intersection
MI  An intersection (AND) is performed
MU  A union (OR) is performed
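A query consistent with the predicates named in the steps that follow would look like this reconstruction (the exact statement is assumed, not quoted from the manual):

SELECT * FROM EMP
  WHERE (AGE = 40 AND JOB = 'MANAGER')
     OR AGE = 34;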
The following steps relate to the previous query and the values shown for the plan table in Table 170 on page 957:
1. Index EMPX1, with matching predicate AGE = 40, provides a set of candidates for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX2, with matching predicate JOB = MANAGER, also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 2.
3. The first intersection (AND) is done; the value of MIXOPSEQ is 3. This MI step consumes the two previous candidate lists (produced by MIXOPSEQs 1 and 2) by intersecting them to form an intermediate candidate list, IR1, which is not shown in PLAN_TABLE.
4. Index EMPX1, with matching predicate AGE = 34, also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 4.
5. The last step, where the value of MIXOPSEQ is 5, is a union (OR) of the two remaining candidate lists, which are IR1 and the candidate list produced by MIXOPSEQ 4. This final union gives the result for the query.
Table 170. Plan table output for a query that uses multiple indexes. Depending on the filter factors of the predicates, the access steps can appear in a different order.

PLANNO  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  PREFETCH  MIXOPSEQ
1       EMP    M           0                      L         0
1       EMP    MX          1          EMPX1                 1
1       EMP    MX          1          EMPX2                 2
1       EMP    MI          0                                3
1       EMP    MX          1          EMPX1                 4
1       EMP    MU          0                                5
The multiple index steps are arranged in an order that uses RID pool storage most efficiently and for the least amount of time.
Index1 is a fully matching equal unique index. However, Index2 is also an equal unique index even though it is not fully matching. Index2 is the better choice because, in addition to being equal and unique, it also provides index-only access.
[Figure 108: Two-step join diagram. TJ (composite) is joined with TK (new); the result becomes the composite that, by way of a work file, is joined with TL (new).]
DB2 performs the following steps to complete the join operation:
1. Accesses the first table (METHOD=0), named TJ (TNAME), which becomes the composite table in step 2.
2. Joins the new table TK to TJ, forming a new composite table.
3. Sorts the new table TL (SORTN_JOIN=Y) and the composite table (SORTC_JOIN=Y), and then joins the two sorted tables.
4. Sorts the final composite table (TNAME is blank) into the desired order (SORTC_ORDERBY=Y).
Figure 108 illustrates these steps.
Table 171 and Table 172 show a subset of columns in a plan table for this join operation.
Table 171. Subset of columns for a two-step join operation

METHOD  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  TSLOCKMODE
0       TJ     I           1          TJX1        N          IS
1       TK     I           1          TKX1        N          IS
2       TL     I           0          TLX1        Y          S
3                          0                      N
Table 172. Subset of columns for a two-step join operation

SORTN_  SORTN_  SORTN_   SORTN_   SORTC_  SORTC_  SORTC_   SORTC_
UNIQ    JOIN    ORDERBY  GROUPBY  UNIQ    JOIN    ORDERBY  GROUPBY
N       N       N        N        N       N       N        N
N       N       N        N        N       N       N        N
N       Y       N        N        N       Y       N        N
N       N       N        N        N       N       Y        N
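A statement of the general shape that could produce a plan like the one in Table 171 and Table 172 is sketched below; the column names and predicates are assumed for illustration, not taken from the manual:

SELECT * FROM TJ, TK, TL
  WHERE TJ.C1 = 1
    AND TJ.C2 = TK.C2
    AND TK.C3 = TL.C3
  ORDER BY TJ.C4;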
Definitions: A join operation typically matches a row of one table with a row of another on the basis of a join condition. For example, the condition might specify that the value in column A of one table equals the value of column X in the other table (WHERE T1.A = T2.X).
Two kinds of joins differ in what they do with rows in one table that do not match on the join condition with any row in the other table:
v An inner join discards rows of either table that do not match any row of the other table.
v An outer join keeps unmatched rows of one or the other table, or of both. A row in the composite table that results from an unmatched row is filled out with null values. As Table 173 shows, outer joins are distinguished by which unmatched rows they keep.
Table 173. Join types and kept unmatched rows

Outer join type   Included unmatched rows
Left outer join   The composite (outer) table
Right outer join  The new (inner) table
Full outer join   Both tables
Example: Suppose that you issue the following statement to explain an outer join:
EXPLAIN PLAN SET QUERYNO = 10 FOR
  SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
         PRODUCT, PART, UNITS
    FROM PROJECTS
    LEFT JOIN
      (SELECT PART, COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
              PRODUCTS.PRODUCT
         FROM PARTS
         FULL OUTER JOIN PRODUCTS
           ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
      ON PROJECTS.PROD# = PRODNUM
Table 174 shows a subset of the plan table for the outer join.
Table 174. Plan table output for an example with outer joins

QUERYNO  QBLOCKNO  PLANNO  TNAME     JOIN_TYPE
10       1         1       PROJECTS
10       1         2       TEMP      L
10       2         1       PRODUCTS
10       2         2       PARTS     F
Column JOIN_TYPE identifies the type of outer join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN
v Blank for INNER JOIN or no join
At execution, DB2 converts every right outer join to a left outer join; thus JOIN_TYPE never identifies a right outer join specifically.
Materialization with outer join: Sometimes DB2 has to materialize a result table when an outer join is used in conjunction with other joins, views, or nested table expressions. You can tell when this happens by looking at the TABLE_TYPE and TNAME columns of the plan table. When materialization occurs, TABLE_TYPE contains a W, and TNAME shows the name of the materialized table as DSNWFQB(xx), where xx is the number of the query block (QBLOCKNO) that produced the work file.
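To see these columns for the outer-join example, you could query the plan table as in this sketch (the creator JOE follows the earlier examples in this chapter):

SELECT QBLOCKNO, PLANNO, TNAME, JOIN_TYPE, TABLE_TYPE
  FROM JOE.PLAN_TABLE
  WHERE QUERYNO = 10
  ORDER BY QBLOCKNO, PLANNO;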
SELECT A, B, X, Y
  FROM (SELECT A, B FROM OUTERT WHERE A=10)
  LEFT JOIN INNERT ON B=X;

[Figure 109: Left outer join using a nested loop join. For each qualifying row of the outer table (columns A and B), DB2 finds all matching rows in the inner table (columns X and Y) by a table space or index scan. The nested loop join produces the composite result, preserving the values of the outer table and supplying nulls where no inner row matches.]
Method of joining
DB2 scans the composite (outer) table. For each row in that table that qualifies (by satisfying the predicates on that table), DB2 searches for matching rows of the new
(inner) table. It concatenates any rows that it finds with the current row of the composite table. If no rows match the current row, then:
v For an inner join, DB2 discards the current row.
v For an outer join, DB2 concatenates a row of null values.
Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an explanation of those types of predicate, see Stage 1 and stage 2 predicates on page 757.) DB2 can scan either table using any of the available access methods, including table space scan.
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer table once, and scans the inner table as many times as the number of qualifying rows in the outer table. Therefore, the nested loop join is usually the most efficient join method when the values of the join column passed to the inner table are in sequence and the index on the join column of the inner table is clustered, or the number of rows retrieved in the inner table through the index is small.
Example: Cartesian join with small tables first: A Cartesian join is a form of nested loop join in which there are no join predicates between the two tables. DB2 usually avoids a Cartesian join, but sometimes it is the most efficient method, as in the following example. The query uses three tables: T1 has 2 rows, T2 has 3 rows, and T3 has 10 million rows.
SELECT * FROM T1, T2, T3 WHERE T1.C1 = T3.C1 AND T2.C2 = T3.C2 AND T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join predicate between T1 and T2. Assume that 5 million rows of T3 have the value C3=5. Processing time is large if T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5 million rows.
However, if all rows from T1 and T2 are joined, without a join predicate, the 5 million rows are accessed only six times, once for each row in the Cartesian join of T1 and T2. It is difficult to say which access path is the most efficient. DB2 evaluates the different options and could decide to access the tables in the sequence T1, T2, T3.
Sorting the composite table: Your plan table could show a nested loop join that includes a sort on the composite table. DB2 might sort the composite table (the outer table in Figure 109) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same sequence.
v The join column of the composite table has no index.
v The index is poorly clustered.
Nested loop join with a sorted composite table has the following performance advantages:
v It uses sequential detection efficiently to prefetch data pages of the new table, reducing the number of synchronous I/O operations and the elapsed time.
v It avoids repetitive full probes of the inner table index by using index look-aside.
Method of joining
Figure 110 illustrates a merge scan join.
SELECT A, B, X, Y
  FROM OUTER, INNER
  WHERE A=10 AND B=X;

[Figure 110: Merge scan join. DB2 condenses and sorts the outer table (columns A and B), or accesses it through an index on column B, condenses and sorts the inner table (columns X and Y), and merges matching rows into the composite result.]
DB2 scans both tables in the order of the join columns. If no efficient indexes on the join columns provide the order, DB2 might sort the outer table, the inner table, or both. The inner table is put into a work file; the outer table is put into a work file only if it must be sorted. When a row of the outer table matches a row of the inner table, DB2 returns the combined rows.
DB2 then reads another row of the inner table that might match the same row of the outer table and continues reading rows of the inner table as long as there is a match. When there is no longer a match, DB2 reads another row of the outer table.
v If that row has the same value in the join column, DB2 reads again the matching group of records from the inner table. Thus, a group of duplicate records in the inner table is scanned as many times as there are matching records in the outer table.
v If the outer row has a new value in the join column, DB2 searches ahead in the inner table. It can find any of the following rows:
  - Unmatched rows in the inner table, with lower values in the join column.
  - A new matching inner row. DB2 then starts the process again.
  - An inner row with a higher value of the join column. Now the row of the outer table is unmatched. DB2 searches ahead in the outer table, and can find any of the following rows:
    -- Unmatched rows in the outer table.
    -- A new matching outer row. DB2 then starts the process again.
    -- An outer row with a higher value of the join column. Now the row of the inner table is unmatched, and DB2 resumes searching the inner table.
If DB2 finds an unmatched row:
v For an inner join, DB2 discards the row.
v For a left outer join, DB2 discards the row if it comes from the inner table and keeps it if it comes from the outer table.
v For a full outer join, DB2 keeps the row.
When DB2 keeps an unmatched row from a table, it concatenates a set of null values as if a matching row had come from the other table. A merge scan join must be used for a full outer join.
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the two tables and reads every row at the time of the join. Inner and left outer joins use only stage 1 predicates in the ON clause to match the tables. If your tables match on more than one column, it is generally more efficient to put all the predicates for the matches in the ON clause rather than to leave some of them in the WHERE clause. For an inner join, DB2 can derive extra predicates for the inner table at bind time and apply them to the sorted outer table to be used at run time. The predicates can reduce the size of the work file that is needed for the inner table. If DB2 has used an efficient index on the join columns to retrieve the rows of the inner table, those rows are already in sequence. DB2 puts the data directly into the work file without sorting the inner table, which reduces the elapsed time.
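For example, if two tables match on columns C1 and C2, the guideline above suggests placing both match predicates in the ON clause; a sketch with assumed names:

SELECT T1.C3, T2.C4
  FROM T1 INNER JOIN T2
    ON T1.C1 = T2.C1
   AND T1.C2 = T2.C2
  WHERE T1.C5 = 5;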
A merge scan join is often used when:
v The qualifying rows of the inner and outer tables are large, and the join predicate does not provide much filtering; that is, in a many-to-many join.
v The tables are large and have no indexes with matching columns.
v Few columns are selected on inner tables. This is the case when a DB2 sort is used. The fewer the columns to be sorted, the more efficient the sort is.
[Figure 111: Hybrid join using list prefetch. The outer table (columns A and B) is joined with RIDs obtained from the index on the inner table (X=B); the RID list is sorted, and list prefetch (PREFETCH=L) retrieves the inner-table rows to build the composite table.]
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The steps are shown in Figure 111 on page 965. In that example, both the outer table (OUTER) and the inner table (INNER) have indexes on the join columns. DB2 performs the following steps:
1. Scans the outer table (OUTER).
2. Joins the outer table with RIDs from the index on the inner table. The result is the phase 1 intermediate table. The index of the inner table is scanned for every row of the outer table.
3. Sorts the data in the outer table and the RIDs, creating a sorted RID list and the phase 2 intermediate table. The sort is indicated by a value of Y in column SORTN_JOIN of the plan table. If the index on the inner table is a well-clustered index, DB2 can skip this sort; the value in SORTN_JOIN is then N.
4. Retrieves the data from the inner table, using list prefetch (PREFETCH=L).
5. Concatenates the data from the inner table and the phase 2 intermediate table to create the final composite table.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if there are indexes on the join predicate with low cluster ratios. It also processes duplicates more efficiently because the inner table is scanned only once for each set of duplicate values in the join column of the outer table. If the index on the inner table is highly clustered, there is no need to sort the intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in memory rather than in a work file.
Figure 112. Star schema with a fact table and dimension tables
Unlike the steps in the other join methods (nested loop join, merge scan join, and hybrid join) in which only two tables are joined in each step, a step in the star join method can involve three or more tables. Dimension tables are joined to the fact table via a multi-column index that is defined on the fact table. Therefore, having a well-defined, multi-column index on the fact table is critical for efficient star join processing.
The time table has an ID for each day, as well as the month, quarter, and year. The product table has an ID for each product item and its class and its inventory. The geographic location table has an ID for each city and its country. In this scenario, the sales table contains three columns with IDs from the dimension tables for time, product, and location, instead of three columns for time, three columns for products, and two columns for location. Thus, the size of the fact table is greatly reduced. In addition, if you needed to change an item, you would do it once in a dimension table instead of several times for each instance of the item in the fact table. You can create even more complex star schemas by normalizing a dimension table into several tables. The normalized dimension table is called a snowflake. Only one of the tables in the snowflake joins directly with the fact table.
Star join is enabled if the cardinality of the fact table is at least n times the cardinality of the largest dimension that is a base table joined to the fact table, where 2 <= n <= 32768.
You can set the subsystem parameter STARJOIN by using the STAR JOIN QUERIES field on the DSNTIP8 installation panel.
v The number of tables in the star schema query block, including the fact table, dimension tables, and snowflake tables, meets the requirements that are specified by the value of subsystem parameter SJTABLES. The value of SJTABLES is considered only if the subsystem parameter STARJOIN qualifies the query for star join. The values of SJTABLES are:
  1, 2, or 3       Star join is always considered.
  4 to 255         Star join is considered if the query block has at least the specified number of tables. If star join is enabled, 10 is the default value.
  256 and greater  Star join is never considered.
Star join, which can reduce bind time significantly, does not provide optimal performance in all cases. Performance of star join depends on a number of factors, such as the available indexes on the fact table, the cluster ratio of the indexes, and the selectivity of rows through local and join predicates. Follow these general guidelines for setting the value of SJTABLES:
v If you have queries that reference fewer than 10 tables in a star schema database and you want to make the star join method applicable to all qualified queries, set the value of SJTABLES to the minimum number of tables used in queries that you want to be considered for star join.
  Example: Suppose that you query a star schema database that has one fact table and three dimension tables. You should set SJTABLES to 4.
v If you want to use star join for relatively large queries that reference a star schema database but are not necessarily suitable for star join, use the default. The star join method is considered for all qualified queries that have 10 or more tables.
v If you have queries that reference a star schema database but, in general, do not want to use star join, consider setting SJTABLES to a higher number, such as 15, if you want to drastically cut the bind time for large queries and avoid a potential bind time SQL return code -101 for large qualified queries.
For recommendations on indexes for star schemas, see Creating indexes for efficient star-join processing on page 801.
Examples: query with three dimension tables: Suppose that you have a store in San Jose and want information about sales of audio equipment from that store in 2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and year
v A dimension table for geographic LOCATION (L) with columns for an ID, city, region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item, class, and inventory
You could write the following query to join the tables:
SELECT *
  FROM SALES S, TIME T, PRODUCT P, LOCATION L
  WHERE S.TIME = T.ID
    AND S.PRODUCT = P.ID
    AND S.LOCATION = L.ID
    AND T.YEAR = 2000
    AND P.CLASS = 'AUDIO'
    AND L.LOCATION = 'SAN JOSE';
All snowflakes are processed before the central part of the star join, as individual query blocks, and are materialized into work files. There is a work file for each snowflake. The EXPLAIN output identifies these work files by naming them DSN_DIM_TBLX(nn), where nn indicates the corresponding QBLOCKNO for the snowflake.
This next example shows the plan for a star join that contains two snowflakes. Suppose that two new tables MANUFACTURER (M) and COUNTRY (C) are added to the tables in the previous example to break dimension tables PRODUCT (P) and LOCATION (L) into snowflakes:
v The PRODUCT table has a new column MID that represents the manufacturer.
v Table MANUFACTURER (M) has columns for MID and name to contain manufacturer information.
v The LOCATION table has a new column CID that represents the country.
v Table COUNTRY (C) has columns for CID and name to contain country information.
You could write the following query to join all the tables:
SELECT *
  FROM SALES S, TIME T, PRODUCT P, MANUFACTURER M,
       LOCATION L, COUNTRY C
  WHERE S.TIME = T.ID
    AND S.PRODUCT = P.ID
    AND P.MID = M.MID
    AND S.LOCATION = L.ID
    AND L.CID = C.CID
    AND T.YEAR = 2000
    AND M.NAME = 'some_company';
The joined table pairs (PRODUCT, MANUFACTURER) and (LOCATION, COUNTRY) are snowflakes. The EXPLAIN output of this query looks like Table 183 on page 971.
Table 183. Plan table output for a star join example with snowflakes

QUERYNO  QBLOCKNO  METHOD  TNAME             JOIN_TYPE  SORTN_JOIN  ACCESSTYPE
1        1         0       TIME              S          Y           R
1        1         1       DSN_DIM_TBLX(02)  S          Y           R
1        1         1       SALES             S                      I
1        1         1       DSN_DIM_TBLX(03)             Y           T
1        2         0       PRODUCT                                  R
1        2         1       MANUFACTURER                             I
1        3         0       LOCATION                                 R
1        3         4       COUNTRY                                  I
Note: This query consists of three query blocks:
v QBLOCKNO=1: The main star join block
v QBLOCKNO=2: A snowflake (PRODUCT, MANUFACTURER) that is materialized into work file DSN_DIM_TBLX(02)
v QBLOCKNO=3: A snowflake (LOCATION, COUNTRY) that is materialized into work file DSN_DIM_TBLX(03)
The joins in the snowflakes are processed first, and each snowflake is materialized into a work file. Therefore, when the main star join block (QBLOCKNO=1) is processed, it contains four tables: SALES (the fact table), TIME (a base dimension table), and the two snowflake work files. In this example, in the main star join block, the star join method is used for the first three tables (as indicated by S in the JOIN_TYPE column of the plan table), and the remaining work file is joined by a nested loop join with sparse index access on the work file (as indicated by T in the ACCESSTYPE column for DSN_DIM_TBLX(03)).
3. Estimate the number of work-file rows, the maximum length of the key, and the total of the maximum length of the relevant columns. Multiply these three values together to find the size of the data caching space for the work file, or the value of C.
4. Multiply (A) * (B) * (C) to determine the size of the pool in MB.
The default virtual memory pool size is 20 MB. To set the pool size, use the SJMXPOOL parameter on the DSNTIP8 installation panel.
Example: The following example shows how to determine the size of the virtual memory pool. Suppose that you issue the following star join query, where SALES is the fact table:
SELECT C.COUNTRY, P.PRDNAME, SUM(F.SPRICE)
  FROM SALES F, TIME T, PROD P, LOC L, SCOUN C
  WHERE F.TID = T.TID
    AND F.PID = P.PID
    AND F.LID = L.LID
    AND L.CID = C.CID
    AND P.PCODE IN (4, 7, 21, 22, 53)
  GROUP BY C.COUNTRY, P.PRDNAME;
For this query, two work files can be cached in memory. These work files, PROD and DSN_DIM_TBLX(02), are indicated by ACCESSTYPE=T. To determine the size of the dedicated virtual memory pool, perform the following steps:
1. Determine the value of A. Estimate the number of star join queries that run concurrently. In this example, based on the type of operation, up to 12 star join queries are expected to run concurrently. Therefore, A = 12.
2. Determine the value of B. Estimate the average number of work files that a star join query uses. In this example, the star join query uses two work files, PROD and DSN_DIM_TBLX(02). Therefore, B = 2.
3. Determine the value of C. Estimate the number of work-file rows, the maximum length of the key, and the total of the maximum length of the relevant columns. Multiply these three values together to find the size of the data caching space for the work file, or the value of C. Both PROD and DSN_DIM_TBLX(02) are used to determine the value of C.
Recommendation: Average the values for a representative sample of work files, and round the value up to determine an estimate for a value of C.
v The number of work-file rows depends on the number of rows that match the predicate. For PROD, 87 rows are stored in the work file because 87 rows match the IN-list predicate. No selective predicate is used for DSN_DIM_TBLX(02), so the entire result of the join is stored in the work file. The work file for DSN_DIM_TBLX(02) holds 2800 rows.
v The maximum length of the key depends on the data type definition of the table's key column. For PID, the key column for PROD, the maximum length is 4. DSN_DIM_TBLX(02) is a work file that results from the join of LOC and SCOUN. The key column that is used in the join is LID from the LOC table. The maximum length of LID is 4.
v The maximum data length depends on the maximum length of the key column and the maximum length of the column that is selected as part of the star join. Add to the maximum data length 1 byte for nullable columns, 2 bytes for varying-length columns, and 3 bytes for nullable varying-length columns.
  For the PROD work file, the maximum data length is the maximum length of PID, which is 4, plus the maximum length of PRDNAME, which is 24. Therefore, the maximum data length for the PROD work file is 28. For the DSN_DIM_TBLX(02) work file, the maximum data length is the maximum length of LID, which is 4, plus the maximum length of COUNTRY, which is 36. Therefore, the maximum data length for the DSN_DIM_TBLX(02) work file is 40.
  For PROD, C = (87) * (4 + 28) = 2784 bytes. For DSN_DIM_TBLX(02), C = (2800) * (4 + 40) = 123200 bytes. The average of these two estimated values for C is approximately 62 KB. Because the number of rows in each work file can vary depending on the selection criteria in the predicate, the value of C should be rounded up to the nearest multiple of 100 KB. Therefore, C = 100 KB.
4. Multiply (A) * (B) * (C) to determine the size of the pool in MB. The size of the pool is determined by multiplying (12) * (2) * (100 KB) = 2.4 MB.
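The final multiplication is simple arithmetic; as a check, the following hypothetical query against SYSIBM.SYSDUMMY1 reproduces the calculation (12 concurrent queries * 2 work files * 100 KB, expressed in MB):

SELECT (12 * 2 * 100) / 1000.0 AS POOL_SIZE_MB
  FROM SYSIBM.SYSDUMMY1;

The result is 2.4, matching the 2.4 MB estimate above.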
Table 185 shows the number of pages read by prefetch for each asynchronous I/O.
Table 185. The number of pages read by prefetch, by buffer pool size

Buffer pool size  Number of buffers  Pages read by prefetch (for each asynchronous I/O)
4 KB              <=223 buffers      8 pages
                  224-999 buffers    16 pages
                  1000+ buffers      32 pages
8 KB              <=112 buffers      4 pages
                  113-499 buffers    8 pages
                  500+ buffers       16 pages
16 KB             <=56 buffers       2 pages
                  57-249 buffers     4 pages
                  250+ buffers       8 pages
32 KB             <=16 buffers       0 pages (prefetch disabled)
                  17-99 buffers      2 pages
                  100+ buffers       4 pages
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice as much.

When sequential prefetch is used: Sequential prefetch is generally used for a table space scan. For an index scan that accesses eight or more consecutive data pages, DB2 requests sequential prefetch at bind time. The index must have a cluster ratio of 80% or higher. Both data pages and index pages are prefetched.
1. RID retrieval: A list of RIDs for needed data pages is found by matching index scans of one or more indexes.
2. RID sort: The list of RIDs is sorted in ascending order by page number.
3. Data retrieval: The needed data pages are prefetched in order using the sorted RID list.
List prefetch does not preserve the data ordering given by the index. Because the RIDs are sorted in page number order before the data is accessed, the data is not retrieved in order by any column. If the data must be ordered for an ORDER BY clause or any other reason, it requires an additional sort. In a hybrid join, if the index is highly clustered, the page numbers might not be sorted before the data is accessed. List prefetch can be used with most matching predicates for an index scan. IN-list predicates are the exception; they cannot be the matching predicates when list prefetch is used.
For initial data access sequential, prefetch is requested starting at page A for P pages (RUN1 and RUN2). The prefetch quantity is always P pages. For subsequent page requests where the page is (1) page sequential and (2) data access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, no prefetch is triggered because it was already triggered when data access sequential was first declared.
v If the desired page is in RUN2, prefetch for RUN3 is triggered; RUN2 becomes RUN1, RUN3 becomes RUN2, and the new RUN3 becomes the page range starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in effect and, thereafter, a new pattern develops that is sequential, then initial data access sequential is declared again and handled accordingly. Because, at bind time, the number of pages to be accessed can only be estimated, sequential detection acts as a safety net and is employed when the data is being accessed sequentially. In extreme situations, when certain buffer pool thresholds are reached, sequential prefetch can be disabled. See Buffer pool thresholds on page 673 for a description of these thresholds.
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can be either sorts of the composite table or the new table. If a single row of PLAN_TABLE has a Y in more than one of the sort composite columns, then one sort accomplishes two things. (DB2 will not perform two sorts when two Ys are in the same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are Y in one row of PLAN_TABLE, then a single sort puts the rows in order and removes any duplicate rows as well. The only reason DB2 sorts the new table is for join processing, which is indicated by SORTN_JOIN.
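To check which sorts DB2 plans for a statement, you can select the sort columns directly; a sketch (the creator JOE and the query number are assumed):

SELECT QBLOCKNO, PLANNO, METHOD,
       SORTC_UNIQ, SORTC_ORDERBY, SORTC_GROUPBY, SORTN_JOIN
  FROM JOE.PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY QBLOCKNO, PLANNO;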
The performance of the sort by the GROUP BY clause is improved when the query accesses a single table and when the GROUP BY column has no index.
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This sort is very fast and is done totally in memory. A RID sort is usually not indicated in the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch is used. The only exception to this rule is when a hybrid join is performed and a single, highly clustered index is used on the inner table. In this case SORTN_JOIN is N, indicating that the RID list for the inner table was not sorted.
With parallelism:
v At OPEN CURSOR, parallelism is asynchronously started, regardless of whether a sort is required. Control returns to the application immediately after the parallelism work is started.
v If there is a RID sort but no data sort, then parallelism is not started until the first fetch. This works the same way as with no parallelism.
Merge
The merge process is more efficient than materialization, as described in Performance of merge versus materialization on page 985. In the merge process, the statement that references the view or table expression is combined with the fullselect that defined the view or table expression. This combination creates a logically equivalent statement. This equivalent statement is executed against the database. Example: Consider the following statements, one of which defines a view, the other of which references the view:
View-defining statement:

CREATE VIEW VIEW1 (VC1,VC21,VC32) AS
  SELECT C1,C2,C3 FROM T1
  WHERE C1 > C3;

View-referencing statement:

SELECT VC1,VC21 FROM VIEW1
  WHERE VC1 IN ('A','B','C');
The fullselect of the view-defining statement can be merged with the view-referencing statement to yield the following logically equivalent statement:
Merged statement:

SELECT C1,C2 FROM T1
  WHERE C1 > C3 AND C1 IN ('A','B','C');
Example: The following statements show another example of when a view and table expression can be merged:
SELECT * FROM V1 X LEFT JOIN (SELECT * FROM T2) Y ON X.C1=Y.C1
                   LEFT JOIN T3 Z ON X.C1=Z.C1;

Merged statement:

SELECT * FROM V1 X LEFT JOIN T2 ON X.C1 = T2.C1
                   LEFT JOIN T3 Z ON X.C1 = Z.C1;
Materialization
Views and table expressions cannot always be merged. Example: Look at the following statements:
View-defining statement:

CREATE VIEW VIEW1 (VC1,VC2) AS
  SELECT SUM(C1),C2 FROM T1 GROUP BY C2;

View-referencing statement:

SELECT MAX(VC1) FROM VIEW1;
Column VC1 occurs as the argument of an aggregate function in the view-referencing statement. The values of VC1, as defined by the view-defining fullselect, are the result of applying the aggregate function SUM(C1) to groups after grouping the base table T1 by column C2. No equivalent single SQL SELECT statement can be executed against the base table T1 to achieve the intended result. There is no way to specify that aggregate functions should be applied successively.
Table 186. Cases when DB2 performs view or table expression materialization (continued). The X indicates a case of materialization. Notes follow the table.

SELECT FROM view or          View definition or table expression uses...(2)
table expression             GROUP  DISTINCT  Aggregate  Aggregate function  UNION  UNION
uses...(1)                   BY               function   DISTINCT                   ALL(4)
Aggregate function           X      X         X          X                   X      X
Aggregate function DISTINCT  X      X         X          X                   X
SELECT subset of view or            X                                        X
table expression columns
Notes to Table 186:
1. If the view is referenced as the target of an INSERT, UPDATE, or DELETE, then view merge is used to satisfy the view reference. Only updatable views can be the target in these statements. See Chapter 5 of DB2 SQL Reference for information about which views are read-only (not updatable). An SQL statement can reference a particular view multiple times, where some of the references can be merged and some must be materialized.
2. If a SELECT list contains a host variable in a table expression, then materialization occurs. For example:
SELECT C1 FROM (SELECT :HV1 AS C1 FROM T1) X;
If a view or nested table expression is defined to contain a user-defined function, and if that user-defined function is defined as NOT DETERMINISTIC or EXTERNAL ACTION, then the view or nested table expression is always materialized.
3. Additional details about materialization with outer joins:
v If a WHERE clause exists in a view or table expression, and it does not contain a column, materialization occurs. Example:
SELECT X.C1 FROM (SELECT C1 FROM T1 WHERE 1=1) X LEFT JOIN T2 Y ON X.C1=Y.C1;
v If the outer join is a full outer join and the SELECT list of the view or nested table expression does not contain a standalone column for the column that is used in the outer join ON clause, then materialization occurs. Example:
SELECT X.C1 FROM (SELECT C1+10 AS C2 FROM T1) X FULL JOIN T2 Y ON X.C2=Y.C2;
v If there is no column in a SELECT list of a view or nested table expression, materialization occurs. Example:
SELECT X.C1 FROM (SELECT 1+2+:HV1 AS C1 FROM T1) X LEFT JOIN T2 Y ON X.C1=Y.C1;
v If the SELECT list of a view or nested table expression contains a CASE expression, and the result of the CASE expression is referenced in the outer query block, then materialization occurs. Example:
SELECT X.C1 FROM T1 X LEFT JOIN (SELECT CASE C2 WHEN 5 THEN 10 ELSE 20 END AS YC1 FROM T2) Y ON X.C1 = Y.YC1;
4. DB2 cannot avoid materialization for UNION ALL in all cases. Some of the situations in which materialization occurs include:
v When the view is the operand in an outer join for which nulls are used for non-matching values, materialization occurs. This situation happens when the view is either operand in a full outer join, the right operand in a left outer join, or the left operand in a right outer join.
v If the number of tables would exceed 225 after distribution, then distribution does not occur, and the result is materialized.
Table 187 shows a subset of columns in a plan table for the query.
Table 187. Plan table output for an example with view materialization

QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME  TABLE_TYPE  METHOD
1         1       SELECT       DEPT   T           0
2         1       NOCOSUB      V1DIS  W           0
2         2       NOCOSUB             ?           3
Table 187. Plan table output for an example with view materialization (continued)

QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME  TABLE_TYPE  METHOD
3         1       NOCOSUB      EMP    T           0
3         2       NOCOSUB             ?           3
Notice how TNAME contains the name of the view and TABLE_TYPE contains W to indicate that DB2 chooses materialization for the reference to the view because of the use of SELECT DISTINCT in the view definition. Example: Consider the following statements, which define a view and reference the view:
View-defining statement:

CREATE VIEW V1NODIS (SALARY, WORKDEPT) AS
  (SELECT SALARY, WORKDEPT FROM DSN8810.EMP)

View-referencing statement:

SELECT * FROM DSN8810.DEPT
  WHERE DEPTNO IN (SELECT WORKDEPT FROM V1NODIS)
Because the view is defined without DISTINCT, DB2 chooses merge instead of materialization. In the sample output, the name of the view does not appear in the plan table, but the name of the table on which the view is based does appear. Table 188 shows a sample plan table for the query.
Table 188. Plan table output for an example with view merge

QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME  TABLE_TYPE  METHOD
1         1       SELECT       DEPT   T           0
2         1       NOCOSUB      EMP    T           0
2         2       NOCOSUB             ?           3
For an example of when a view definition contains a UNION ALL and DB2 can distribute joins and aggregations and avoid materialization, see Using EXPLAIN to determine UNION activity and query rewrite. When DB2 avoids materialization in such cases, TABLE_TYPE contains a Q to indicate that DB2 uses an intermediate result that is not materialized, and TNAME shows the name of this intermediate result as DSNWFQB(xx), where xx is the number of the query block that produced the result.
The QBLOCK_TYPE column in the plan table indicates union activity. For a UNION ALL, the column contains UNIONA. For UNION, the column contains UNION. When QBLOCK_TYPE=UNION, the METHOD column on the same row is set to 3 and the SORTC_UNIQ column is set to Y to indicate that a sort is necessary to remove duplicates. As with other views and table expressions, the plan table also shows when DB2 uses materialization instead of merge. Example: Consider the following statements, which define a view, reference the view, and show how DB2 rewrites the referencing statement:
View-defining statement: The view is created on three tables that contain weekly data:

CREATE VIEW V1 (CUSTNO, CHARGES, DATE) AS
  SELECT CUSTNO, CHARGES, DATE
    FROM WEEK1
    WHERE DATE BETWEEN '01/01/2000' AND '01/07/2000'
  UNION ALL
  SELECT CUSTNO, CHARGES, DATE
    FROM WEEK2
    WHERE DATE BETWEEN '01/08/2000' AND '01/14/2000'
  UNION ALL
  SELECT CUSTNO, CHARGES, DATE
    FROM WEEK3
    WHERE DATE BETWEEN '01/15/2000' AND '01/21/2000';

View-referencing statement: For each customer in California, find the average charges during the first and third Friday of January 2000:

SELECT V1.CUSTNO, AVG(V1.CHARGES)
  FROM CUST, V1
  WHERE CUST.CUSTNO=V1.CUSTNO
    AND CUST.STATE='CA'
    AND DATE IN ('01/07/2000','01/21/2000')
  GROUP BY V1.CUSTNO;

Rewritten statement (assuming that CHARGES is defined as NOT NULL):

SELECT CUSTNO_U, SUM(SUM_U)/SUM(CNT_U)
  FROM
  ( SELECT WEEK1.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
      FROM CUST, WEEK1
      WHERE CUST.CUSTNO=WEEK1.CUSTNO
        AND CUST.STATE='CA'
        AND DATE BETWEEN '01/01/2000' AND '01/07/2000'
        AND DATE IN ('01/07/2000','01/21/2000')
      GROUP BY WEEK1.CUSTNO
    UNION ALL
    SELECT WEEK3.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
      FROM CUST, WEEK3
      WHERE CUST.CUSTNO=WEEK3.CUSTNO
        AND CUST.STATE='CA'
        AND DATE BETWEEN '01/15/2000' AND '01/21/2000'
        AND DATE IN ('01/07/2000','01/21/2000')
      GROUP BY WEEK3.CUSTNO
  ) AS X(CUSTNO_U,SUM_U,CNT_U)
  GROUP BY CUSTNO_U;
Table 189 shows a subset of columns in a plan table for the query.
Table 189. Plan table output for an example with a view with UNION ALLs

QBLOCKNO  PLANNO  TNAME        TABLE_TYPE  METHOD  QBLOCK_TYPE  PARENT_QBLOCKNO
1         1       DSNWFQB(02)  Q           0                    0
1         2                    ?           3                    0
2         1                    ?           0       UNIONA       1
3         1       CUST         T           0                    2
Table 189. Plan table output for an example with a view with UNION ALLs (continued)

QBLOCKNO  PLANNO  TNAME  TABLE_TYPE  METHOD  QBLOCK_TYPE  PARENT_QBLOCKNO
3         2       WEEK1  T           1                    2
4         1       CUST   T           0                    2
4         2       WEEK3  T           2                    2
Notice how DB2 eliminates the second subselect of the view definition from the rewritten query and how the plan table indicates this removal by showing a UNION ALL for only the first and third subselect in the view definition. The Q in the TABLE_TYPE column indicates that DB2 does not materialize the view.
v COL IN (list)

Note: op is =, <>, >, <, <=, or >=, and literal is either a host variable, constant, or special register. The literals in the BETWEEN predicate need not be identical.
Implied predicates generated through predicate transitive closure are also considered for first step evaluation.
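For example, in the following sketch (names assumed), the join predicate and the local predicate on T1.C1 let DB2 generate the implied predicate T2.C1 = 5 through transitive closure:

SELECT * FROM T1, T2
  WHERE T1.C1 = T2.C1
    AND T1.C1 = 5;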
An estimate in cost category A means that DB2 had adequate information to make the estimate. That estimate is not likely to be 100% accurate, but it is likely to be more accurate than any estimate in cost category B. DB2 puts estimates into cost category B when it is forced to use default values for its estimates, such as when no statistics are available, or because host variables are used in a query. See the description of the REASON column in Table 191 on page 987 for more information about how DB2 determines into which cost category an estimate goes.
v Give a system programmer a basis for entering service-unit values by which to govern dynamic statements. Information about using predictive governing is in Predictive governing on page 730.
This section describes the following tasks to obtain and use cost estimate information from EXPLAIN:
1. Creating a statement table
2. Populating and maintaining a statement table on page 988
3. Retrieving rows from a statement table on page 988
4. The implications of cost categories on page 989
See Part 6 of DB2 Application Programming and SQL Guide for more information about how to change applications to handle the SQLCODEs that are associated with predictive governing.
CREATE TABLE DSN_STATEMNT_TABLE
  (QUERYNO       INTEGER      NOT NULL WITH DEFAULT,
   APPLNAME      CHAR(8)      NOT NULL WITH DEFAULT,
   PROGNAME      VARCHAR(128) NOT NULL WITH DEFAULT,
   COLLID        VARCHAR(128) NOT NULL WITH DEFAULT,
   GROUP_MEMBER  CHAR(8)      NOT NULL WITH DEFAULT,
   EXPLAIN_TIME  TIMESTAMP    NOT NULL WITH DEFAULT,
   STMT_TYPE     CHAR(6)      NOT NULL WITH DEFAULT,
   COST_CATEGORY CHAR(1)      NOT NULL WITH DEFAULT,
   PROCMS        INTEGER      NOT NULL WITH DEFAULT,
   PROCSU        INTEGER      NOT NULL WITH DEFAULT,
   REASON        VARCHAR(254) NOT NULL WITH DEFAULT,
   STMT_ENCODE   CHAR(1)      NOT NULL WITH DEFAULT);

Figure 114. Current format of DSN_STATEMNT_TABLE
Your statement table can use an older format in which the STMT_ENCODE column does not exist, PROGNAME has a data type of CHAR(8), and COLLID has
a data type of CHAR(18). However, use the most current format because it gives you the most information. You can alter a statement table in the older format to a statement table in the current format. Table 191 shows the content of each column.
Table 191. Descriptions of columns in DSN_STATEMNT_TABLE

QUERYNO
    A number that identifies the statement being explained. See the description of the QUERYNO column in Table 165 on page 936 for more information. If QUERYNO is not unique, the value of EXPLAIN_TIME is unique.

APPLNAME
    The name of the application plan for the row, or blank. See the description of the APPLNAME column in Table 165 on page 936 for more information.

PROGNAME
    The name of the program or package containing the statement being explained, or blank. See the description of the PROGNAME column in Table 165 on page 936 for more information.

COLLID
    The collection ID for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.

GROUP_MEMBER
    The member name of the DB2 that executed EXPLAIN, or blank. See the description of the GROUP_MEMBER column in Table 165 on page 936 for more information.

EXPLAIN_TIME
    The time at which the statement is processed. This time is the same as the BIND_TIME column in PLAN_TABLE.

STMT_TYPE
    The type of statement being explained. Possible values are:
    SELECT    SELECT
    INSERT    INSERT
    UPDATE    UPDATE
    DELETE    DELETE
    SELUPD    SELECT with FOR UPDATE OF
    DELCUR    DELETE WHERE CURRENT OF CURSOR
    UPDCUR    UPDATE WHERE CURRENT OF CURSOR

COST_CATEGORY
    Indicates if DB2 was forced to use default values when making its estimates. Possible values:
    A    Indicates that DB2 had enough information to make a cost estimate without using default values.
    B    Indicates that some condition exists for which DB2 was forced to use default values. See the values in REASON to determine why DB2 was unable to put this estimate in cost category A.

PROCMS
    The estimated processor cost, in milliseconds, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 milliseconds, which is equivalent to approximately 24.8 days. If the estimated value exceeds this maximum, the maximum value is reported.

PROCSU
    The estimated processor cost, in service units, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 service units. If the estimated value exceeds this maximum, the maximum value is reported.
Table 191. Descriptions of columns in DSN_STATEMNT_TABLE (continued)

REASON
    A string that indicates the reasons for putting an estimate into cost category B:
    HAVING CLAUSE
        A subselect in the SQL statement contains a HAVING clause.
    HOST VARIABLES
        The statement uses host variables, parameter markers, or special registers.
    REFERENTIAL CONSTRAINTS
        Referential constraints of the type CASCADE or SET NULL exist on the target table of a DELETE statement.
    TABLE CARDINALITY
        The cardinality statistics are missing for one or more of the tables that are used in the statement. Or, the statement required the materialization of views or nested table expressions.
    TRIGGERS
        Triggers are defined on the target table of an INSERT, UPDATE, or DELETE statement.
    UDF
        The statement uses user-defined functions.

STMT_ENCODE
    Encoding scheme of the statement. If the statement represents a single CCSID set, the possible values are:
    A    ASCII
    E    EBCDIC
    U    Unicode
    If the statement has multiple CCSID sets, the value is M.
The QUERYNO, APPLNAME, PROGNAME, COLLID, and EXPLAIN_TIME columns contain the same values as corresponding columns of PLAN_TABLE for a given plan. You can use these columns to join the plan table and statement table:
SELECT A.*, PROCMS, COST_CATEGORY
  FROM JOE.PLAN_TABLE A, JOE.DSN_STATEMNT_TABLE B
  WHERE A.APPLNAME = 'APPL1' AND
        A.APPLNAME = B.APPLNAME AND
        A.QUERYNO = B.QUERYNO AND
        A.PROGNAME = B.PROGNAME AND
        A.COLLID = B.COLLID AND
        A.BIND_TIME = B.EXPLAIN_TIME
  ORDER BY A.QUERYNO, A.QBLOCKNO, A.PLANNO, A.MIXOPSEQ;
Figure 116 shows parallel I/O operations. With parallel I/O, DB2 prefetches data from the 3 partitions at one time. The processor processes the first request from each partition, then the second request from each partition, and so on. The processor is not waiting for I/O, but there is still only one processing task.
Figure 116. I/O processing techniques. Query processing using parallel I/O operations. A single CP task processes the prefetched requests in turn (P1R1, P2R1, P3R1, P1R2, and so on) while I/O proceeds concurrently on partitions P1, P2, and P3.
Figure 117 on page 993 shows parallel CP processing. With CP parallelism, DB2 can use multiple parallel tasks to process the query. Three tasks working concurrently can greatly reduce the overall elapsed time for data-intensive and processor-intensive queries. The same principle applies for Sysplex query parallelism, except that the work can cross the boundaries of a single CPC.
Figure 117. CP and I/O processing techniques. Query processing using CP parallelism. Three CP tasks process partitions P1, P2, and P3 concurrently, each overlapped with its own I/O stream. The tasks can be contained within a single CPC or can be spread out among the members of a data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries that can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages (an I/O-intensive query).
A typical I/O-intensive query is something like the following query, assuming that a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS
  WHERE BALANCE > 0 AND
        DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time, to process rows. Those include:
- Queries with intensive data scans and high selectivity. Those queries involve large volumes of data to be scanned but relatively few rows that meet the search criteria.
- Queries containing aggregate functions. Column functions (such as MIN, MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be scanned but return only a single aggregate result.
- Queries accessing long data rows. Those queries access tables with long data rows, and the ratio of rows per page is very low (one row per page, for example).
- Queries requiring large amounts of central processor time. Those queries might be read-only queries that are complex, data-intensive, or that involve a sort.
A typical processor-intensive query is something like:
SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
       AVG(PRICE) AS AVG_PRICE,
       AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
       SUM(TAX) AS SUM_TAX,
       SUM(QTY_SOLD) AS SUM_QTY_SOLD,
       SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
       AVG(DISCOUNT) AS AVG_DISCOUNT,
       ORDERSTATUS,
       COUNT(*) AS COUNT_ORDERS
  FROM ORDER_TABLE
  WHERE SHIPPER = 'OVERNIGHT' AND
        SHIP_DATE < DATE('1996-01-01')
  GROUP BY ORDERSTATUS
  ORDER BY ORDERSTATUS;
Terminology: When the term task is used with information about parallel processing, consider the context. For parallel query CP processing or Sysplex query parallelism, a task is an actual z/OS execution unit used to process a query. For parallel I/O processing, a task simply refers to the processing of one of the concurrent I/O streams.

A parallel group is the term used to name a particular set of parallel operations (parallel tasks or parallel I/O operations). A query can have more than one parallel group, but each parallel group within the query is identified by its own unique ID number.

The degree of parallelism is the number of parallel tasks or I/O operations that DB2 determines can be used for the operations on the parallel group. The maximum number of parallel operations that DB2 can generate is 254. However, for most queries and DB2 environments, DB2 chooses a lower number. You might need to limit the maximum number further because more parallel operations consume processor, real storage, and I/O resources. If resource consumption is high in your parallelism environment, use the MAX DEGREE field on installation panel DSNTIP4 to explicitly limit the maximum number of parallel operations that DB2 generates, as explained in Enabling parallel processing on page 997.

In a parallel group, an originating task is the TCB (SRB for distributed requests) that coordinates the work of all the parallel tasks. Parallel tasks are executable units composed of special SRBs, which are called preemptable SRBs. With preemptable SRBs, the z/OS dispatcher can interrupt a task at any time to run other work at the same or higher dispatching priority. For non-distributed parallel work, parallel tasks run under a type of preemptable SRB called a client SRB, which lets the parallel task inherit the importance of the originating address space. For distributed requests, the parallel tasks run under a preemptable SRB called an enclave SRB. Enclave SRBs are described more fully in Using z/OS workload management to set performance objectives on page 745.
This section guides you through the following analyses:
1. Determining the nature of the query (what balance of processing and I/O resources it needs)
2. Determining how many partitions the table space should have to meet your performance objective, based on the nature of the query and on the processor and I/O configuration at your site
By partitioning as indicated previously, the query is brought into balance by reducing the I/O wait time. If the number of partitions is less than the number of CPs available on your system, increase this number close to the number of CPs available. By doing so, other queries that read this same table, but that are more processor-intensive, can take advantage of the additional processing power.

Example: Suppose that you have a 10-way CPC and the calculated number of partitions is five. Instead of limiting the table space to five partitions, use 10, to equal the number of CPs in the CPC.

Example configurations for an I/O-intensive query: If the I/O cost of your queries is about twice as much as the processing cost, the optimal number of partitions when run on a 10-way processor is 20 (2 * number of processors). Figure 118 shows an I/O configuration that minimizes the elapsed time and allows the CPC to run at 100% busy. It assumes the suggested guideline of four devices per control unit and four channels per control unit.7
Figure 118. I/O configuration that maximizes performance for an I/O-intensive query. The configuration consists of a 10-way CPC, 20 ESCON channels, an ESCON director, device data paths, storage control units, and disk devices.
7. A lower-cost configuration could use as few as two to three channels per control unit shared among all controllers using an ESCON director. However, using four paths minimizes contention and provides the best performance. Paths might also need to be taken offline for service.
ten logical work ranges might be created. This example would result in a degree of parallelism of 10 and reduced elapsed time. DB2 tries to create equal work ranges by dividing the total cost of running the work by the logical partition cost. This division often has some leftover work. In this case, DB2 creates an additional task to handle the extra work, rather than making all the work ranges larger, which would reduce the degree of parallelism.

To rebalance partitions that have become skewed, reorganize the table space, specifying the REBALANCE keyword on the REORG utility statement.
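For example, a REORG statement that rebalances a skewed range of partitions might look like the following sketch; the database name, table space name, and partition range are illustrative:

   REORG TABLESPACE PAYRLDB.ACCTSPC PART 1:10 REBALANCE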
You can also change the special register default from 1 to ANY for the entire DB2 subsystem by modifying the CURRENT DEGREE field on installation panel DSNTIP4 (see the sketch after this list).
v If you bind with isolation CS, also choose the option CURRENTDATA(NO), if possible. This option can improve performance in general, but it also ensures that DB2 considers parallelism for ambiguous cursors. If you bind with CURRENTDATA(YES) and DB2 cannot tell whether the cursor is read-only, DB2 does not consider parallelism. When a cursor is read-only, it is recommended that you specify the FOR FETCH ONLY or FOR READ ONLY clause on the DECLARE CURSOR statement to explicitly indicate that the cursor is read-only.
v The virtual buffer pool parallel sequential threshold (VPPSEQT) value must be large enough to provide adequate buffer pool space for parallel processing. For more information about VPPSEQT, see Buffer pool thresholds on page 673.
If you enable parallel processing when DB2 estimates a given query's I/O and central processor cost is high, multiple parallel tasks can be activated if DB2 estimates that elapsed time can be reduced by doing so.

Recommendation: For parallel sorts, allocate sufficient work files to maintain performance.
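For example, the parallelism settings described above might be applied as follows; the collection and package names are illustrative. The first statement is SQL for dynamic statements, and the second is a DSN subcommand sketch for static SQL:

   SET CURRENT DEGREE = 'ANY';

   BIND PACKAGE(COLL1) MEMBER(PROG1) -
        DEGREE(ANY) CURRENTDATA(NO)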
Special requirements for CP parallelism: DB2 must be running on a central processor complex that contains two or more tightly coupled processors (sometimes called central processors, or CPs). If only one CP is online when the query is bound, DB2 considers only parallel I/O operations. DB2 also considers only parallel I/O operations if you declare a cursor WITH HOLD and bind with isolation RR or RS. For more restrictions on parallelism, see Table 192.

For complex queries, run the query in parallel within a member of a data sharing group. With Sysplex query parallelism, use the power of the data sharing group to process individual complex queries on many members of the data sharing group. For more information about how you can use the power of the data sharing group to run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and Administration.

Limiting the degree of parallelism: If you want to limit the maximum number of parallel tasks that DB2 generates, you can use the MAX DEGREE field on installation panel DSNTIP4. Changing MAX DEGREE, however, is not the way to turn parallelism off. You use the DEGREE bind parameter or CURRENT DEGREE special register to turn parallelism off.
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that you can take advantage of parallelism, DB2 does not pick one type of hybrid join (SORTN_JOIN=Y) when the plan or package is bound with DEGREE(ANY) or when the CURRENT DEGREE special register is set to ANY.
Table 193. Part of PLAN_TABLE for single table access

TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
v Example 2: nested loop join Consider a query that results in a series of nested loop joins for three tables, T1, T2 and T3. T1 is the outermost table, and T3 is the innermost table. DB2 decides at bind time to initiate three concurrent requests to retrieve data from each of the three tables. Each request accesses part of T1 and all of T2 and T3. For the nested loop join method with sort, all the retrievals are in the same parallel group except for star join with ACCESSTYPE=T (sparse index). Part of PLAN_TABLE appears as shown in Table 194:
Table 194. Part of PLAN_TABLE for a nested loop join

TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     1       3        1          3       1          (null)     (null)
T3     1       3        1          3       1          (null)     (null)
v Example 3: merge scan join Consider a query that causes a merge scan join between two tables, T1 and T2. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. The scan and sort of T1 occurs in one parallel group. The scan and sort of T2 occurs in another parallel group. Furthermore, the merging phase can potentially be done in parallel. Here, a third parallel group is used to initiate three concurrent requests on each intermediate sorted table. Part of PLAN_TABLE appears as shown in Table 195:
Table 195. Part of PLAN_TABLE for a merge scan join

TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     2       6        2          3       3          1          2
In a multi-table join, DB2 might also execute the sort for a composite that involves more than one table in a parallel task. DB2 uses a cost basis model to determine whether to use parallel sort in all cases. When DB2 decides to use parallel sort, SORTC_PGROUP_ID and SORTN_PGROUP_ID indicate the parallel group identifier. Consider a query that joins three tables, T1, T2, and T3, and uses a merge scan join between T1 and T2, and then between the composite and T3. If DB2 decides, based on the cost model, that all sorts in this query are to be performed in parallel, part of PLAN_TABLE appears as shown in Table 196 on page 1001:
Table 196. Part of PLAN_TABLE for a multi-table, merge scan join

TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     2       6        2          6       3          1          2
T3     2       6        4          6       5          3          4
v Example 4: hybrid join
Consider a query that results in a hybrid join between two tables, T1 and T2. Furthermore, T1 needs to be sorted; as a result, in PLAN_TABLE the T2 row has SORTC_JOIN=Y. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. Parallel operations are used for a join through a clustered index of T2. Because T2's RIDs can be retrieved by initiating concurrent requests on the clustered index, the joining phase is a parallel step. The retrieval of T2's RIDs and T2's rows are in the same parallel group. Part of PLAN_TABLE appears as shown in Table 197:
Table 197. Part of PLAN_TABLE for a hybrid join

TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     4       6        2          6       2          1          (null)
Execution time: For each parallel group, parallelism (either CP or I/O) can execute at a reduced degree or degrade to sequential operations for the following reasons:
v Amount of virtual buffer pool space available
v Host variable values
v Availability of the hardware sort assist facility
v Ambiguous cursors
v A change in the number or configuration of online processors
v The join technique that DB2 uses (I/O parallelism is not supported when DB2 uses the star join technique)
At execution time, a plan using Sysplex query parallelism can use CP parallelism. All parallelism modes can degenerate to a sequential plan. No other changes are possible.
The PARALLEL REQUEST field in this example shows that DB2 was negotiating buffer pool resource for 282 parallel groups. Of those 282 groups, only 5 were degraded because of a lack of buffer pool resource. A large number in the DEGRADED PARALLEL field could indicate that there are not enough buffers that can be used for parallel processing.
Accounting trace
By default, DB2 rolls task accounting into an accounting record for the originating task. OMEGAMON also summarizes all accounting records generated for a parallel query and presents them as one logical accounting record. OMEGAMON presents the times for the originating tasks separately from the accumulated times for all the parallel tasks. As shown in Figure 119 on page 1003, CPU TIME-AGENT is the time for the originating tasks, while CPU TIME-PAR.TASKS (A) is the accumulated processing time for the parallel tasks.
Figure 119. Partial OMEGAMON accounting trace for parallel processing. The report shows TIMES/EVENTS for APPL (CLASS 1) and DB2 (CLASS 2), including CPU TIME for the AGENT and for PAR.TASKS (A); CLASS 3 suspension times; and a QUERY PARALLEL. block with fields such as MAXIMUM DEGREE, RAN AS PLANNED (B), RAN REDUCED (C), SEQ - CURSOR (D), SEQ - NO ESA (E), SEQ - NO BUF (F), DISABLED BY RLF (G), and REFORM PARAL-CONFIG (H).
As the report shows, the values for CPU TIME and I/O WAIT TIME are larger than the elapsed time. The processor and suspension times can be greater than the elapsed time because these two times are accumulated from multiple parallel tasks; the elapsed time would be less than the processor and suspension times if those times were accumulated sequentially. If you have baseline accounting data for the same thread run without parallelism, the elapsed times and processor times should not be significantly larger when that query is run in parallel. If they are significantly larger, or if response time is poor, you need to examine the accounting data for the individual tasks. Use the OMEGAMON Record Trace for the IFCID 0003 records of the thread that you want to examine. Use the performance trace if you need more information to determine the cause of the response time problem.
Performance trace
The performance trace can give you information about tasks within a group. To determine the actual number of parallel tasks used, refer to field QW0221AD in IFCID 0221, as mapped by macro DSNDQW03. The 0221 record also gives you information about the key ranges used to partition the data. IFCID 0222 contains the elapsed time information for each parallel task and each parallel group in each SQL query. OMEGAMON presents this information in its SQL Activity trace.
If your queries are running sequentially or at a reduced degree because of a lack of buffer pool resources, the QW0221C field of IFCID 0221 indicates which buffer pool is constrained.
QBSTJIS is the total number of requested prefetch I/O streams that were denied because of a storage shortage in the buffer pool. (There is one I/O stream per parallel task.) QBSTPQF is the total number of times that DB2 could not allocate enough buffer pages to allow a parallel group to run to the planned degree. As an example, assume QBSTJIS is 100 000 and QBSTPQF is 2500:
(100000 / 2500) * 32 = 1280
Use ALTER BUFFERPOOL to increase the current VPSIZE by 2560 buffers to alleviate the degree degradation problem. Use the DISPLAY BUFFERPOOL command to see the current VPSIZE.
v Physical contention
As much as possible, put data partitions on separate physical devices to minimize contention. Try not to use more partitions than there are internal paths in the controller.
v Run time host variables
A host variable can determine the qualifying partitions of a table for a given query. In such cases, DB2 defers the determination of the planned degree of parallelism until run time, when the host variable value is known.
v Updatable cursor
At run time, DB2 might determine that an ambiguous cursor is updatable. This condition appears in D in the accounting report.
v Proper hardware and software support
If you do not have the hardware sort facility at run time, and a sort merge join is needed, you see a value in E.
v A change in the configuration of online processors
If there are fewer online processors at run time than at bind time, DB2 reformulates the parallel degree to take best advantage of the current processing power. This reformulation is indicated by a value in H in the accounting report.

Locking considerations for repeatable read applications: For CP parallelism, locks are obtained independently by each task. Be aware that this situation can possibly increase the total number of locks taken for applications that:
v Use an isolation level of repeatable read
v Use CP parallelism
v Repeatedly access the table space using a lock mode of IS without issuing COMMITs
Recommendation: As is recommended for all repeatable-read applications, issue frequent COMMITs to release the lock resources that are held. Repeatable read or read stability isolation cannot be used with Sysplex query parallelism.
The default value for CURRENT DEGREE is 1 unless your installation has changed the default for the CURRENT DEGREE special register.
v Set the parallel sequential threshold (VPPSEQT) to 0.
v Add a row to your resource limit facility's specification table (RLST) for your plan, package, or authorization ID with the RLFFUNC value set to 3 to disable I/O parallelism, 4 to disable CP parallelism, or 5 to disable Sysplex query parallelism. To disable all types of parallelism, you need a row for all three types (assuming that Sysplex query parallelism is enabled on your system). A sketch follows this list.
In a system with a very high processor utilization rate (that is, greater than 98 percent), I/O parallelism might be a better choice because of the increase in processor overhead with CP parallelism. In this case, you could disable CP parallelism for your dynamic queries by putting a 4 in the resource limit specification table for the plan or package. If you have a Sysplex, you might want to use a 5 to disable Sysplex query parallelism, depending on how high processor utilization is in the members of the data sharing group.
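For example, rows like the following disable all three modes of parallelism for one plan. This is a minimal sketch: the RLST name and owner (SYSADM.DSNRLST01) and the plan name (APPL1) are illustrative, a blank AUTHID applies the row to all authorization IDs, and your RLST might carry additional columns:

   INSERT INTO SYSADM.DSNRLST01 (RLFFUNC, AUTHID, PLANNAME)
     VALUES ('3', ' ', 'APPL1');     -- disable I/O parallelism
   INSERT INTO SYSADM.DSNRLST01 (RLFFUNC, AUTHID, PLANNAME)
     VALUES ('4', ' ', 'APPL1');     -- disable CP parallelism
   INSERT INTO SYSADM.DSNRLST01 (RLFFUNC, AUTHID, PLANNAME)
     VALUES ('5', ' ', 'APPL1');     -- disable Sysplex query parallelism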
To determine if parallelism has been disabled by a value in your resource limit specification table (RLST), look for a non-zero value in field QXRLFDPA in IFCID 0002 or 0003 (shown in G in Figure 119 on page 1003). The QW0022RP field in IFCID 0022 indicates whether this particular statement was disabled. For more information about how the resource limit facility governs modes of parallelism, see Descriptions of the RLST columns on page 725.
Characteristics of DRDA
With DRDA, the application can remotely bind packages and can execute packages of static or dynamic SQL that have previously been bound at that location. DRDA has the following characteristics and benefits:
v With DRDA access, an application can access data at any server that supports DRDA, not just a DB2 server on a z/OS operating system.
v DRDA supports all SQL features, including user-defined functions, LOBs, and stored procedures.
v DRDA can avoid multiple binds and minimize the number of binds that are required.
v DRDA supports multiple-row FETCH.
DRDA is the preferred method for remote access with DB2.
BIND options
If appropriate for your applications, consider the following bind options to improve performance:
v Use the DEFER(PREPARE) bind option, which can reduce the number of messages that must be sent back and forth across the network. For more information on using the DEFER(PREPARE) option, see Part 4 of DB2 Application Programming and SQL Guide.
v Bind application plans and packages with ISOLATION(CS) to reduce contention and message overhead.
A sketch of such a bind follows this list.
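For example, a remote bind that combines these options might look like the following DSN subcommand sketch; the location, collection, and member names are illustrative:

   BIND PACKAGE(SERVLOC.COLL1) MEMBER(PROG1) -
        DEFER(PREPARE) ISOLATION(CS)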
decrease your processor costs. For more information on how to use stored procedures, see Part 6 of DB2 Application Programming and SQL Guide.

Use the RELEASE statement and the DISCONNECT(EXPLICIT) bind option: The RELEASE statement minimizes the network traffic that is needed to release a remote connection at commit time. For example, if the application has connections to several different servers, specify the RELEASE statement when the application has completed processing for each server. The RELEASE statement does not close cursors, release any resources, or prevent further use of the connection until the COMMIT is issued. It just makes the processing at COMMIT time more efficient. The bind option DISCONNECT(EXPLICIT) destroys all remote connections for which RELEASE was specified.

Use the COMMIT ON RETURN YES clause: Consider using the COMMIT ON RETURN YES clause of the CREATE PROCEDURE statement to indicate that DB2 should issue an implicit COMMIT on behalf of the stored procedure upon return from the CALL statement. Using the clause can reduce the length of time locks are held and can reduce network traffic. With COMMIT ON RETURN YES, any updates made by the client before calling the stored procedure are committed with the stored procedure changes. See Part 6 of DB2 Application Programming and SQL Guide for more information.

Set the CURRENT RULES special register to DB2: When requesting LOB data, set the CURRENT RULES special register to DB2 instead of to STD before performing a CONNECT. A value of DB2, which is the default, can offer performance advantages. When a DB2 UDB for z/OS server receives an OPEN request for a cursor, the server uses the value in the CURRENT RULES special register to determine whether the application intends to switch between LOB values and LOB locator values when fetching different rows in the cursor. If you specify a value of DB2 for CURRENT RULES, the application indicates that the first FETCH request will specify the format for each LOB column in the answer set and that the format will not change in a subsequent FETCH request. However, if you set the value of CURRENT RULES to STD, the application intends to fetch a LOB column into either a LOB locator host variable or a LOB host variable. Although a value of STD for CURRENT RULES gives you more programming flexibility when you retrieve LOB data, you can get better performance if you use a value of DB2. With the STD option, the server will not block the cursor, while with the DB2 option it may block the cursor where it is possible to do so. For more information, see LOB data and its effect on block fetch for DRDA on page 1011.
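For example, an embedded application might issue the following statements, shown in the same COBOL embedding style as the other examples in this chapter; both are standard SQL statements, and the placement within the program is a hedged sketch:

   EXEC SQL SET CURRENT RULES = 'DB2' END-EXEC.
   ...
   EXEC SQL RELEASE ALL END-EXEC.
   EXEC SQL COMMIT END-EXEC.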
Both types of block fetch are used for DRDA and private protocol, but the implementation of continuous block fetch for DRDA is slightly different than that for private protocol.

Continuous block fetch: In terms of response times, continuous block fetch is most efficient for larger result sets because fewer messages are transmitted and because overlapped processing is performed at the requester and the server. However, continuous block fetch uses more networking resources than limited block fetch. When networking resources are critical, use limited block fetch to run applications. The requester can use both forms of blocking at the same time and with different servers.

If an application is doing read-only processing and can use continuous block fetch, the sequence goes like this:
1. The requester sends a message to open a cursor and begins fetching the block of rows at the server.
2. The server sends back a block of rows and the requester begins processing the first row.
3. The server continues to send blocks of rows to the requester, without further prompting. The requester processes the second and later rows as usual, but fetches them from a buffer on the requester's system.

For private protocol, continuous block fetch uses one conversation for each open cursor. Having a dedicated conversation for each cursor allows the server to continue sending until all the rows are returned.

For DRDA, only one conversation is used, and it must be made available to the other SQL statements that are in the application. Thus, the server usually sends back a subset of all the rows. The number of rows that the server sends depends on the following factors:
v The size of each row
v The number of extra blocks that are requested by the requesting system compared to the number of extra blocks that the server will return
For a DB2 UDB for z/OS requester, the EXTRA BLOCKS REQ field on installation panel DSNTIP5 determines the maximum number of extra blocks requested. For a DB2 UDB for z/OS server, the EXTRA BLOCKS SRV field on installation panel DSNTIP5 determines the maximum number of extra blocks allowed.
Example: Suppose that the requester asks for 100 extra query blocks and that the server allows only 50. The server returns no more than 50 extra query blocks. The server might choose to return fewer than 50 extra query blocks for any number of reasons that DRDA allows.
v Whether continuous block fetch is enabled, and the number of extra rows that the server can return if it regulates that number. To enable continuous block fetch for DRDA and to regulate the number of extra rows sent by a DB2 UDB for z/OS server, you must use the OPTIMIZE FOR n ROWS clause on your SELECT statement. See Optimizing for very large results sets for DRDA on page 1014 for more information.
If you want to use continuous block fetch for DRDA, have the application fetch all the rows of the cursor before doing any other SQL. Fetching all the rows first
prevents the requester from having to buffer the data, which can consume a lot of storage. Choose carefully which applications should use continuous block fetch for DRDA.

Limited block fetch: Limited block fetch guarantees the transfer of a minimum amount of data in response to each request from the requesting system. With limited block fetch, a single conversation is used to transfer messages and data between the requester and server for multiple cursors. Processing at the requester and server is synchronous. The requester sends a request to the server, which causes the server to send a response back to the requester. The server must then wait for another request to tell it what should be done next.

Block fetch with scrollable cursors for DRDA: When a DB2 UDB for z/OS requester uses a scrollable cursor to retrieve data from a DB2 UDB for z/OS server, the following conditions are true:
v The requester never requests more than 64 rows in a query block, even if more rows fit in the query block. In addition, the requester never requests extra query blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester allows extra query blocks to be requested.
v The requester discards rows of the result table if the application does not use those rows. Example: If the application fetches row n and then fetches row n+2, the requester discards row n+1. The application gets better performance for a blocked scrollable cursor if it mostly scrolls forward, fetches most of the rows in a query block, and avoids frequent switching between FETCH ABSOLUTE statements with negative and positive values.
v If the scrollable cursor does not use block fetch, the server returns one row for each FETCH statement.

LOB data and its effect on block fetch for DRDA: For a non-scrollable blocked cursor, the server sends all the non-LOB data columns for a block of rows in one message, including LOB locator values. As each row is fetched by the application, the requester obtains the non-LOB data columns directly from the query block. If the row contains non-null and non-zero length LOB values, those values are retrieved from the server at that time. This behavior limits the impact to the network by pacing the amount of data that is returned at any one time. If all LOB data columns are retrieved into LOB locator host variables or if the row does not contain any non-null or non-zero length LOB columns, then the whole row can be retrieved directly from the query block.

For a scrollable blocked cursor, the LOB data columns are returned at the same time as the non-LOB data columns. When the application fetches a row that is in the block, a separate message is not required to get the LOB columns.

Ensuring block fetch: General-use Programming Interface
To use either limited or continuous block fetch, DB2 must determine that the cursor is not used for updating or deleting. The easiest way to indicate that the cursor does not modify data is to add the FOR FETCH ONLY or FOR READ ONLY clause to the query in the DECLARE CURSOR statement as in the following example:
EXEC SQL
  DECLARE THISEMP CURSOR FOR
    SELECT EMPNO, LASTNAME, WORKDEPT, JOB
      FROM DSN8810.EMP
      WHERE WORKDEPT = 'D11'
      FOR FETCH ONLY
END-EXEC.
If you do not use FOR FETCH ONLY or FOR READ ONLY, DB2 still uses block fetch for the query if the following conditions are true:
v The cursor is a non-scrollable cursor, and the result table of the cursor is read-only. This applies to static and dynamic cursors except for read-only views. (See Chapter 5 of DB2 SQL Reference for information about declaring a cursor as read-only.)
v The cursor is a scrollable cursor that is declared as INSENSITIVE, and the result table of the cursor is read-only.
v The cursor is a scrollable cursor that is declared as SENSITIVE, the result table of the cursor is read-only, and the value of bind option CURRENTDATA is NO.
v The result table of the cursor is not read-only, but the cursor is ambiguous, and the value of bind option CURRENTDATA is NO. A cursor is ambiguous when:
- It is not defined with the clauses FOR FETCH ONLY, FOR READ ONLY, or FOR UPDATE OF.
- It is not defined on a read-only result table.
- It is not the target of a WHERE CURRENT OF clause on an SQL UPDATE or DELETE statement.
- It is in a plan or package that contains the SQL statements PREPARE or EXECUTE IMMEDIATE.

DB2 triggers block fetch for static SQL only when it can detect that no updates or deletes are in the application. For dynamic statements, because DB2 cannot detect what follows in the program, the decision to use block fetch is based on the declaration of the cursor. DB2 does not use continuous block fetch if the following conditions are true:
v The cursor is referred to in the statement DELETE WHERE CURRENT OF elsewhere in the program.
v The cursor statement appears that it can be updated at the requesting system. (DB2 does not check whether the cursor references a view at the server that cannot be updated.)

The following three tables summarize the conditions under which a DB2 server uses block fetch.
Table 199 shows the conditions for a scrollable cursor that is not used to retrieve a stored procedure result set.
Table 199. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is not used for a stored procedure result set

Isolation level  Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS    INSENSITIVE         Yes          Read-only    Yes
                 INSENSITIVE         No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    No
                                                  Updatable    No
                                                  Ambiguous    No
                 SENSITIVE           No           Read-only    Yes
                                                  Updatable    No
                                                  Ambiguous    Yes
UR               INSENSITIVE         Yes          Read-only    Yes
                 INSENSITIVE         No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    Yes
                 SENSITIVE           No           Read-only    Yes
Table 200 shows the conditions for a scrollable cursor that is used to retrieve a stored procedure result set.
Table 200. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is used for a stored procedure result set

Isolation level  Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS    INSENSITIVE         Yes          Read-only    Yes
                 INSENSITIVE         No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    No
                 SENSITIVE           No           Read-only    Yes
UR               INSENSITIVE         Yes          Read-only    Yes
                 INSENSITIVE         No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    Yes
                 SENSITIVE           No           Read-only    Yes
v The DB2 server value for the EXTRA BLOCKS SRV field on installation panel DSNTIP5. The maximum value that you can specify is 100.
v The client's extra query block limit, which is obtained from the DRDA MAXBLKEXT parameter received from the client. When DB2 UDB for z/OS acts as a DRDA client, you set this parameter at installation time with the EXTRA BLOCKS REQ field on installation panel DSNTIP5. The maximum value that you can specify is 100. DB2 Connect sets the MAXBLKEXT parameter to -1 (unlimited).
If the client does not support extra query blocks, the DB2 server on z/OS automatically reduces the value of n to match the number of rows that fit within a DRDA query block.
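For example, an application that downloads a large result set might encourage the server to return extra query blocks with a query like the following sketch; the table is from the DB2 sample database, and the value 1000 is illustrative:

   SELECT EMPNO, LASTNAME, WORKDEPT
     FROM DSN8810.EMP
     OPTIMIZE FOR 1000 ROWS;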
Recommendation for cursors that are defined WITH HOLD: Do not set a large number of query blocks for cursors that are defined WITH HOLD. If the application commits while there are still a lot of blocks in the network, DB2 buffers the blocks in the requesters memory (the ssnmDIST address space if the requester is a DB2 UDB for z/OS) before the commit can be sent to the server. For examples of performance problems that can occur from not using OPTIMIZE FOR n ROWS when downloading large amounts of data, see Part 4 of DB2 Application Programming and SQL Guide.
The OPTIMIZE FOR value of 20 rows is used for network blocking and access path selection.

When you use FETCH FIRST n ROWS ONLY, DB2 might use a fast implicit close. Fast implicit close means that during a distributed query, the DB2 server automatically closes the cursor when it prefetches the nth row if FETCH FIRST n ROWS ONLY is specified or when there are no more rows to return. Fast implicit close can improve performance because it can save an additional network transmission between the client and the server. DB2 uses fast implicit close when the following conditions are true:
v The query uses limited block fetch.
v The query retrieves no LOBs.
v The cursor is not a scrollable cursor.
v Either of the following conditions is true:
- The cursor is declared WITH HOLD, and the package or plan that contains the cursor is bound with the KEEPDYNAMIC(YES) option, or the cursor is declared WITH HOLD and the DRDA client passes the QRYCLSIMP parameter set to SERVER MUST CLOSE, SERVER DECIDES, or SERVER MUST NOT CLOSE.
- The cursor is not defined WITH HOLD.

When you use FETCH FIRST n ROWS ONLY and DB2 does a fast implicit close, the DB2 server closes the cursor after it prefetches n rows, or when there are no more rows.
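For example, a query coded as in the following sketch (the table is from the DB2 sample database) limits the result to 20 rows and, when the conditions above are met, lets the server close the cursor automatically after it prefetches the 20th row:

   SELECT EMPNO, LASTNAME
     FROM DSN8810.EMP
     WHERE WORKDEPT = 'D11'
     FETCH FIRST 20 ROWS ONLY
     OPTIMIZE FOR 20 ROWS;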
Serving system
For access using DB2 private protocol, the serving system is the DB2 system on which the SQL is dynamically executed. For access using DRDA, the serving system is the system on which your remotely bound package executes. If you are executing a package on a remote DBMS, then improving performance on the server depends on the nature of the server. If the remote DBMS on which the package executes is another DB2, then the information in Chapter 34, Using EXPLAIN to improve SQL performance, on page 931 is appropriate for access path considerations.

Other considerations that could affect performance on a DB2 server are:
v The maximum number of database access threads that the server allows to be allocated concurrently. (This is the MAX REMOTE ACTIVE field on installation panel DSNTIPE.) A request can be queued while waiting for an available thread. Making sure that requesters commit frequently can let threads be used by other requesters. See Setting thread limits for database access threads on page 741 for more information.
v The priority of database access threads on the remote system. A low priority could impede your application's distributed performance. See Using z/OS workload management to set performance objectives on page 745 for more information.
v For instructions on avoiding RACF calls at the server, see Controlling requests from remote applications on page 238, and more particularly Do you manage inbound IDs through DB2 or RACF? on page 243.

When DB2 is the server, it is a good idea to activate accounting trace class 7. This provides accounting information at the package level, which can be very useful in determining performance problems.
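For example, a command like the following sketch activates accounting trace class 7; the SMF destination is an assumption, so substitute the destination that your monitoring setup uses:

   -START TRACE(ACCTG) CLASS(7) DEST(SMF)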
Figure 120. Elapsed times in a DDF environment as reported by OMEGAMON. These times are valid for access that uses either DRDA or private protocol (except as noted). The figure shows a timeline of create thread, SQL requests, and commit, annotated with the requester accounting elapsed times Cls1 (1), Cls2 (2), Cls3 (3), and Cls3* (4), and the server times Cls1 (5), Cls2 (6), and Cls3 (7).
This figure is a simplified picture of the processes that go on in the serving system. It does not show block fetch statements and is only applicable to a single-row retrieval. The various elapsed times referred to in the header are:
v (1) - Requester Cls1
This time is reported in the ELAPSED TIME field under the APPL (CLASS 1) column near the top of the OMEGAMON accounting long trace for the requesting DB2 subsystem. It represents the elapsed time from the creation of the allied distributed thread until the termination of the allied distributed thread.
v (2) - Requester Cls2
This time is reported in the ELAPSED TIME field under the DB2 (CLASS 2) column near the top of the OMEGAMON accounting long trace for the
requesting DB2 subsystem. It represents the elapsed time from when the application passed the SQL statements to the local DB2 system until return. This is considered "in DB2" time.
v (3) - Requester Cls3
This time is reported in the TOTAL CLASS 3 field under the CLASS 3 SUSP column near the top of the OMEGAMON accounting long trace for the requesting DB2 system. It represents the amount of time the requesting DB2 system spent suspended waiting for locks or I/O.
v (4) - Requester Cls3* (requester wait time for activities not in DB2)
This time is reported in the SERVICE TASK SWITCH, OTHER SERVICE field of the OMEGAMON accounting report for the requesting DB2 subsystem. It is typically time spent waiting for the network and server to process the request.
v (5) - Server Cls1
This time is reported in the ELAPSED TIME field under the APPL (CLASS 1) column near the top of the OMEGAMON accounting long trace for the serving DB2 subsystem. It represents the elapsed time from the creation of the database access thread until the termination of the database access thread.
v (6) - Server Cls2
This time is reported in the ELAPSED TIME field under the DB2 (CLASS 2) column near the top of the OMEGAMON accounting long trace of the serving DB2 subsystem. It represents the elapsed time to process the SQL statements and the commit at the server.
v (7) - Server Cls3
This time is reported in the TOTAL CLASS 3 field under the CLASS 3 SUSP column near the top of the OMEGAMON accounting long trace for the serving DB2 subsystem. It represents the amount of time the serving DB2 system spent suspended waiting for locks or I/O.

The Class 2 processing time (the TCB time) at the requester does not include processing time at the server. To determine the total Class 2 processing time, add the Class 2 time at the requester to the Class 2 time at the server. Likewise, add the getpage counts, prefetch counts, locking counts, and I/O counts of the requester to the equivalent counts at the server. For private protocol, SQL activity is counted at both the requester and server. For DRDA, SQL activity is counted only at the server.
Figure 121. DDF block of a requester thread from an OMEGAMON accounting long trace. Key fields include CONVERSATIONS INITIATED (A), CONVERSATIONS QUEUED (B), SUCCESSFULLY ALLOC.CONV (C), CONT->LIM.BL.FTCH SWCH (D), MSG.IN BUFFER (E), and STMT BOUND AT SER (F).
Figure 122. DDF block of a server thread from an OMEGAMON accounting long trace. From the server's viewpoint, the block reports counts such as SQL RECEIVED, ROWS SENT, BLOCKS SENT, messages and bytes sent and received, and two-phase commit activity.
The accounting distributed fields for each serving or requesting location are collected from the viewpoint of this thread communicating with the other location identified. For example, SQL sent from the requester is SQL received at the server. Do not add together the distributed fields from the requester and the server.

Several fields in the distributed section merit specific attention. The number of conversations is reported in several fields:
v The number of conversation allocations is reported as CONVERSATIONS INITIATED (A).
v The number of conversation requests queued during allocation is reported as CONVERSATIONS QUEUED (B).
v The number of successful conversation allocations is reported as SUCCESSFULLY ALLOC.CONV (C).
v The number of times a switch was made from continuous block fetch to limited block fetch is reported as CONT->LIM.BL.FTCH (D). This is only applicable to access that uses DB2 private protocol.

You can use the difference between initiated allocations and successful allocations to identify a session resource constraint problem. If the number of conversations queued is high, or if the number of times a switch was made from continuous to limited block fetch is high, you might want to tune VTAM to increase the number of conversations. VTAM and network parameter definitions are important factors in the performance of DB2 distributed processing. For more information, see VTAM for MVS/ESA Network Implementation Guide.

Bytes sent, bytes received, messages sent, and messages received are recorded at both the requester and the server. They provide information on the volume of data transmitted. However, because of the way distributed SQL is processed for private protocol, more bytes may be reported as sent than are reported as received.

To determine the percentage of the rows transmitted by block fetch, compare the total number of rows sent to the number of rows sent in a block fetch buffer, which is reported as MSG.IN BUFFER (E). The number of rows sent is reported at the server, and the number of rows received is reported at the requester. Block fetch can significantly affect the number of rows sent across the network.

The number of SQL statements bound for remote access is the number of statements dynamically bound at the server for private protocol. This field is maintained at the requester and is reported as STMT BOUND AT SER (F).

Because of the manner in which distributed SQL is processed, the number of rows that are reported might differ slightly from the number of rows that are received. However, a significantly lower number of rows received may indicate that the application did not fetch the entire answer set. This is especially true for access that uses DB2 private protocol.
Duration of an enclave
Using threads in INACTIVE MODE for DRDA-only connections on page 742 describes the difference between threads that are always active and those that can be pooled. If the thread is always active, the duration of the thread is the duration of the enclave. If the thread can be pooled, the following conditions determine the duration of an enclave:
v If the associated package is bound with KEEPDYNAMIC(NO), or there are open held cursors, or there are active declared temporary tables, the duration of the enclave is the period during which the thread is active.
v If the associated package is bound with KEEPDYNAMIC(YES), and no held cursors or active declared temporary tables exist, and only KEEPDYNAMIC(YES) keeps the thread from being pooled, the duration of the enclave is the period from the beginning to the end of the transaction.
While a thread is pooled, such as during think time, it is not using an enclave. Therefore, the SMF 72 record does not report inactive periods.

ACTIVE MODE threads are treated as a single enclave from the time the thread is created until the time it is terminated. This means that the entire life of the database access thread is reported in the SMF 72 record, regardless of whether SQL work is actually being processed. Figure 123 on page 1022 contrasts the two types of threads.
Figure 123. Contrasting ACTIVE MODE threads and POOLED MODE threads. For an INACTIVE MODE thread, each active period between inactive periods (for example, from a SELECT through the following COMMIT) is reported as a separate enclave; an ACTIVE MODE thread is treated as one enclave for its entire life.
Queue time: Note that the information that is reported back to RMF includes queue time. This particular queue time includes waiting for a new or existing thread to become available.
Chapter 37. Monitoring and tuning stored procedures and user-defined functions
Stored procedures that are created in Version 8 of DB2 and all user-defined functions must run in WLM-established address spaces. This chapter primarily presents performance tuning information for user-defined functions and stored procedures in WLM-established address spaces, which is the same for both. This chapter contains these topics:
v Controlling address space storage
v Assigning procedures and functions to WLM application environments on page 1024
v Providing DB2 cost information for accessing user-defined table functions on page 1025
v Monitoring stored procedures with the accounting trace on page 1026
v Accounting for nested activities on page 1028
Stored procedures that were created in a release of DB2 prior to Version 8 can run in a DB2-established stored procedures address space. For information about a DB2-established address space and how it compares to WLM-established address spaces, see Comparing the types of stored procedure address spaces on page 1029.
Each task control block that runs in a WLM-established stored procedures address space uses approximately 200 KB below the 16-MB line. DB2 needs this storage for stored procedures and user-defined functions because you can create both main programs and subprograms, and DB2 must create an environment for each.
A stored procedure can invoke only one utility in one address space at any given time because of the resource requirements of utilities. On the WLM Application-Environment panel, set NUMTCB to 1. See Figure 124 on page 1025. With NUMTCB=1, or NUMTCB being forced to 1, multiple WLM address spaces are created to run each concurrent utility request that comes from a stored procedure call.

Dynamically extending load libraries: Use partitioned data sets extended (PDSEs) for load libraries that contain stored procedures. Using PDSEs can eliminate the need to stop and start the stored procedures address space because of growth of the load libraries. If a load library grows from additions or replacements, the library might have to be extended. If you use PDSEs for the load libraries, the new extent information is dynamically updated and you do not need to stop and start the address space. If PDSs are used, load failures can occur because the new extent information is not available.
Application-Environment  Notes  Options  Help
------------------------------------------------------------------------
                  Create an Application Environment
Command ===> ___________________________________________________________

Application Environment Name . : WLMENV2
Description . . . . . . . . . .  Large Stored Proc Env.
Subsystem Type . . . . . . . . . DB2
Procedure Name . . . . . . . . . DSN1WLM
Start Parameters . . . . . . . . DB2SSN=DB2A,NUMTCB=2,APPLENV=WLMENV2
                                 _______________________________________
                                 ___________________________________

Select one of the following options.
  1   1. Multiple server address spaces are allowed.
      2. Only 1 server address space per MVS system is allowed.

Figure 124. WLM panel to create an application environment. You can also use the variable &IWMSSNM for the DB2SSN parameter (DB2SSN=&IWMSSNM). This variable represents the name of the subsystem for which you are starting this address space. This variable is useful for using the same JCL procedure for multiple DB2 subsystems.
4. Specify the WLM application environment name for the WLM_ENVIRONMENT option on CREATE or ALTER PROCEDURE (or FUNCTION) to associate a stored procedure or user-defined function with an application environment (a sketch follows this list).
5. Using the install utility in the WLM application, install the WLM service definition that contains information about this application environment into the couple data set.
6. Activate a WLM policy from the installed service definition.
7. Begin running stored procedures.
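For example, the CREATE PROCEDURE statement in step 4 might look like the following minimal sketch; the schema, procedure name, language, and external module name are illustrative assumptions, and a real definition would also declare its parameters:

   CREATE PROCEDURE SYSADM.BIGPROC()
     LANGUAGE COBOL
     EXTERNAL NAME BIGPROC
     PARAMETER STYLE GENERAL
     WLM ENVIRONMENT WLMENV2;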
These values, along with the CARDINALITY value of the table being accessed, are used by DB2 to determine the cost. The results of the calculations can influence such things as the join sequence for a multi-table join and the cost estimates generated for and used in predictive governing. Determine values for the four fields by examining the source code for the table function. Estimate the I/Os by examining the code executed during the FIRST call and FINAL call. Look for the code executed during the OPEN, FETCH, and CLOSE calls. The costs for the OPEN and CLOSE calls can be amortized over the expected number of rows returned. Estimate the I/O cost by providing the number of I/Os that will be issued. Include the I/Os for any file access. Calculate the instruction cost by counting the number of high level instructions executed in the user-defined table function and multiplying it by a factor of 20. For assembler programs, the instruction cost is the number of assembler instructions. If SQL statements are issued within the user-defined table function, use DB2 Estimator to determine the number of instructions and I/Os for the statements. Examining the JES job statistics for a batch program doing equivalent functions can also be helpful. For all fields, a precise number of instructions is not required. Because DB2 already accounts for the costs of invoking table functions, these costs should not be included in the estimates. Example: The following statement shows how these fields can be updated. The authority to update is the same authority as that required to update any catalog statistics column.
UPDATE SYSIBM.SYSROUTINES
  SET IOS_PER_INVOC = 0.0,
      INSTS_PER_INVOC = 4.5E3,
      INITIAL_IOS = 2.0,
      INITIAL_INSTS = 1.0E4,
      CARDINALITY = 5E3
  WHERE SCHEMA = 'SYSADM'
    AND SPECIFICNAME = 'FUNCTION1'
    AND ROUTINETYPE = 'F';
AVERAGE          APPL(CL.1)  DB2 (CL.2)  IFI (CL.5)
------------     ----------  ----------  ----------
ELAPSED TIME       0.072929    0.029443     N/P
 NONNESTED         0.072929    0.029443     N/A
 STORED PROC A     0.000000    0.000000     N/A
 UDF               0.000000    0.000000     N/A
 TRIGGER           0.000000    0.000000     N/A

CPU TIME           0.026978    0.018994     N/P
 AGENT             0.026978    0.018994     N/A
  NONNESTED        0.026978    0.018994     N/P
  STORED PRC       0.000000    0.000000     N/A
  UDF              0.000000    0.000000     N/A
  TRIGGER          0.000000    0.000000     N/A
 PAR.TASKS         0.000000    0.000000     N/A

SUSPEND TIME       0.000000    0.010444     N/A
 AGENT                  N/A    0.010444     N/A
 PAR.TASKS              N/A    0.000000     N/A
 STORED PROC B     0.000000         N/A     N/A
 UDF               0.000000         N/A     N/A

NOT ACCOUNT.            N/A    0.000004     N/A
DB2 ENT/EXIT            N/A      182.00     N/A
 EN/EX-STPROC           N/A        0.00     N/A
 EN/EX-UDF              N/A        0.00     N/A
DCAPT.DESCR.            N/A         N/A     N/P
LOG EXTRACT.            N/A         N/A     N/P

CLASS 3 SUSPENSIONS   AVERAGE TIME  AV.EVENT
--------------------  ------------  --------
LOCK/LATCH(DB2+IRLM)      0.000011      0.04
SYNCHRON. I/O             0.010170      9.16
 DATABASE I/O             0.006325      8.16
 LOG WRITE I/O            0.003845      1.00
OTHER READ I/O            0.000000      0.00
OTHER WRTE I/O            0.000148      0.04
SER.TASK SWTCH            0.000115      0.04
 UPDATE COMMIT            0.000000      0.00
 OPEN/CLOSE               0.000000      0.00
 SYSLGRNG REC             0.000115      0.04
 EXT/DEL/DEF              0.000000      0.00
 OTHER SERVICE            0.000000      0.00
ARC.LOG(QUIES)            0.000000      0.00
ARC.LOG READ              0.000000      0.00
DRAIN LOCK                0.000000      0.00
CLAIM RELEASE             0.000000      0.00
PAGE LATCH                0.000000      0.00
NOTIFY MSGS               0.000000      0.00
GLOBAL CONTENTION         0.000000      0.00
COMMIT PH1 WRITE I/O      0.000000      0.00
ASYNCH CF REQUESTS        0.000000      0.00
TOTAL CLASS 3             0.010444      9.28

HIGHLIGHTS
--------------------------
#OCCURRENCES    :      193
#ALLIEDS        :        0
#ALLIEDS DISTRIB:        0
#DBATS          :      193
#DBATS DISTRIB. :        0
#NO PROGRAM DATA:        0
#NORMAL TERMINAT:      193
#ABNORMAL TERMIN:        0
#CP/X PARALLEL. :        0
#IO PARALLELISM :        0
#INCREMENT. BIND:        0
#COMMITS        :      193
#ROLLBACKS      :        0
#SVPT REQUESTS  :        0
#SVPT RELEASE   :        0
#SVPT ROLLBACK  :        0
MAX SQL CASC LVL:        0
UPDATE/COMMIT   :    40.00
SYNCH I/O AVG.  : 0.001110

STORED PROCEDURES  AVERAGE    TOTAL
-----------------  --------  --------
CALL STATEMENTS C      0.00         0
ABENDED                0.00         0
TIMED OUT D            0.00         0
REJECTED               0.00         0
Descriptions of fields:
v The part of the total CPU time that was spent satisfying stored procedure requests is indicated in A.
v The amount of time that was spent waiting for a stored procedure to be scheduled, plus the time that is needed to return control to DB2 after the stored procedure completes, is indicated in B.
v The number of calls to stored procedures is indicated in C.
v The number of times a stored procedure timed out waiting to be scheduled is shown in D.

What to do for excessive timeouts or wait time: If you have excessive wait time (B) or timeouts (D) for user-defined functions or stored procedures, the possible causes include:
v The goal of the service class that is assigned to the WLM stored procedures address space, as it was initially started, is not high enough. The address space uses this goal to honor requests to start processing stored procedures.
v The priority of the service class that is running the stored procedure is not high enough.
v The application environment is quiesced. If the application environment is quiesced, WLM does not start any address spaces for that environment, and CALL statements are queued or rejected. Use the z/OS command DISPLAY WLM,APPLENV=applenv to make sure that the application environment is available (see the example after this list).
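A brief sketch of the check described in the last item, assuming the application environment WLMENV2 from the earlier panel: the first z/OS command displays the state of the environment, and the second resumes it if it is quiesced.

   D WLM,APPLENV=WLMENV2
   V WLM,APPLENV=WLMENV2,RESUME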
Table 202 shows the formulas that are used to determine the times for nested activities.
Table 202. Sample for time used for execution of nested activities
Count for                                          Formula                     Class
Application elapsed                                T21-T0                      1
Application task control block (TU)                T21-T0                      1
Application in DB2 elapsed                         T2-T1 + T4-T3 + T20-T19     2
Application in DB2 task control block (TU)         T2-T1 + T4-T3 + T20-T19     2
Trigger in DB2 elapsed                             T6-T4 + T19-T18             2
Trigger in DB2 task control block (TU)             T6-T4 + T19-T18             2
Wait for STP time                                  T7-T6 + T18-T17             3
Stored procedure elapsed                           T11-T6 + T18-T16            1
Stored procedure task control block (TU)           T11-T6 + T18-T16            1
Stored procedure SQL elapsed                       T9-T8 + T11-T10 + T17-T16   2
Stored procedure SQL task control block (TU)       T9-T8 + T11-T10 + T17-T16   2
Wait for user-defined function time                T12-T11                     3
User-defined function elapsed                      T16-T11                     1
User-defined function task control block (TU)      T16-T11                     1
User-defined function SQL elapsed                  T14-T13                     2
User-defined function SQL task control block (TU)  T14-T13                     2
Note: TU = time used.
The total class 2 time is the total of the in-DB2 times for the application, the trigger, the stored procedure, and the user-defined function. Add the class 1 wait times for the stored procedures and user-defined functions to the total class 3 times.
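As a worked sketch of the first rule, using the class 2 rows from Table 202 (the grouping below is illustrative, not an additional formula from the manual):

   Total class 2 time = (T2-T1 + T4-T3 + T20-T19)          application in DB2
                      + (T6-T4 + T19-T18)                  trigger in DB2
                      + (T9-T8 + T11-T10 + T17-T16)        stored procedure SQL
                      + (T14-T13)                          user-defined function SQL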
Table 203. Comparing WLM-established and DB2-established stored procedures

DB2-established: Can be difficult to support more than 50 stored procedures running at the same time, because of storage that language products need below the 16MB line.
WLM-established: Reduces demand for storage below the 16MB line and thereby removes the limitation on the number of procedures and functions that can run concurrently.

WLM-established: Only one utility can be invoked by a stored procedure in one address space at any given time. The start parameter NUMTCB on the WLM Application-Environment panel has to be set to 1.

DB2-established: Incoming requests for stored procedures are handled in a first-in, first-out order. Stored procedures run at the priority of the stored procedures address space.
WLM-established: Requests are handled in priority order. Stored procedures inherit the z/OS dispatching priority of the DB2 thread that issues the CALL statement. User-defined functions inherit the priority of the DB2 thread that invoked the function.
More information: Using z/OS workload management to set performance objectives on page 745.

Table 203. Comparing WLM-established and DB2-established stored procedures (continued)

DB2-established: No ability to customize the environment.
WLM-established: Each address space is associated with a WLM application environment that you specify. An application environment is an attribute that you associate on the CREATE statement for the function or procedure. The environment determines which JCL procedure is used to run a particular stored procedure.
More information: Assigning procedures and functions to WLM application environments on page 1024.

WLM-established: Can run as a MAIN or SUB program. SUB programs can run significantly faster, but the subprogram must do more initialization and cleanup processing itself rather than relying on Language Environment to handle that.

WLM-established: You can access non-relational data. If the non-relational data is managed by RRS, the updates to that data are part of your SQL unit of work.
More information: DB2 Application Programming and SQL Guide.

WLM-established: Procedures or functions can access protected z/OS resources with one of three authorities, as specified on the SECURITY option of the CREATE FUNCTION or CREATE PROCEDURE statement:
v The authority of the WLM-established address space (SECURITY=DB2)
v The authority of the invoker of the stored procedure or user-defined function (SECURITY=USER)
v The authority of the definer of the stored procedure or user-defined function (SECURITY=DEFINER)
More information: DB2 Administration Guide.
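As a sketch of the SECURITY and WLM ENVIRONMENT options working together, here is a hypothetical external function definition; the schema, function name, load module, and environment name are assumptions, and the clauses are the standard CREATE FUNCTION options named in the table above.

CREATE FUNCTION MYSCHEMA.CENTER_TEMP(DECIMAL(5,2))
       RETURNS DECIMAL(5,2)
       EXTERNAL NAME 'TEMPMOD'
       LANGUAGE C
       PARAMETER STYLE DB2SQL
       WLM ENVIRONMENT WLMENV2
       SECURITY USER;

With SECURITY USER, the routine accesses protected z/OS resources with the authority of the invoker rather than with the authority of the address space.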
The tuning information for stored procedures in a DB2-established address space is very similar to the information for user-defined functions and stored procedures in a WLM-established address space:
v You can control address space storage as described in Controlling address space storage on page 1023. Each task control block that runs in the DB2-established stored procedures address space requires approximately 100 KB below the 16MB line.
v As described in Monitoring stored procedures with the accounting trace on page 1026 and Accounting for nested activities on page 1028, the processing done by a stored procedure running in a DB2-established address space is included in DB2's class 1 and class 2 accounting times.

Possible causes of excessive wait time include:
- Someone issued the DB2 command STOP PROCEDURE ACTION(QUEUE), which caused requests to queue up for a long time and time out.
- The stored procedures are holding the ssnmSPAS task control blocks for too long. In this case, you need to find out why this is happening. If you get many DB2 lock suspensions, you might have too many ssnmSPAS task control blocks, causing them to encounter too many lock conflicts with one another. You might need to make code changes to your application or change your database design to reduce the number of lock suspensions.
- If the stored procedures are starting and stopping quickly, you might not have enough ssnmSPAS task control blocks to handle the workload. In this case, increase the value in the NUMBER OF TCBS field on installation panel DSNTIPX.
Part 6. Appendixes
Relationship to other tables: The activity table is a parent table of the project activity table, through a foreign key on column ACTNO.
Because the table is self-referencing, and also is part of a cycle of dependencies, its foreign keys must be added later with these statements:
ALTER TABLE DSN8810.DEPT
  FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8810.DEPT
  ON DELETE CASCADE;

ALTER TABLE DSN8810.DEPT
  FOREIGN KEY RDE (MGRNO) REFERENCES DSN8810.EMP
  ON DELETE SET NULL;
Content of the department table: Table 206 shows the content of the columns.
Table 206. Columns of the department table
Column  Column Name  Description
1       DEPTNO       Department ID, the primary key
2       DEPTNAME     A name describing the general activities of the department
3       MGRNO        Employee number (EMPNO) of the department manager
4       ADMRDEPT     ID of the department to which this department reports; the department at the highest level reports to itself
5       LOCATION     The remote location name
The LOCATION column contains nulls until sample job DSNTEJ6 updates this column with the location name.

Relationship to other tables: The table is self-referencing: the value of the administering department must be a department ID. The table is a parent table of:
v The employee table, through a foreign key on column WORKDEPT
v The project table, through a foreign key on column DEPTNO.
It is a dependent of the employee table, through its foreign key on column MGRNO.
Content of the employee table: Table 209 shows the content of the columns. The table has a check constraint, NUMBER, which checks that the phone number is in the numeric range 0000 to 9999.
Table 209. Columns of the employee table
Column  Column Name  Description
1       EMPNO        Employee number (the primary key)
2       FIRSTNME     First name of employee
3       MIDINIT      Middle initial of employee
4       LASTNAME     Last name of employee
5       WORKDEPT     ID of department in which the employee works
6       PHONENO      Employee telephone number
7       HIREDATE     Date of hire
8       JOB          Job held by the employee
9       EDLEVEL      Number of years of formal education
10      SEX          Sex of the employee (M or F)
11      BIRTHDATE    Date of birth
12      SALARY       Yearly salary in dollars
13      BONUS        Yearly bonus in dollars
14      COMM         Yearly commission in dollars
Table 211 and Table 212 on page 1040 show the content of the employee table:
Table 211. Left half of DSN8810.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of ' ' rather than null.

EMPNO   FIRSTNME   MIDINIT  LASTNAME   WORKDEPT  PHONENO  HIREDATE
000010  CHRISTINE  I        HAAS       A00       3978     1965-01-01
000020  MICHAEL    L        THOMPSON   B01       3476     1973-10-10
000030  SALLY      A        KWAN       C01       4738     1975-04-05
000050  JOHN       B        GEYER      E01       6789     1949-08-17
000060  IRVING     F        STERN      D11       6423     1973-09-14
000070  EVA        D        PULASKI    D21       7831     1980-09-30
000090  EILEEN     W        HENDERSON  E11       5498     1970-08-15
000100  THEODORE   Q        SPENSER    E21       0972     1980-06-19
000110  VINCENZO   G        LUCCHESSI  A00       3490     1958-05-16
000120  SEAN                OCONNELL   A00       2167     1963-12-05
000130  DOLORES    M        QUINTANA   C01       4578     1971-07-28
000140  HEATHER    A        NICHOLLS   C01       1793     1976-12-15
000150  BRUCE               ADAMSON    D11       4510     1972-02-12
000160  ELIZABETH  R        PIANKA     D11       3782     1977-10-11
000170  MASATOSHI  J        YOSHIMURA  D11       2890     1978-09-15
000180  MARILYN    S        SCOUTTEN   D11       1682     1973-07-07
000190  JAMES      H        WALKER     D11       2986     1974-07-26
000200  DAVID               BROWN      D11       4501     1966-03-03
000210  WILLIAM    T        JONES      D11       0942     1979-04-11
000220  JENNIFER   K        LUTZ       D11       0672     1968-08-29
000230  JAMES      J        JEFFERSON  D21       2094     1966-11-21
000240  SALVATORE  M        MARINO     D21       3780     1979-12-05
000250  DANIEL     S        SMITH      D21       0961     1969-10-30
000260  SYBIL      P        JOHNSON    D21       8953     1975-09-11
000270  MARIA      L        PEREZ      D21       9001     1980-09-30
000280  ETHEL      R        SCHNEIDER  E11       8997     1967-03-24
000290  JOHN       R        PARKER     E11       4502     1980-05-30
000300  PHILIP     X        SMITH      E11       2095     1972-06-19
000310  MAUDE      F        SETRIGHT   E11       3332     1964-09-12
000320  RAMLAL     V        MEHTA      E21       9990     1965-07-07
000330  WING                LEE        E21       2103     1976-02-23
000340  JASON      R        GOUNOT     E21       5698     1947-05-05
Table 211. Left half of DSN8810.EMP: employee table (continued). Note that a blank in the MIDINIT column is an actual value of ' ' rather than null.

EMPNO   FIRSTNME   MIDINIT  LASTNAME    WORKDEPT  PHONENO  HIREDATE
200010  DIAN       J        HEMMINGER   A00       3978     1965-01-01
200120  GREG                ORLANDO     A00       2167     1972-05-05
200140  KIM        N        NATZ        C01       1793     1976-12-15
200170  KIYOSHI             YAMAMOTO    D11       2890     1978-09-15
200220  REBA       K        JOHN        D11       0672     1968-08-29
200240  ROBERT     M        MONTEVERDE  D21       3780     1979-12-05
200280  EILEEN     R        SCHWARTZ    E11       8997     1967-03-24
200310  MICHELLE   F        SPRINGER    E11       3332     1964-09-12
200330  HELENA              WONG        E21       2103     1976-02-23
200340  ROY        R        ALONZO      E21       5698     1947-05-05
Relationship to other tables: The table is a parent table of:
v The department table, through a foreign key on column MGRNO
v The project table, through a foreign key on column RESPEMP.
It is a dependent of the department table, through its foreign key on column WORKDEPT.
DB2 requires an auxiliary table for each LOB column in a table. These statements define the auxiliary tables for the three LOB columns in DSN8810.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8810.AUX_BMP_PHOTO
  IN DSN8D81L.DSN8S81M
  STORES DSN8810.EMP_PHOTO_RESUME
  COLUMN BMP_PHOTO;

CREATE AUX TABLE DSN8810.AUX_PSEG_PHOTO
  IN DSN8D81L.DSN8S81L
  STORES DSN8810.EMP_PHOTO_RESUME
  COLUMN PSEG_PHOTO;

CREATE AUX TABLE DSN8810.AUX_EMP_RESUME
  IN DSN8D81L.DSN8S81N
  STORES DSN8810.EMP_PHOTO_RESUME
  COLUMN RESUME;
Content of the employee photo and resume table: Table 213 shows the content of the columns.
Table 213. Columns of the employee photo and resume table
Column  Column Name  Description
1       EMPNO        Employee ID (the primary key)
2       EMP_ROWID    Row ID to uniquely identify each row of the table. DB2 supplies the values of this column.
3       PSEG_PHOTO   Employee photo, in PSEG format
4       BMP_PHOTO    Employee photo, in BMP format
5       RESUME       Employee resume
Table 214 shows the indexes for the employee photo and resume table:
Table 214. Indexes of the employee photo and resume table
Name                       On Column  Type of Index
DSN8810.XEMP_PHOTO_RESUME  EMPNO      Primary, ascending
Table 215 shows the indexes for the auxiliary tables for the employee photo and resume table:
Table 215. Indexes of the auxiliary tables for the employee photo and resume table
Name                     On Table                 Type of Index
DSN8810.XAUX_BMP_PHOTO   DSN8810.AUX_BMP_PHOTO    Unique
DSN8810.XAUX_PSEG_PHOTO  DSN8810.AUX_PSEG_PHOTO   Unique
DSN8810.XAUX_EMP_RESUME  DSN8810.AUX_EMP_RESUME   Unique
Relationship to other tables: The table is a parent table of the project table, through a foreign key on column RESPEMP.
Because the table is self-referencing, the foreign key for that constraint must be added later with:
ALTER TABLE DSN8810.PROJ
  FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8810.PROJ
  ON DELETE CASCADE;
Content of the project table: Table 216 shows the content of the columns.
Table 216. Columns of the project table
Column  Column Name  Description
1       PROJNO       Project ID (the primary key)
2       PROJNAME     Project name
3       DEPTNO       ID of department responsible for the project
4       RESPEMP      ID of employee responsible for the project
5       PRSTAFF      Estimated mean number of persons needed between PRSTDATE and PRENDATE to achieve the whole project, including any subprojects
6       PRSTDATE     Estimated project start date
7       PRENDATE     Estimated project end date
8       MAJPROJ      ID of any project of which this project is a part
Relationship to other tables: The table is self-referencing: a nonnull value of MAJPROJ must be a project number. The table is a parent table of the project activity table, through a foreign key on column PROJNO. It is a dependent of:
v The department table, through its foreign key on DEPTNO
v The employee table, through its foreign key on RESPEMP.
       ON DELETE RESTRICT,
     FOREIGN KEY RPAA (ACTNO) REFERENCES DSN8810.ACT
       ON DELETE RESTRICT)
  IN DSN8D81A.DSN8S81P
  CCSID EBCDIC;
Content of the project activity table: Table 218 shows the content of the columns.
Table 218. Columns of the project activity table
Column  Column Name  Description
1       PROJNO       Project ID
2       ACTNO        Activity ID
3       ACSTAFF      Estimated mean number of employees needed to staff the activity
4       ACSTDATE     Estimated activity start date
5       ACENDATE     Estimated activity completion date
Relationship to other tables: The table is a parent table of the employee to project activity table, through a foreign key on columns PROJNO, ACTNO, and EMSTDATE. It is a dependent of:
v The activity table, through its foreign key on column ACTNO
v The project table, through its foreign key on column PROJNO
     FOREIGN KEY REPAE (EMPNO) REFERENCES DSN8810.EMP
       ON DELETE RESTRICT)
  IN DSN8D81A.DSN8S81P
  CCSID EBCDIC;
Content of the employee to project activity table: Table 220 shows the content of the columns.
Table 220. Columns of the employee to project activity table
Column  Column Name  Description
1       EMPNO        Employee ID number
2       PROJNO       Project ID of the project
3       ACTNO        ID of the activity within the project
4       EMPTIME      A proportion of the employee's full time (between 0.00 and 1.00) to be spent on the activity
5       EMSTDATE     Date the activity starts
6       EMENDATE     Date the activity ends
Table 221 shows the indexes for the employee to project activity table:
Table 221. Indexes of the employee to project activity table
Name                  On Columns                      Type of Index
DSN8810.XEMPPROJACT1  PROJNO, ACTNO, EMSTDATE, EMPNO  Unique, ascending
DSN8810.XEMPPROJACT2  EMPNO                           Ascending
Relationship to other tables: The table is a dependent of:
v The employee table, through its foreign key on column EMPNO
v The project activity table, through its foreign key on columns PROJNO, ACTNO, and EMSTDATE.
This table has no indexes.

Relationship to other tables: This table has no relationship to other tables.
[Figure: Relationships among the sample tables DEPT, EMP, EMP_PHOTO_RESUME, PROJ, PROJACT, EMPPROJACT, and ACT. Each arrow in the original diagram is labeled with its delete rule: CASCADE, SET NULL, or RESTRICT.]
Table 223. Views on sample tables
View name    On tables or views    Used in application
VDEPT        DEPT                  Organization, Project
VHDEPT       DEPT                  Distributed organization
VEMP         EMP                   Distributed organization, Organization, Project
VPROJ        PROJ                  Project
VACT         ACT                   Project
VPROJACT     PROJACT               Project
VEMPPROJACT  EMPPROJACT            Project
VDEPMG1      DEPT, EMP             Organization
VEMPDPT1     DEPT, EMP             Organization
VASTRDE1     VDEPMG1               Organization
VASTRDE2     VDEPMG1, EMP          Organization
VPROJRE1     PROJ, EMP             Project
VPSTRDE1     VPROJRE1, VPROJRE2    Project
VPSTRDE2     VPROJRE1              Project
VFORPLA      VPROJRE1, EMPPROJACT  Project
VSTAFAC1     PROJACT, ACT          Project
VSTAFAC2     EMPPROJACT, ACT, EMP  Project
VPHONE       EMP, DEPT             Phone
VEMPLP       EMP                   Phone
The following SQL statements are used to create the sample views:
CREATE VIEW DSN8810.VDEPT
  AS SELECT ALL DEPTNO ,
                DEPTNAME,
                MGRNO ,
                ADMRDEPT
     FROM DSN8810.DEPT;

Figure 128. VDEPT
CREATE VIEW DSN8810.VHDEPT
  AS SELECT ALL DEPTNO ,
                DEPTNAME,
                MGRNO ,
                ADMRDEPT,
                LOCATION
     FROM DSN8810.DEPT;

Figure 129. VHDEPT

CREATE VIEW DSN8810.VEMP
  AS SELECT ALL EMPNO ,
                FIRSTNME,
                MIDINIT ,
                LASTNAME,
                WORKDEPT
     FROM DSN8810.EMP;

Figure 130. VEMP

CREATE VIEW DSN8810.VPROJ
  AS SELECT ALL PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTAFF,
                PRSTDATE, PRENDATE, MAJPROJ
     FROM DSN8810.PROJ ;

Figure 131. VPROJ

CREATE VIEW DSN8810.VACT
  AS SELECT ALL ACTNO ,
                ACTKWD ,
                ACTDESC
     FROM DSN8810.ACT ;

Figure 132. VACT

CREATE VIEW DSN8810.VPROJACT
  AS SELECT ALL PROJNO, ACTNO, ACSTAFF, ACSTDATE, ACENDATE
     FROM DSN8810.PROJACT ;

Figure 133. VPROJACT

CREATE VIEW DSN8810.VEMPPROJACT
  AS SELECT ALL EMPNO, PROJNO, ACTNO, EMPTIME, EMSTDATE, EMENDATE
     FROM DSN8810.EMPPROJACT ;

Figure 134. VEMPPROJACT

CREATE VIEW DSN8810.VDEPMG1
  (DEPTNO, DEPTNAME, MGRNO, FIRSTNME, MIDINIT, LASTNAME, ADMRDEPT)
  AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, FIRSTNME, MIDINIT, LASTNAME, ADMRDEPT
     FROM DSN8810.DEPT LEFT OUTER JOIN DSN8810.EMP
       ON MGRNO = EMPNO ;

Figure 135. VDEPMG1
CREATE VIEW DSN8810.VEMPDPT1
  (DEPTNO, DEPTNAME, EMPNO, FRSTINIT, MIDINIT, LASTNAME, WORKDEPT)
  AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, SUBSTR(FIRSTNME, 1, 1), MIDINIT,
                LASTNAME, WORKDEPT
     FROM DSN8810.DEPT RIGHT OUTER JOIN DSN8810.EMP
       ON WORKDEPT = DEPTNO ;

Figure 136. VEMPDPT1

CREATE VIEW DSN8810.VASTRDE1
  (DEPT1NO,DEPT1NAM,EMP1NO,EMP1FN,EMP1MI,EMP1LN,TYPE2,
   DEPT2NO,DEPT2NAM,EMP2NO,EMP2FN,EMP2MI,EMP2LN)
  AS SELECT ALL D1.DEPTNO,D1.DEPTNAME,D1.MGRNO,D1.FIRSTNME,D1.MIDINIT,
                D1.LASTNAME, '1',
                D2.DEPTNO,D2.DEPTNAME,D2.MGRNO,D2.FIRSTNME,D2.MIDINIT,
                D2.LASTNAME
     FROM DSN8810.VDEPMG1 D1, DSN8810.VDEPMG1 D2
     WHERE D1.DEPTNO = D2.ADMRDEPT ;

Figure 137. VASTRDE1

CREATE VIEW DSN8810.VASTRDE2
  (DEPT1NO,DEPT1NAM,EMP1NO,EMP1FN,EMP1MI,EMP1LN,TYPE2,
   DEPT2NO,DEPT2NAM,EMP2NO,EMP2FN,EMP2MI,EMP2LN)
  AS SELECT ALL D1.DEPTNO,D1.DEPTNAME,D1.MGRNO,D1.FIRSTNME,D1.MIDINIT,
                D1.LASTNAME,'2',
                D1.DEPTNO,D1.DEPTNAME,E2.EMPNO,E2.FIRSTNME,E2.MIDINIT,
                E2.LASTNAME
     FROM DSN8810.VDEPMG1 D1, DSN8810.EMP E2
     WHERE D1.DEPTNO = E2.WORKDEPT;

Figure 138. VASTRDE2

CREATE VIEW DSN8810.VPROJRE1
  (PROJNO,PROJNAME,PROJDEP,RESPEMP,FIRSTNME,MIDINIT,LASTNAME,MAJPROJ)
  AS SELECT ALL PROJNO,PROJNAME,DEPTNO,EMPNO,FIRSTNME,MIDINIT,
                LASTNAME,MAJPROJ
     FROM DSN8810.PROJ, DSN8810.EMP
     WHERE RESPEMP = EMPNO ;

Figure 139. VPROJRE1

CREATE VIEW DSN8810.VPSTRDE1
  (PROJ1NO,PROJ1NAME,RESP1NO,RESP1FN,RESP1MI,RESP1LN,
   PROJ2NO,PROJ2NAME,RESP2NO,RESP2FN,RESP2MI,RESP2LN)
  AS SELECT ALL P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT,
                P1.LASTNAME,
                P2.PROJNO,P2.PROJNAME,P2.RESPEMP,P2.FIRSTNME,P2.MIDINIT,
                P2.LASTNAME
     FROM DSN8810.VPROJRE1 P1, DSN8810.VPROJRE1 P2
     WHERE P1.PROJNO = P2.MAJPROJ ;

Figure 140. VPSTRDE1
CREATE VIEW DSN8810.VPSTRDE2
  (PROJ1NO,PROJ1NAME,RESP1NO,RESP1FN,RESP1MI,RESP1LN,
   PROJ2NO,PROJ2NAME,RESP2NO,RESP2FN,RESP2MI,RESP2LN)
  AS SELECT ALL P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT,
                P1.LASTNAME,
                P1.PROJNO,P1.PROJNAME,P1.RESPEMP,P1.FIRSTNME,P1.MIDINIT,
                P1.LASTNAME
     FROM DSN8810.VPROJRE1 P1
     WHERE NOT EXISTS
           (SELECT * FROM DSN8810.VPROJRE1 P2
             WHERE P1.PROJNO = P2.MAJPROJ) ;

Figure 141. VPSTRDE2

CREATE VIEW DSN8810.VFORPLA
  (PROJNO,PROJNAME,RESPEMP,PROJDEP,FRSTINIT,MIDINIT,LASTNAME)
  AS SELECT ALL F1.PROJNO,PROJNAME,RESPEMP,PROJDEP,
                SUBSTR(FIRSTNME, 1, 1), MIDINIT, LASTNAME
     FROM DSN8810.VPROJRE1 F1 LEFT OUTER JOIN DSN8810.EMPPROJACT F2
       ON F1.PROJNO = F2.PROJNO;

Figure 142. VFORPLA

CREATE VIEW DSN8810.VSTAFAC1
  (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME,
   EMPTIME, STDATE, ENDATE, TYPE)
  AS SELECT ALL PA.PROJNO, PA.ACTNO, AC.ACTDESC,' ', ' ', ' ', ' ',
                PA.ACSTAFF, PA.ACSTDATE, PA.ACENDATE,'1'
     FROM DSN8810.PROJACT PA, DSN8810.ACT AC
     WHERE PA.ACTNO = AC.ACTNO ;

Figure 143. VSTAFAC1

CREATE VIEW DSN8810.VSTAFAC2
  (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME,
   EMPTIME, STDATE, ENDATE, TYPE)
  AS SELECT ALL EP.PROJNO, EP.ACTNO, AC.ACTDESC, EP.EMPNO, EM.FIRSTNME,
                EM.MIDINIT, EM.LASTNAME, EP.EMPTIME, EP.EMSTDATE,
                EP.EMENDATE,'2'
     FROM DSN8810.EMPPROJACT EP, DSN8810.ACT AC, DSN8810.EMP EM
     WHERE EP.ACTNO = AC.ACTNO AND EP.EMPNO = EM.EMPNO ;

Figure 144. VSTAFAC2
CREATE VIEW DSN8810.VPHONE
  (LASTNAME, FIRSTNAME, MIDDLEINITIAL, PHONENUMBER,
   EMPLOYEENUMBER, DEPTNUMBER, DEPTNAME)
  AS SELECT ALL LASTNAME,
                FIRSTNME,
                MIDINIT ,
                VALUE(PHONENO,'    '),
                EMPNO,
                DEPTNO,
                DEPTNAME
     FROM DSN8810.EMP, DSN8810.DEPT
     WHERE WORKDEPT = DEPTNO;

Figure 145. VPHONE

CREATE VIEW DSN8810.VEMPLP
  (EMPLOYEENUMBER, PHONENUMBER)
  AS SELECT ALL EMPNO ,
                PHONENO
     FROM DSN8810.EMP ;

Figure 146. VEMPLP
[Figure 147: The objects that the sample application uses: storage group DSN8Gvr0 and its databases.]
In addition to the storage group and databases shown in Figure 147, the storage group DSN8G81U and database DSN8D81U are created when you run DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used to store sample application data. The storage group used to store sample application data is defined by this statement:
CREATE STOGROUP DSN8G810
  VOLUMES (DSNV01)
  VCAT DSNC810;
Databases
The default database, created when DB2 is installed, is not used to store the sample application data. DSN8D81P is the database that is used for tables that are related to programs. The remainder of the databases are used for tables that are related to applications. They are defined by the following statements:
CREATE DATABASE DSN8D81A
  STOGROUP DSN8G810
  BUFFERPOOL BP0
  CCSID EBCDIC;

CREATE DATABASE DSN8D81P
  STOGROUP DSN8G810
  BUFFERPOOL BP0
  CCSID EBCDIC;

CREATE DATABASE DSN8D81L
  STOGROUP DSN8G810
  BUFFERPOOL BP0
  CCSID EBCDIC;
CREATE DATABASE DSN8D81E
  STOGROUP DSN8G810
  BUFFERPOOL BP0
  CCSID UNICODE;

CREATE DATABASE DSN8D81U
  STOGROUP DSN8G81U
  CCSID EBCDIC;
Table spaces
The following statements explicitly define the sample table spaces. Table spaces that are not explicitly defined are created implicitly in the DSN8D81A database, using the default space attributes.
CREATE TABLESPACE DSN8S81D
  IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81E
  IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 20
    SECQTY 20
    ERASE NO
  NUMPARTS 4
  (PART 1 USING STOGROUP DSN8G810
     PRIQTY 12
     SECQTY 12,
   PART 3 USING STOGROUP DSN8G810
     PRIQTY 12
     SECQTY 12)
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  COMPRESS YES
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81B
  IN DSN8D81L
  USING STOGROUP DSN8G810
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE LOB TABLESPACE DSN8S81M
  IN DSN8D81L
  LOG NO;

CREATE LOB TABLESPACE DSN8S81L
  IN DSN8D81L
  LOG NO;

CREATE LOB TABLESPACE DSN8S81N
  IN DSN8D81L
  LOG NO;

CREATE TABLESPACE DSN8S81C
  IN DSN8D81P
  USING STOGROUP DSN8G810
    PRIQTY 160
    SECQTY 80
  SEGSIZE 4
  LOCKSIZE TABLE
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81P
  IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 160
    SECQTY 80
  SEGSIZE 4
  LOCKSIZE ROW
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81R
  IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81S
  IN DSN8D81A
  USING STOGROUP DSN8G810
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;
CREATE TABLESPACE DSN8S81Q
  IN DSN8D81P
  USING STOGROUP DSN8G810
    PRIQTY 160
    SECQTY 80
  SEGSIZE 4
  LOCKSIZE PAGE
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S81U
  IN DSN8D81E
  USING STOGROUP DSN8G810
    PRIQTY 5
    SECQTY 5
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID UNICODE;
2. Locate the following statement, which comes after the reference point:
3. Replace the statement with the following statement:
By changing the statement, you avoid an abend with SQLCODE -922. The routine with the new statement provides no secondary IDs unless you use AUTH=GROUP.
Figure 148. How a connection or sign-on parameter list points to other information
Name      Hex offset  Data type
EXPLSITE  2C
EXPLLUNM  3C          Character, 8 bytes
EXPLNTID  44          Character, 17 bytes
Table 225. Authorization ID list for a connection or sign-on exit routine
Name      Hex offset  Data type              Description
AIDLPRIM  0           Character, 8 bytes     Primary authorization ID for input and output; see descriptions in the text.
AIDLCODE  8           Character, 2 bytes     Control block identifier.
AIDLTLEN  A           Signed 2-byte integer  Total length of control block.
AIDLEYE   C           Character, 4 bytes     Eyecatcher for block, AIDL.
AIDLSQL   10          Character, 8 bytes     On output, the current SQL ID.
AIDLSCNT  18          Signed 4-byte integer  Number of entries allocated to secondary authorization ID list. Always equal to 1012.
AIDLSAPM  1C          Address                For a sign-on routine only, the address of an 8-character additional authorization ID. If RACF is active, the ID is the user ID's connected group name. If the address was not provided, the field contains zero.
AIDLCKEY  20          Character, 1 byte      Storage key of the ID pointed to by AIDLSAPM. To move that ID, use the move with key (MVCK) instruction, specifying this key.
          21          Character, 3 bytes     Reserved
          24          Signed 4-byte integer  Reserved
AIDLACEE  28          Signed 4-byte integer  The address of the ACEE structure, if known; otherwise, zero
          2C          Signed 4-byte integer  Length of data area returned by RACF, plus 4 bytes
          30          26 bytes               Reserved
AIDLSEC   4A          Character, 8 bytes each  List of the secondary authorization IDs, 8 bytes each
If RACF is active, the field used contains a verified RACF user ID; otherwise, it contains blanks.
v The primary ID for a remote request is the ID passed in the conversation attach request header (SNA FMH5) or in the DRDA SECCHK command.
v The SQL ID contains blanks.
v The list of secondary IDs contains blanks.
If the returned value of the SQL ID is blank, DB2 makes it equal to the value of the primary ID. If the list of secondary IDs is blank, it remains blank. No default secondary IDs exist. Your routine must also set a return code in word 5 of the exit parameter list to allow or deny access (field EXPLARC). By those means you can deny the connection altogether. The code must have one of the values that are shown in Table 226.
Table 226. Required return code in EXPLARC
Value  Meaning
0      Access allowed; continue processing.
12     Access denied; terminate.
Section 3
   Section 3 performs the following steps:
   1. The SQL ID is set equal to the primary ID.
   2. If the TSO data set name prefix is a valid primary or secondary ID, the SQL ID is replaced with the TSO data set name prefix. Otherwise, the SQL ID remains set to the primary ID.

In the sample sign-on routine (DSN3SSGN): The three sections of the sample sign-on routine perform the following functions:

Section 1
   Section 1 does not change the primary ID.
Section 2
   Section 2 sets the SQL ID to the value of the primary ID.
Section 3
   Section 3 tests RACF options and makes the following changes in the list of secondary IDs, which is initially blank:
   v If RACF is not active, the list remains blank.
   v If the list of groups option is active, section 3 attempts to find an existing ACEE from which to copy the authorization ID list. If AIDLACEE contains a valid ACEE, it is used. Otherwise, the routine looks for a valid ACEE chained from the TCB or from the ASXB or, if no usable ACEE exists, issues RACROUTE to have RACF build an ACEE structure for the primary ID. The list of group names is copied from the ACEE structure into the secondary authorization list. If the exit issued RACROUTE to build an ACEE, another RACROUTE macro is issued and the structure is deleted.
   v If a list of secondary authorization IDs has not been built, and AIDLSAPM is not zero, the data that is pointed to by AIDLSAPM is copied into AIDLSEC.
Subsystem support sign-on recovery: The sign-on ESTAE recovery routine DSN3SIES generates the VRADATA entries that are shown in Table 228. The last entry, key VRAIMO, is generated only if the abend occurred within the sign-on exit routine.
Table 228. VRADATA entries that are generated by DSN3SIES
VRA keyname  Key hex value  Data length  Content
VRAFPI       22             8            Constant 'SIESTRAK'
VRAFP        23             20           v Primary authorization ID (CCBUSER)
                                         v AGNT block address
                                         v Identify-level CCB block address
                                         v Sign-on-level CCB block address
VRAIMO       7C             10           v Sign-on exit load module load point address
                                         v Sign-on exit entry point address
                                         v Offset of failing address in the PSW from the sign-on exit entry point address
Diagnostics for connection exit routines and sign-on exit routines: The connection (identify) recovery routine and the sign-on recovery routine provide diagnostics for the corresponding exit routines. The diagnostics are produced only when the abend occurs in the exit routine. The following diagnostics are available:

Dump title
   The component failing module name is DSN3@ATH for a connection exit or DSN3@SGN for a sign-on exit.
z/OS and RETAIN symptom data
   SDWA symptom data fields SDWACSCT (CSECT/) and SDWAMODN (MOD/) are set to DSN3@ATH or DSN3@SGN, as appropriate.

Summary dump additions
   The AIDL, if addressable, and the SADL, if present, are included in the summary dump for the failing allied agent. If the failure occurred in connection or sign-on processing, the exit parameter list (EXPL) is also included. If the failure occurred in the system services address space, the entire SADL storage pool is included in the summary dump.
The unqualified names are defined as VARCHAR(128), and the values are defined as VARCHAR(255). The exit routines must provide these values in Unicode CCSID 1208.
The bulk of the work in the routine is for authorization checking. When DB2 must determine the authorization for a privilege, it invokes your routine. The routine determines the authorization for the privilege and then indicates to DB2 whether the privilege is authorized or not authorized, or whether DB2 should do its own authorization check, instead.

When the exit routine is bypassed: In the following situations, the exit routine is not called to check authorization:
v The authorization ID that DB2 uses to determine access has installation SYSADM or installation SYSOPR authority (where installation SYSOPR authority is sufficient to authorize the request). This authorization check is made strictly within DB2. For example, if the execute privilege is being checked on a package, DB2 performs the check on the plan owner that this package is in. If the plan owner has installation SYSADM, the routine is not called.
v DB2 security has been disabled. (You can disable DB2 security by specifying NO on the USE PROTECTION field of installation panel DSNTIPP.)
v Authorization has been cached from a prior check.
v In a prior invocation of the exit routine, the routine indicated that it should not be called again.
v GRANT statements.

The routine executes in the ssnmDBM1 address space of DB2. General considerations for writing exit routines on page 1108 applies to this routine, but with the following exceptions to the description of execution environments:
v The routine executes in non-cross-memory mode during initialization and termination (XAPLFUNC of 1 or 3, described in Table 230 on page 1071).
v During authorization checking, the routine can execute under a TCB or SRB in cross-memory or non-cross-memory mode.
The ACEE passed in XAPLACEE is that of the primary authorization ID, XAPLUPRM.

The implications of the XAPLUPRM and XAPLUCHK relationship need to be clearly understood. XAPLUCHK, the authorization ID that DB2 uses to perform authorization, may be the primary authorization ID (XAPLUPRM), a secondary authorization ID, or another authorization ID such as a package owner. If the RACF access control module is used, the following rules apply:
v RACF uses the ACEE of the primary authorization ID (XAPLUPRM) to perform authorization.
v Secondary authorization IDs are not implemented in RACF. RACF groups should be used instead.

Examples: The following examples show how the rules apply:
v A plan or package may be bound successfully by using the privileges of the binder (XAPLUPRM). Then only the EXECUTE privilege on the plan or package is needed to execute it. If at some point this plan or package is marked invalid (for instance, if a table that it depends on is dropped and re-created), the next execution of it causes an AUTOBIND, which usually fails. It fails because the AUTOBIND checks the privilege on the runner. The plan or package must be rebound by using an authorization ID that has the necessary privileges.
v The OWNER on the BIND command cannot be based on secondary authorization IDs, because they are not supported by RACF. RACF groups should be used instead.
v SET CURRENT SQLID can set a qualifier, but it cannot change authorization.
v DYNAMIC RULES settings have a limited effect on which authorization ID is checked. Only the primary authorization ID and secondary IDs that are valid RACF groups for this user are considered.
v User-defined function and stored procedure authorizations involve several authorization IDs, such as implementer, definer, invoker, and so forth. Only the primary authorization ID and secondary IDs that are RACF groups are considered.
Dropping views
When a privilege that is required to create a view is revoked, the view is dropped. Similar to the revocation of plan privileges, such an event is not communicated to DB2 by the authorization checking routine. If you want DB2 to drop the view when a privilege is revoked, you must use the SQL statements GRANT and REVOKE.
transition table that existed at the time of the original BIND or REBIND of the package or plan for the invoking program.
Figure 149. How an authorization routine's parameter list points to other information
The work area (4096 bytes) is obtained once during the startup of DB2 and is released only when DB2 is shut down. The work area is shared by all invocations of the exit routine. At invocation, registers are set as described in Registers at invocation for exit routines on page 1109, and the authorization checking routine uses the standard exit parameter list (EXPL) described there. Table 230 on page 1071 shows the exit-specific parameter list, described by macro DSNDXAPL.
Table 230. Parameter list for the access control authorization routine. Field names indicated by an asterisk (*) apply to initialization, termination, and authorization checking. Other fields apply to authorization checking only.

Name       Hex offset  Data type               Input or output  Description
XAPLCBID*  0           Character, 2 bytes      Input            Control block identifier; value X'216A'.
XAPLLEN*   2           Signed, 2-byte integer  Input            Length of XAPL; value X'100' (decimal 256).
XAPLEYE*   4           Character, 4 bytes      Input            Control block eye catcher; value XAPL.
XAPLLVL*   8           Character, 8 bytes      Input            DB2 version and level; for example, VxRxMx.
XAPLSTCK*  10          Character, 8 bytes      Input            The store clock value when the exit is invoked. Use this to correlate information to this specific invocation.
XAPLSTKN*  18          Character, 8 bytes      Input            STOKEN of the address space in which XAPLACEE resides. Binary zeroes indicate that XAPLACEE is in the home address space.
XAPLACEE*  20          Address                 Input            ACEE address:
                                                                v Of the DB2 address space (ssnmDBM1) when XAPLFUNC is 1 or 3.
                                                                v Of the primary authorization ID associated with this agent when XAPLFUNC is 2.
                                                                There may be cases where an ACEE address is not available for an agent. In such cases this field contains binary zeroes.
XAPLUPRM*  24          Character, 8 bytes      Input            One of the following IDs:
                                                                v When XAPLFUNC is 1 or 3, it contains the user ID of the DB2 address space (ssnmDBM1)
                                                                v When XAPLFUNC is 2, it contains the primary authorization ID associated with the agent
XAPLFUNC*  2C          Signed, 2-byte integer  Input            Function to be performed by the exit routine: 1 (initialization), 2 (authorization check), or 3 (termination).
XAPLGPAT*  2E          Character, 4 bytes      Input            DB2 group attachment name for data sharing. The DB2 subsystem name if not data sharing.
           32          Character, 6 bytes                       Reserved
XAPLPRIV   38          Signed, 2-byte integer  Input            DB2 privilege being checked. See macro DSNXAPRV for a complete listing of privileges.
Table 230. Parameter list for the access control authorization routine (continued)

Name      Hex offset  Data type         Input or output  Description
XAPLTYPE  3A          Character, 1      Input            DB2 object type:
                                                         D  Database
                                                         R  Table space
                                                         T  Table
                                                         P  Application plan
                                                         K  Package
                                                         S  Storage group
                                                         C  Collection
                                                         B  Buffer pool
                                                         U  System privilege
                                                         E  Distinct type
                                                         F  User-defined function
                                                         M  Schema
                                                         O  Stored procedure
                                                         J  JAR
                                                         V  View
                                                         Q  Sequence
XAPLFLG1  3B          Character, 1      Input            The highest-order bit, bit 8 (XAPLCHKS), is on if the secondary IDs associated with this authorization ID (XAPLUCHK) are included in DB2's authorization check. If it is off, only this authorization ID is checked.
                                                         Bit 7 (XAPLUTB) is on if this is a table or view privilege (SELECT, INSERT, and so on) and if SYSCTRL is not sufficient authority to perform the specified operation on a table or view. SYSCTRL does not have the privilege of accessing user data unless the privilege is specifically granted to it.
                                                         Bit 6 (XAPLAUTO) is on if this is an AUTOBIND. See Access control authorization exit routine on page 1065 for more information on function resolution during an AUTOBIND.
                                                         Bit 5 (XAPLCRVW) is on if the installation parameter DBADM CREATE AUTH is set to YES.
                                                         Bit 4 (XAPLRDWR) is on if the privilege is a write privilege. If the privilege is a read-only privilege, bit 4 is off.
                                                         Bit 3 (XAPLFSUP) is on to suppress error messages from the CRTVUAUTT authorization check during the creation of a materialized query table. These error messages are caused by intermediate checks that do not affect the final result.
                                                         The remaining 2 bits are reserved.
XAPLUCHK  3C          Address, 4 bytes  Input            Address of the authorization ID on which DB2 performs the check. It could be the primary, secondary, or some other ID. This is a VARCHAR(128) field.
Table 230. Parameter list for the access control authorization routine (continued)

Name      Hex offset  Data type            Input or output  Description
XAPLOBJN  40          Address, 4 bytes     Input            Address of the unqualified name of the object with which the privilege is associated. This is a VARCHAR(128) field. It is one of the following names:
                                                            Name                     Length
                                                            Database                 8
                                                            Table space              8
                                                            Table                    VARCHAR(128)
                                                            Application plan         8
                                                            Package                  VARCHAR(128)
                                                            Storage group            VARCHAR(128)
                                                            Collection               VARCHAR(128)
                                                            Buffer pool              8
                                                            Schema                   VARCHAR(128)
                                                            Distinct type            VARCHAR(128)
                                                            User-defined function    VARCHAR(128)
                                                            JAR                      VARCHAR(128)
                                                            View                     VARCHAR(128)
                                                            Sequence                 VARCHAR(128)
                                                            For special system privileges (SYSADM, SYSCTRL, and so on) this field might contain binary zeroes. See macro DSNXAPRV.
XAPLOWNQ  44          Address, 4 bytes     Input            Address of the object owner (creator) or object qualifier. The contents of this parameter depend on either the privilege being checked or the object. See Table 232 on page 1075. This is a VARCHAR(128) field. If this field is not applicable, it contains binary zeros.
XAPLREL1  48          Address, 4 bytes     Input            Address of other related information 1. The contents of this parameter depend on either the privilege being checked or the object. See Table 232 on page 1075. This is a VARCHAR(128) field. If this field is not applicable, it contains binary zeros.
XAPLREL2  4C          Address, 4 bytes     Input            Address of other related information 2. The contents of this parameter depend on the privilege being checked. See Table 232 on page 1075. This is a VARCHAR(128) field. If this field is not applicable, it contains binary zeros.
XAPLDBSP  50          Address, 4 bytes     Input            Address of database information. This information is passed for CREATE VIEW and CREATE ALIAS. It points to database information as described in Table 231 on page 1074. If this field is not applicable, it contains binary zeros.
XAPLRSV2  54          Character, 79 bytes                   Reserved.
Table 230. Parameter list for the access control authorization routine (continued)

Name      Hex offset  Data type            Input or output  Description
XAPLFROM  A3          Character, 1 byte    Input            Source of the request:
                                                            S      Remote request that uses DB2 private protocol.
                                                            blank  Not a remote request that uses DB2 private protocol.
                                                            DB2 authorization restricts remote requests that use DB2 private protocol to the SELECT, UPDATE, INSERT, and DELETE privileges.
XAPLXBTS  A4          Timestamp, 10 bytes  Input            The function resolution timestamp. Authorizations received prior to this timestamp are valid. Applicable to functions and procedures. See DB2 SQL Reference for more information on function resolution.
XAPLONWT  AE          Character, 1 byte    Output           Information required by DB2 from the exit routine for the UPDATE and REFERENCES table privileges:
                                                            Value  Explanation
                                                            blank  Requester has privilege on the entire table
                                                            *      Requester has privilege on just this column
                                                            See macro DSNXAPRV for the definition of these privileges.
XAPLRSV3  AF          Character, 1 byte                     Reserved.
XAPLDIAG  B0          Character, 80 bytes  Output           Information returned by the exit routine to help diagnose problems.
Table 231 has database information for determining authorization for creating a view. The address of this parameter list is in XAPLDBSP. See Table 232 on page 1075 for more information on CREATE VIEW.
Table 231. Parameter list for the access control authorization routine: database information
Name      Hex offset  Data type           Input or output  Description
XAPLDBNP  0           Address             Input            Address of information for the next database. X'00000000' indicates that no next database exists.
XAPLDBNM  4           Character, 8 bytes  Input            Database name.
Table 231. Parameter list for the access control authorization routine: database information (continued)
Name      Hex offset  Data type           Input or output  Description
XAPLDBDA  C           Character, 1 byte   Output           Required by DB2 from the exit routine for CREATE VIEW. A value of Y and EXPLRC1=0 indicate that the user ID in field XAPLUCHK has database administrator authority on the database in field XAPLDBNM.
                                                           When the exit checks whether XAPLUCHK can create a view for another authorization ID, it first checks for SYSADM or SYSCTRL authority. If the check is successful, no more checking is necessary, because SYSCTRL authority (for non-user tables) or SYSADM authority satisfies the requirement that the view owner has the SELECT privilege for all tables and views that the view might be based on. This is indicated by a blank value and EXPLRC1=0.
                                                           If the authorization ID does not have SYSADM or SYSCTRL authority, the exit checks whether the view creator has DBADM on each database of the tables that the view is based on, because the DBADM authority on the database of the base table satisfies the requirement that the view owner has the SELECT privilege for all base tables in that database.
XAPLRSV5  D           Character, 3 bytes  none             Reserved.
XAPLOWNQ, XAPLREL1, and XAPLREL2 might further qualify the object or might provide additional information that can be used in determining authorization for certain privileges. These privileges and the contents of XAPLOWNQ, XAPLREL1, and XAPLREL2 are shown in Table 232.
0064 0265 0266 0267 (EXECUTE) (START) (STOP) (DISPLAY) Object type (XAPLTYPE) E F XAPLOWNQ Address of schema name Address of schema name XAPLREL1 Address of distinct type owner XAPLREL2 Contains binary zeroes
Address of Contains binary user-defined function zeroes owner Address of JAR owner Contains binary zeroes Address of package owner Contains binary zeroes Address of package owner Address of package owner Contains binary zeroes Contains binary zeroes Contains binary zeroes Address of version ID Contains binary zeroes Contains binary zeroes
# 0263 (USAGE) # # 0064 (EXECUTE) # # 0065 (BIND) # # 0073 (DROP) # # | 0097 (COMMENT) | # # 0225 (COPY ON PKG) #
J K K K K K
Address of schema name Address of collection ID Address of collection ID Address of collection ID Address of collection ID Address of collection ID
1075
# Table 232. Related information for certain privileges (continued) # # Privilege # 0228 (ALLPKAUT) # # 0229 (SUBPKAUT) # # 0252 (ALTERIN) # 0097 (COMMENT) # 0252 (DROPIN) # # # # #
0064 0265 0266 0267 (EXECUTE) (START) (STOP) (DISPLAY) Object type (XAPLTYPE) K K M XAPLOWNQ Address of collection ID Address of collection ID Address of schema name XAPLREL1 Contains binary zeroes Contains binary zeroes Address of object owner XAPLREL2 Contains binary zeroes Contains binary zeroes Contains binary zeroes
# 0065 (BIND) # # | 0097 (COMMENT) | # # | 0061 (ALTER) # | 0263 (USAGE) # 0061 (ALTER) # # 0073 (DROP) # # 0087 (USE) # # 0053 (UPDATE) # 0054 (REFERENCES) # # # # # # # # # # # # # # # # | #
0022 (CATMAINT CONVERT) 0050 (SELECT) 0051 (INSERT) 0052 (DELETE) 0055 (TRIGGER) 0056 (CREATE INDEX) 0061 (ALTER) 0073 (DROP) 0075 (LOAD) 0076 (CHANGE NAME QUALIFIER) 0097 (COMMENT) 0098 (LOCK) 0233 (ANY TABLE PRIVILEGE) 0251 (RENAME) 0275 (REFRESH)
P P Q R R R T T
Address of plan owner Address of plan owner Address of schema name Address of database name Address of database name Address of database name Address of table name qualifier Address of table name qualifier
Contains binary zeroes Contains binary zeroes Address of sequence name Contains binary zeroes Contains binary zeroes Contains binary zeroes Address of column name, if applicable Contains binary zeroes
Contains binary zeroes Contains binary zeroes Contains binary zeroes Contains binary zeroes Contains binary zeroes Contains binary zeroes Address of database name Address of database name
# 0020 (DROP ALIAS) # 0104 (DROP SYNONYM) # 0103 (ALTER INDEX) # 0105 (DROP INDEX) # 0274 (COMMENT ON # INDEX)
T T
1076
Administration Guide
# Table 232. Related information for certain privileges (continued) # # Privilege # 0227 (BIND AGENT) # # | 0015 (CREATE ALIAS) | # | | # | 0053 (UPDATE) | # # | # | | | | | |
0050 (SELECT) 0051 (INSERT) 0052 (DELETE) 0073 (DROP) 0097 (COMMENT) 0233 (ANY TABLE PRIVILEGE) Object type (XAPLTYPE) U U XAPLOWNQ XAPLREL1 XAPLREL2 Contains binary zeroes Address of database name, if the alias is on a table Contains binary zeroes Contains binary zeroes
Address of package Contains binary owner zeroes Contains binary zeroes Contains binary zeroes
V V
# | 0061 (ALTER) | # #
The data types and field lengths of the information shown in Table 232 on page 1075 is shown in Table 233. # # # # # # # # # # # # # # # | | | #
Table 233. Data types and field lengths Resource name or other Database name Table name qualifier Object name qualifier Column name Collection ID Plan owner Package owner Package version ID Schema name Distinct type owner JAR owner User-defined function owner Procedure owner View name qualifier Sequence owner Sequence name Type Character Character Character Character Character Character Character Character Character Character Character Character Character Character Character Character Length 8 VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(64) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128) VARCHAR(128)
1077
See Exception processing on page 1079 for an explanation of how the EXPLRC1 value affects DB2 processing. Return codes during termination: DB2 does not check EXPLRC1 on return from the exit routine. Return codes during authorization check: Make sure that EXPLRC1 has one of the values that are shown in Table 235 during the authorization check.
Table 235. Required values in EXPLRC1 during authorization check Value 0 4 8 12 Meaning Access permitted. Unable to determine; perform DB2 authorization checking. Access denied. Unable to service request; dont call exit again.
See Exception processing on page 1079 for an explanation of how the EXPLRC1 value affects DB2 processing. On authorization failures, the return code is included in the IFCID 0140 trace record.
1078
Administration Guide
Table 236. Reason codes during initialization Value -1 16 Meaning Identifies the default exit routine shipped with DB2. If you replace or modify the default exit, you should not use this value. Indicates to DB2 that it should terminate if the exit routine returns EXPLRC1=12, an invalid EXPLRC1 or abnormally terminates during initialization or authorization checking. When the exit routine sets the reason code to 16, DB2 does an immediate shutdown, without waiting for tasks to end. For long-running tasks, an immediate shutdown can mean that recovery times are long. Ignored by DB2.
Other
Reason codes during authorization check: Field EXPLRC2 lets you put in any code that would be of use in determining why the authorization check in the exit routine failed. On authorization failures, the reason code is included in the IFCID 0140 trace record.
Exception processing
During initialization or authorization checking, DB2 issues diagnostic message DSNX210I to the operators console, if one of the following conditions occur: v The authorization exit returns a return code of 12 or an invalid return code. v The authorization exit abnormally terminates. Additional actions that DB2 performs depend on the reason code that the exit returns during initialization. Table 237 on page 1080 summarizes these actions.
1079
Table 237. How an error condition affects DB2 actions during initialization and authorization checking Reason code other than 16 or -1 Reason code of 16 returned by returned by exit routine during exit routine during initialization initialization1 v The task2 abnormally terminates with reason code 00E70015 v DB2 terminates Invalid return code v The task2 abnormally terminates with reason code 00E70015 v DB2 terminates Abnormal termination during initialization DB2 terminates You can use the subsystem parameter AEXITLIM3 to control how DB2 and the exit behave. Example: If you set AEXITLIM to 10, the exit routine continues to run after the first 10 abnormal terminations. On the eleventh abnormal termination, the exit stops and DB2 terminates. v The task2 abnormally terminates with reason code 00E70009 v DB2 switches to DB2 authorization checking v The task2 abnormally terminates with reason code 00E70009 v DB2 switches to DB2 authorization checking DB2 switches to DB2 authorization checking You can use the subsystem parameter AEXITLIM to control how DB2 and the exit behave. Example: If you set AEXITLIM to 10, the exit routine continues to run after the first 10 abnormal terminations. On the eleventh abnormal termination, the exit routine stops and DB2 switches to DB2 authorization checking.
| # | | | | | | | | | | | | | | # #
Notes: 1. During initialization, DB2 sets a value of -1 to identify the default exit. The user exit routine should not set the reason code to -1. 2. During initialization, the task is DB2 startup. During authorization checking, the task is the application. 3. AEXITLI (authorization exit limit) can be updated online. Refer to SET SYSPARM in DB2 Command Reference.
1080
Administration Guide
3. Use the authorization ID to issue a SELECT statement on the table. The SELECT statement should fail. 4. Format the trace data and examine the return code (QW0140RC) in the IFCID 0140 trace record. v QW0140RC = 1 indicates that DB2 performed the authorization check and denied access. v QW0140RC = 8 indicates that the external security system performed the authorization check and denied access.
Edit routines
Edit routines are assigned to a table by the EDITPROC clause of CREATE TABLE. An edit routine receives the entire row of the base table in internal DB2 format; it can transform that row when it is stored by an INSERT or UPDATE SQL statement, or by the LOAD utility. It also receives the transformed row during retrieval operations and must change it back to its original form. Typical uses are to compress the storage representation of rows to save space on DASD and to encrypt the data. You cannot use an edit routine on a table that contains a LOB or a ROWID column. # # You cannot use EDITPROC on a table with a column name that is longer than 18 EBCDIC bytes. If you do, you will receive an error message. The transformation your edit routine performs on a row (possibly encryption or compression) is called edit-encoding. The same routine is used to undo the transformation when rows are retrieved; that operation is called edit-decoding. Important: The edit-decoding function must be the exact inverse of the edit-encoding function. For example, if a routine encodes ALABAMA to 01, it must decode 01 to ALABAMA. A violation of this rule can lead to an abend of the DB2 connecting thread, or other undesirable effects. Your edit routine can encode the entire row of the table, including any index keys. However, index keys are extracted from the row before the encoding is done, therefore, index keys are stored in the index in edit-decoded form. Hence, for a table with an edit routine, index keys in the table are edit-coded; index keys in the index are not edit-coded. The sample application contains a sample edit routine, DSN8EAE1. To print it, use ISPF facilities, IEBPTPCH, or a program of your own. Or, assemble it and use the assembly listing. There is also a sample routine that does Huffman data compression, DSN8HUFF in library prefix.SDSNSAMP. That routine not only exemplifies the use of the exit parameters, it also has potentially some use for data compression. If you intend to use the routine in any production application, please pay particular attention to the warnings and restrictions given as comments in the code. You might prefer to let DB2 compress your data. For instructions, see Compressing your data on page 708. General considerations for writing exit routines on page 1108 applies to edit routines.
Parameter list for edit routines (fragmentary in this copy):

Name      Hex offset  Data type
EDITROW   4           Address
          8           Signed 4-byte integer
          C           Signed 4-byte integer
          10          Address
          14          Signed 4-byte integer
EDITOPTR  18          Address
Figure 150. How the edit exit parameter list points to row information (including the input row and the output row). The address of the nth column description is given by: RFMTAFLD + (n-1) * (FFMTE-FFMT); see Parameter list for row format descriptions on page 1112.
If the function fails, the routine might also leave a reason code in EXPLRC2. DB2 returns SQLCODE -652 (SQLSTATE 23506) to the application program and puts the reason code in field SQLERRD(6) of the SQL communication area (SQLCA).
Validation routines
Validation routines are assigned to a table by the VALIDPROC clause of CREATE TABLE and ALTER TABLE. A validation routine receives an entire row of a base table as input, and can return an indication of whether to allow a following INSERT, UPDATE, or DELETE operation. Typically, a validation routine is used to impose limits on the information that can be entered in a table; for example, allowable salary ranges, perhaps dependent on job category, for the employee sample table.

Although VALIDPROCs can be specified for a table that contains a LOB column, the LOB values are not passed to the validation routine. The indicator column takes the place of the LOB column.

You cannot use VALIDPROC on a table with a column name that is longer than 18 EBCDIC bytes; if you do, you receive an error message.

The return code from a validation routine is checked for a 0 value before any insert, update, or delete is allowed.

General considerations for writing exit routines on page 1108 applies to validation routines.
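A salary-range check of the kind described above can be sketched in a few lines of C. This is illustrative only: the row layout is a hypothetical decoded view, not the internal DB2 row format that a real VALIDPROC receives through its parameter list, and the limits are invented. The only property carried over from the text is the contract that a return code of 0 allows the operation and any other value refuses it.

#include <string.h>

struct emp_row {                 /* hypothetical decoded row layout */
    char   job[8];               /* job category, blank padded      */
    double salary;
};

/* Return 0 to allow the INSERT, UPDATE, or DELETE; nonzero to refuse. */
int validate_row(const struct emp_row *row)
{
    double min = 0.0, max = 200000.0;           /* invented default range   */
    if (memcmp(row->job, "MANAGER ", 8) == 0)   /* job-dependent adjustment */
        min = 50000.0;
    return (row->salary >= min && row->salary <= max) ? 0 : 8;
}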
Parameter list for validation routines (fragmentary in this copy):

Name      Hex offset  Data type
                      Signed 4-byte integer
                      Signed 4-byte integer
                      Address
                      Signed 4-byte integer
                      Signed 4-byte integer
                      Character, 8 bytes
                      Unsigned 1-byte integer
RVALFL1   25          Character, 1 byte
RVALCSTC  26          Character, 2 bytes
If the operation is not allowed, the routine might also leave a reason code in EXPLRC2. DB2 returns SQLCODE -652 (SQLSTATE 23506) to the application program and puts the reason code in field SQLERRD(6) of the SQL communication area (SQLCA). Figure 151 on page 1087 shows how the parameter list points to other information.
Figure 151. How a validation parameter list points to information. Register 1 points to the EXPL (address and length of the 256-byte work area, return code, reason code) and to the validation parameter list, which holds the address of the row description (number of columns in the row, address of the column list, row type), the length and address of the input row to be validated, and the column descriptions (column length, data type, data attribute, column name). The address of the nth column description is given by: RFMTAFLD + (n-1) * (FFMTE-FFMT); see Parameter list for row format descriptions on page 1112.
Example: Suppose that you want to insert and retrieve dates in a format like September 21, 2004. You can use a date routine that transforms the date to a format that is recognized by DB2 on insertion, such as ISO: 2004-09-21. On retrieval, the routine can transform 2004-09-21 to September 21, 2004.

You can have either a date routine, a time routine, or both. These routines do not apply to timestamps. Special rules apply if you execute queries at a remote DBMS, through the distributed data facility. For that case, see DB2 SQL Reference.
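The insertion-side transformation can be sketched as ordinary string handling. The following C helper is hypothetical and sits outside the DB2 exit interface; it only shows the conversion a date routine would perform, and its return codes loosely echo the EXPLRC1 values in Table 244.

#include <stdio.h>
#include <string.h>

static const char *months[12] = {
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December"
};

/* Convert "September 21, 2004" to ISO "2004-09-21".
   Returns 0 on success, 4 for an invalid date value,
   8 if the input is not in the expected local format. */
int local_to_iso(const char *local, char iso[11])
{
    char name[16];
    int  day, year;

    if (sscanf(local, "%15s %d, %d", name, &day, &year) != 3)
        return 8;                      /* not in the local format */
    if (day < 1 || day > 31)
        return 4;                      /* invalid date value      */
    for (int m = 0; m < 12; m++) {
        if (strcmp(name, months[m]) == 0) {
            sprintf(iso, "%04d-%02d-%02d", year, m + 1, day);
            return 0;
        }
    }
    return 8;                          /* month name unrecognized */
}

The retrieval-side routine would perform the inverse formatting, turning 2004-09-21 back into September 21, 2004.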
DB2 checks that the value supplied by the exit routine represents a valid date or time in some recognized format, and then converts it into an internal format for storage or comparison. If the value is entered into a column that is a key column in an index, the index entry is also made in the internal format.

On retrieval: A date or time routine can be invoked to change a value from ISO to the locally-defined format when a date or time value is retrieved by a SELECT or FETCH statement. If LOCAL is the default, the routine is always invoked unless overridden by a precompiler option or by the CHAR function, as by specifying CHAR(HIREDATE, ISO); that specification always retrieves a date in ISO format. If LOCAL is not the default, the routine is invoked only when specifically called for by CHAR, as in CHAR(HIREDATE, LOCAL); that always retrieves a date in the format supplied by your date exit routine.

On retrieval, the exit is invoked after any edit routine or DB2 sort. A date or time routine is not invoked for a DELETE operation without a WHERE clause that deletes an entire table in a segmented table space.
Parameter list for date and time routines (fragmentary in this copy; see Figure 152):

Name     Hex offset  Data type  Description
DTXPLN               Address    Address of the format length
DTXPLOC  8           Address    Address of the LOCAL value
DTXPISO  C           Address    Address of the ISO value
Table 244. Required return code in EXPLRC1
Value  Meaning
0      No errors; conversion was completed.
4      Invalid date or time value.
8      Input value not in valid format; if the function is insertion, and LOCAL is the default, DB2 next tries to interpret the data as a date or time in one of the recognized formats (EUR, ISO, JIS, or USA).
12     Error in exit routine.
Figure 152 shows how the parameter list points to other information.
Figure 152. How a date or time parameter list points to other information. Register 1 points to the EXPL (address and length of the 512-byte work area, return code) and to the parameter list, which holds the addresses of the function code (the function to be performed), the format length (the length of the local format), the LOCAL value, and the ISO value.
Conversion procedures
A conversion procedure is a user-written exit routine that converts characters from one coded character set to another coded character set. (For a general discussion of character sets, and definitions of those terms, see Appendix A of DB2 Installation Guide.) In most cases, any conversion that is needed can be done by routines provided by IBM. The exit for a user-written routine is available to handle exceptions.

General considerations for writing exit routines on page 1108 applies to conversion routines.
TRANSTYPE
  The nature of the conversion. Values can be:
  GG  ASCII GRAPHIC to EBCDIC GRAPHIC
  MM  EBCDIC MIXED to EBCDIC MIXED
  MP  EBCDIC MIXED to ASCII MIXED
  MS  EBCDIC MIXED to EBCDIC SBCS
  PM  ASCII MIXED to EBCDIC MIXED
  PP  ASCII MIXED to ASCII MIXED
  PS  ASCII MIXED to EBCDIC SBCS
  SM  EBCDIC SBCS to EBCDIC MIXED
  SP  SBCS (ASCII or EBCDIC) to ASCII MIXED
  SS  EBCDIC SBCS to EBCDIC SBCS
TRANSPROC
  The name of your conversion procedure.
IBMREQD
  Must be N.

DB2 does not use the following columns, but checks them for the allowable values listed. Values you insert can be used by your routine in any way. If you insert no value in one of these columns, DB2 inserts the default value listed.
ERRORBYTE
  Any character, or null. The default is null.
SUBBYTE
  Any character not equal to the value of ERRORBYTE, or null. The default is null.
TRANSTAB
  Any character string of length 256 or the empty string. The default is an empty string.
Table 245. Format of string value descriptor for a conversion procedure
Name      Hex offset  Data type              Description
FPVDTYPE  0           Signed 2-byte integer  Data type of the value: code 20 means VARCHAR; code 28 means VARGRAPHIC
FPVDVLEN  2           Signed 2-byte integer  The maximum length of the string
FPVDVALE  4                                  The string. The first halfword is the string's actual length in characters. If the string is ASCII MIXED data, it is padded out to the maximum length by undefined bytes.
The row from SYSSTRINGS: The row copied from the catalog table SYSIBM.SYSSTRINGS is in the standard DB2 row format described in Row formats for edit and validation routines on page 1110. The fields ERRORBYTE and SUBBYTE each include a null indicator. The field TRANSTAB is of varying length and begins with a 2-byte length field.
For the remaining codes that are shown in Table 247, DB2 does not use the converted string.
Table 247. Remaining codes for the FPVDVALE
Code  Meaning
8     Length exception
12    Invalid code point
16    Form exception
20    Any other error
24    Invalid CCSID
Exception conditions: Return a length exception (code 8) when the converted string is longer than the maximum length allowed. For an invalid code point (code 12), place the 1- or 2-byte code point in field EXPLRC2 of the exit parameter list. Return a form exception (code 16) for EBCDIC MIXED data when the source string does not conform to the rules for MIXED data. Any other uses of codes 8 and 16, or of EXPLRC2, are optional.

Error conditions: On return, DB2 considers any of the following conditions as a conversion error:
v EXPLRC1 is greater than 16.
v EXPLRC1 is 8, 12, or 16 and the operation that required the conversion is not an assignment of a value to a host variable with an indicator variable.
v FPVDTYPE or FPVDVLEN has been changed.
v The length control field of FPVDVALE is greater than the original value of FPVDVLEN or is negative.

In the case of a conversion error, DB2 sets the SQLERRMC field of the SQLCA to HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2).

Figure 153 shows how the parameter list points to other information.
Figure 153. How a conversion procedure parameter list points to other information. Register 1 points to the EXPL (address and length of the work area, reserved field, return code, invalid code) and to the parameter list, which holds the address of the string value list (the string value descriptor: data type of the string, maximum string length, string length, string value) and the address of the copy of the row from SYSIBM.SYSSTRINGS.
Field procedures
Field procedures are assigned to a table by the FIELDPROC clause of CREATE TABLE and ALTER TABLE. A field procedure is a user-written exit routine to transform values in a single short-string column. When values in the column are changed, or new values inserted, the field procedure is invoked for each value, and can transform that value (encode it) in any way. The encoded value is then stored. When values are retrieved from the column, the field procedure is invoked for each encoded value and must decode it back to the original string value.
Any indexes, including partitioned indexes, defined on a column that uses a field procedure are built with encoded values. For a partitioned index, the encoded value of the limit key is put into the LIMITKEY column of the SYSINDEXPART table. Hence, a field procedure might be used to alter the sorting sequence of values entered in a column. For example, telephone directories sometimes require that names like McCabe and MacCabe appear next to each other, an effect that the standard EBCDIC sorting sequence does not provide. Languages that do not use the Roman alphabet have similar requirements. However, if a column is provided with a suitable field procedure, it can be correctly ordered by ORDER BY.

The transformation your field procedure performs on a value is called field-encoding. The same routine is used to undo the transformation when values are retrieved; that operation is called field-decoding. Values in columns with a field procedure are described to DB2 in two ways:
1. The description of the column as defined in CREATE TABLE or ALTER TABLE appears in the catalog table SYSIBM.SYSCOLUMNS. That is the description of the field-decoded value, and is called the column description.
2. The description of the encoded value, as it is stored in the database, appears in the catalog table SYSIBM.SYSFIELDS. That is the description of the field-encoded value, and is called the field description.

Important: The field-decoding function must be the exact inverse of the field-encoding function. For example, if a routine encodes 'ALABAMA' to '01', it must decode '01' to 'ALABAMA'. A violation of this rule can lead to an abend of the DB2 connecting thread, or other undesirable effects.

General considerations for writing exit routines on page 1108 applies to field procedures.
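A minimal example of a sort-altering yet exactly invertible field-encoding is the byte complement. The C sketch below is hypothetical and not the FIELDPROC interface itself: complementing every byte reverses the collating sequence, so an ascending index on the encoded values orders the original values in descending sequence, and applying the same complement again restores the original value, satisfying the exact-inverse rule above.

#include <stddef.h>

/* Field-encode by complementing each byte; x -> 255 - x reverses the
   collating order of the values. */
void field_encode(unsigned char *val, size_t len)
{
    for (size_t i = 0; i < len; i++)
        val[i] = (unsigned char)(255 - val[i]);
}

/* Field-decoding is the identical operation: 255 - (255 - x) == x. */
void field_decode(unsigned char *val, size_t len)
{
    field_encode(val, len);
}

A production routine, such as one that collates McCabe beside MacCabe, needs a more elaborate mapping, but it is bound by the same constraint: every encoded value must decode to exactly one original value.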
You cannot use a field procedure on a LOB or a ROWID column. Field procedures can, however, be specified for other columns of a table that contains a LOB or ROWID column. You also cannot use a field procedure on a column if the column name is longer than 18 EBCDIC bytes; if you do, you receive an error message.

The optional parameter list that follows the procedure name is a list of constants, enclosed in parentheses, called the literal list. The literal list is converted by DB2 into a data structure called the field procedure parameter value list (FPPVL). That structure is passed to the field procedure during the field-definition operation. At that time, the procedure can modify it or return it unchanged. The output form of the FPPVL is called the modified FPPVL; it is stored in the DB2 catalog as part of the field description. The modified FPPVL is passed again to the field procedure whenever that procedure is invoked for field-encoding or field-decoding.
double-byte characters, as needed) up to the length of the longer string. If the shorter string is the value of a column with a field procedure, the padding is done to the encoded value, but the pad character is not encoded. Therefore, if the procedure changes blanks to some other character, encoded blanks at the end of the longer string are not equal to padded blanks at the end of the shorter string. That situation can lead to errors; for example, some strings that ought to be equal might not be recognized as such. Therefore, encoding blanks in a field procedure is not recommended.
Figure 154. The field procedure parameter list (FPPL). Register 1 points to the FPPL, which holds the addresses of the work area, the FPIB, the CVD, the FVD, and the FPPVL.
Field procedure information block (FPIB) fields (reconstructed from the surviving fragments; the field names follow their usage later in this section):

Name     Hex offset  Data type              Description
FPBWKLN  2           Signed 2-byte integer  Length of work area; the maximum is 32767 bytes
         4           Signed 2-byte integer  Reserved
FPBRTNC  6           Character, 2 bytes     Return code set by field procedure
FPBRSNC  8           Character, 4 bytes     Reason code set by field procedure
FPBTOKP  C           Address                Address of a 40-byte area, within the work area or within the field procedure's static area, containing an error message
The field procedure parameter value list (FPPVL) contains the fields FPPVLEN and FPPVCNT, followed by FPPVVDS, a structure holding the value descriptors; the contents are described in Table 254.
v During field-definition, they describe each constant in the field procedure parameter value list (FPPVL). The set of these value descriptors is part of the FPPVL control block.
v During field-encoding and field-decoding, the decoded (column) value and the encoded (field) value are described by the column value descriptor (CVD) and the field value descriptor (FVD).

The column value descriptor (CVD) contains a description of a column value and, if appropriate, the value itself. During field-encoding, the CVD describes the value to be encoded. During field-decoding, it describes the decoded value to be supplied by the field procedure. During field-definition, it describes the column as defined in the CREATE TABLE or ALTER TABLE statement.

The field value descriptor (FVD) contains a description of a field value and, if appropriate, the value itself. During field-encoding, the FVD describes the encoded value to be supplied by the field procedure. During field-decoding, it describes the value to be decoded. Field-definition must put into the FVD a description of the encoded value.

Value descriptors have the format shown in Table 250.
Table 250. Format of value descriptors
Name      Hex offset  Data type              Description
FPVDTYPE  0           Signed 2-byte integer  Data type of the value: 0 = INTEGER; 4 = SMALLINT; 8 = FLOAT; 12 = DECIMAL; 16 = CHAR; 20 = VARCHAR; 24 = GRAPHIC; 28 = VARGRAPHIC
FPVDVLEN  2           Signed 2-byte integer  For a varying-length string value, its maximum length; for a decimal number value, its precision (byte 1) and scale (byte 2); for any other value, its length
FPVDVALE  4           None                   The value. The value is in external format, not DB2 internal format. If the value is a varying-length string, the first halfword is the value's actual length in bytes. This field is not present in a CVD, or in an FVD used as input to the field-definition operation. An empty varying-length string has a length of zero with no data following.
On entry
The registers have the information that is listed in Table 251:
Table 251. Contents of the registers on entry
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 154 on page 1097 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed in the following tables, are unpredictable. The work area consists of 512 contiguous uninitialized bytes. The FPIB has the information that is listed in Table 252:
Table 252. Contents of the FPIB on entry
Field     Contains
FPBFCODE  8, the function code
FPBWKLN   512, the length of the work area
The FPVDVALE field is omitted. The FVD provided is 4 bytes long. The FPPVL has the information that is listed in Table 254:
Table 254. Contents of the FPPVL on entry
Field    Contains
FPPVLEN  The length, in bytes, of the area containing the parameter value list. The minimum value is 254, even if there are no parameters.
FPPVCNT  The number of value descriptors that follow; zero if there are no parameters.
Table 254. Contents of the FPPVL on entry (continued)
Field    Contains
FPPVVDS  A contiguous set of value descriptors, one for each parameter in the parameter value list, each preceded by a 4-byte length field.
On exit
The registers must have the information that is listed in Table 255:
Table 255. Contents of the registers on exit
Register      Contains
2 through 12  The values that they contained on entry.
15            The integer zero if the column described in the CVD is valid for the field procedure; otherwise the value must not be zero.
The following fields must be set as shown; all other fields must remain as on entry. The FPIB must have the information that is listed in Table 256:
Table 256. Contents of the FPIB on exit
Field    Contains
FPBWKLN  The length, in bytes, of the work area to be provided to the field-encoding and field-decoding operations; 0 if no work area is required.
FPBRTNC  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. The FVD must have the information that is listed in Table 257:
Table 257. Contents of the FVD on exit
Field     Contains
FPVDTYPE  The numeric code for the data type of the field value. Any of the data types listed in Table 250 on page 1099 is valid.
FPVDVLEN  The length of the field value.
Field FPVDVALE must not be set; the length of the FVD is 4 bytes only. The FPPVL can be redefined to suit the field procedure, and returned as the modified FPPVL, subject to the following restrictions: v The field procedure must not increase the length of the FPPVL.
v FPPVLEN must contain the actual length of the modified FPPVL, or 0 if no parameter list is returned. The modified FPPVL is recorded in the catalog table SYSIBM.SYSFIELDS, and is passed again to the field procedure during field-encoding and field-decoding. The modified FPPVL need not have the format of a field procedure parameter list, and it need not describe constants by value descriptors.
On entry
The registers have the information that is listed in Table 258:
Table 258. Contents of the registers on entry
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 154 on page 1097 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed, are unpredictable. The work area is contiguous, uninitialized, and of the length specified by the field procedure during field-definition. The FPIB has the information that is listed in Table 259:
Table 259. Contents of the FPIB on entry
Field     Contains
FPBFCODE  0, the function code
FPBWKLN   The length of the work area
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On exit
The registers have the information that is listed in Table 262:
Table 262. Contents of the registers on exit
Register      Contains
2 through 12  The values that they contained on entry.
15            The integer zero if the column described in the CVD is valid for the field procedure; otherwise the value must not be zero.
The FVD must contain the encoded (field) value in field FPVDVALE. If the value is a varying-length string, the first halfword must contain its length. The FPIB can have the information that is listed in Table 263:
Table 263. Contents of the FPIB on exit
Field    Contains
FPBRTNC  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. All other fields must remain as on entry.
On entry
The registers have the information that is listed in Table 264:
Table 264. Contents of the registers on entry
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 154 on page 1097 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed, are unpredictable. The work area is contiguous, uninitialized, and of the length specified by the field procedure during field-definition. The FPIB has the information that is listed in Table 265:
Table 265. Contents of the FPIB on entry
Field     Contains
FPBFCODE  4, the function code
FPBWKLN   The length of the work area
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On exit
The registers have the information that is listed in Table 268:
Table 268. Contents of the registers on exit
Register      Contains
2 through 12  The values they contained on entry.
15            The integer zero if the column described in the FVD is valid for the field procedure; otherwise the value must not be zero.
The CVD must contain the decoded (column) value in field FPVDVALE. If the value is a varying-length string, the first halfword must contain its length. The FPIB can have the information that is listed in Table 269:
Table 269. Contents of the FPIB on exit
Field    Contains
FPBRTNC  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. All other fields must remain as on entry.
The module is loaded during DB2 initialization and deleted during DB2 termination. You must link the module into either the prefix.SDSNEXIT or the DB2 prefix.SDSNLOAD library. Specify the REPLACE parameter of the link-edit job to replace a module that is part of the standard DB2 library for this release. The module should have attributes AMODE(31) and RMODE(ANY).
Table 270. Log capture routine specific parameter list
Name      Hex offset  Data type              Description
LOGXEYE   00          Character, 4 bytes     Eye catcher: LOGX
LOGXLNG   04          Signed 2-byte integer  Length of parameter list
          06                                 Reserved
          08                                 Reserved
LOGXTYPE  10          Character, 1 byte      Situation identifier: I Initialization, W Write, T Termination, P Partial control interval (CI) call
LOGXFLAG  11          Hex                    Mode identifier: X'00' SRB mode, X'01' TCB mode
LOGXSRBA  12                                 First log RBA, set when DB2 is started. The value remains constant while DB2 is active.
LOGXARBA  18                                 Highest log archive RBA used. The value is updated after completion of each log archive operation.
          1E                                 Reserved
LOGXRBUF  20          Character, 8 bytes     Range of consecutive log buffers: address of first log buffer, address of last log buffer
          28          Signed 4-byte integer  Length of a single log buffer (constant 4096)
          2C          Character, 4 bytes     DB2 subsystem ID, 4 characters left justified
          30          Character, 8 bytes     DB2 subsystem startup time (TIME format with DEC option: 0CYYDDDFHHMMSSTH)
LOGXREL   38          Character, 3 bytes     DB2 subsystem release level
LOGXMAXB  3B          Character, 1 byte      Maximum number of buffers that can be passed on one call. The value remains constant while DB2 is active.
          3C                                 Reserved
LOGXUSR1  44                                 First word of a doubleword work area for the user routine. (The content is not changed by DB2.)
LOGXUSR2  48                                 Second word of the user work area.
You can enable dynamic plan allocation by using one of the following techniques:
v Use DB2 packages and versioning to manage the relationship between CICS transactions and DB2 plans. This technique can help minimize plan outage time, processor time, and catalog contention.
v Use a dynamic plan exit routine to determine the plan to use for each CICS transaction.

Recommendation: Use DB2 packages and versioning, instead of a CICS dynamic plan exit routine, for dynamic plan allocation. For more information on using packages, see CICS Transaction Server for z/OS: DB2 Guide. For guidance on using packages, see DB2 Application Programming and SQL Guide.
v It must be written to be reentrant and must restore registers before return.
v It must be link-edited with the REENTRANT parameter.
v In the MVS/ESA environment, it must be written and link-edited to execute AMODE(31),RMODE(ANY).
v It must not invoke any DB2 services; for example, through SQL statements.
v It must not invoke any SVC services or ESTAE routines.

Even though DB2 has functional recovery routines of its own, you can establish your own functional recovery routine (FRR), specifying MODE=FULLXM and EUT=YES.
Figure 155. Use of register 1 on invoking an exit routine. (Field procedures and translate procedures do not use the standard exit-specific parameter list.)
Table 273 shows the EXPL parameter list. Its description is given by macro DSNDEXPL.
Table 273. Contents of EXPL parameter list
Name     Hex offset  Data type              Description
EXPLWA   0           Address                Address of a work area to be used by the routine
EXPLWL   4           Signed 4-byte integer  Length of the work area. The value is: 2048 for connection routines and sign-on routines; 512 for date and time routines and translate procedures (see Note 1); 256 for edit, validation, and log capture routines
         8           Signed 2-byte integer  Reserved
EXPLRC1  A           Signed 2-byte integer  Return code
EXPLRC2  C           Signed 4-byte integer  Reason code
         10          Signed 4-byte integer  Used only by connection routines and sign-on routines
         14          Character, 8 bytes     Used only by connection routines and sign-on routines
         1C          Character, 8 bytes     Used only by connection routines and sign-on routines
         24          Character, 8 bytes     Used only by connection routines and sign-on routines
Notes: 1. When translating a string of type PC MIXED, a translation procedure has a work area of 256 bytes plus the length attribute of the string.
Null values for edit procedures, field procedures, and validation routines
If null values are allowed for a column, an extra byte is stored before the actual column value. This byte is X'00' if the column value is not null; it is X'FF' if the value is null. The extra byte is included in the column length attribute (parameter FFMTFLEN in Table 281 on page 1113).
There are no gaps after varying-length columns. Hence, columns that appear after varying-length columns are at variable offsets in the row. To get to such a column, you must scan the columns sequentially after the first varying-length column. An empty string has a length of zero with no data following.
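The consequence of that rule can be shown with a short scan routine. The C sketch below is illustrative only and simplifies the real row format: it assumes non-nullable columns, a 2-byte big-endian length field before each varying-length value, and a caller-supplied description of the columns.

#include <stddef.h>
#include <stdint.h>

struct col_desc {
    int varying;     /* nonzero for a varying-length column       */
    int fixed_len;   /* length attribute of a fixed-length column */
};

/* Return the byte offset of column `target` (0-based) within `row`
   by scanning the columns sequentially, as the text requires. */
size_t column_offset(const uint8_t *row,
                     const struct col_desc *cols, int target)
{
    size_t off = 0;
    for (int c = 0; c < target; c++) {
        if (cols[c].varying) {
            uint16_t len = (uint16_t)((row[off] << 8) | row[off + 1]);
            off += 2 + len;              /* 2-byte length, then the data */
        } else {
            off += (size_t)cols[c].fixed_len;
        }
    }
    return off;
}

Up to the first varying-length column, the offsets are fixed; after it, every later offset depends on the data in the row, which is why the scan cannot be skipped.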
ROWID and indicator columns are treated like varying-length columns. Row IDs are VARCHAR(17). An indicator column is VARCHAR(4); it is stored in a base table in place of a LOB column, and indicates whether the LOB value for the column is null or zero length.

An empty string has a length of one, an X'00' null indicator, and no data following.
Table 278 shows the TIME format, which consists of 3 total bytes.
Table 278. TIME format
Hours   Minutes  Seconds
1 byte  1 byte   1 byte
Table 279 shows the TIMESTAMP format, which consists of 10 total bytes.
Table 279. TIMESTAMP format
Year     Month   Day     Hours   Minutes  Seconds  Microseconds
2 bytes  1 byte  1 byte  1 byte  1 byte   1 byte   3 bytes
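For reference, the two layouts can be rendered as C structs. This mirrors only the byte widths in Tables 278 and 279; how the digits are encoded within each byte follows the DB2 internal format and is not modeled here.

#include <stdint.h>

struct db2_time {                /* 3 bytes total (Table 278)  */
    uint8_t hours;
    uint8_t minutes;
    uint8_t seconds;
};

struct db2_timestamp {           /* 10 bytes total (Table 279) */
    uint8_t year[2];
    uint8_t month;
    uint8_t day;
    uint8_t hours;
    uint8_t minutes;
    uint8_t seconds;
    uint8_t microseconds[3];
};

Because every member is a single byte or a byte array, the structs contain no padding and overlay the stored values byte for byte.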
Table 280. Description of a row format
Name      Hex offset  Data type                Description
RFMTNFLD  0           Signed fullword integer  Number of columns in a row
RFMTAFLD  4           Address                  Address of a list of column descriptions. The format of each column is shown in Table 281.
RFMTTYPE  8           Character, 1 byte        Row type: X'00' = row with fixed-length columns; X'04' = row with varying-length columns
          9           Character, 3 bytes       Reserved

Table 281, the format of each column description, survives only in part in this copy; among its fields is FFMTFNAM (Character, 18 bytes), the column name.
Table 282 shows a description of data type codes and length attributes.
Table 282. Description of data type codes and length attributes
Data type                 Code (FFMTFTYP)  Length attribute (FFMTFLEN)
INTEGER                   X'00'            4
SMALLINT                  X'04'            2
FLOAT (single precision)  X'08'            4
FLOAT (double precision)  X'08'            8
DECIMAL                   X'0C'            INTEGER(p/2), where p is the precision
CHAR                      X'10'            The length of the string
VARCHAR                   X'14'            The length of the string
DATE                      X'20'            4
TIME                      X'24'            3
TIMESTAMP                 X'28'            10
ROWID                     X'2C'            17
INDICATOR COLUMN          X'30'            4
INTEGER
  Invert the sign bit (high-order bit).
  Value     Meaning
  800001F2  000001F2 (+498 decimal)
  7FFFFF85  FFFFFF85 (-123 decimal)

FLOAT
  If the sign bit (high-order bit) is 1, invert only that bit. Otherwise, invert all bits.
  Value             Meaning
  C110000000000000  4110000000000000 (+1.0 decimal)
  3EEFFFFFFFFFFFFF  C110000000000000 (-1.0 decimal)

DECIMAL
  Save the high-order hexadecimal digit (sign digit). Shift the number to the left one hexadecimal digit. If the sign digit is X'F', put X'C' in the low-order position. Otherwise, invert all bits in the number and put X'D' in the low-order position.
  Value  Meaning
  F001   001C (+1)
  0FFE   001D (-1)
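The INTEGER and FLOAT rules translate directly into bit operations. The following C fragment is an illustrative sketch in the decoding direction shown by the examples above; it treats the stored forms as 32-bit and 64-bit big-endian images already loaded into native integers.

#include <stdint.h>

#define SIGN64 UINT64_C(0x8000000000000000)

/* 800001F2 -> 000001F2 (+498); 7FFFFF85 -> FFFFFF85 (-123) */
uint32_t sortable_to_int(uint32_t stored)
{
    return stored ^ UINT32_C(0x80000000);   /* invert the sign bit */
}

/* C110000000000000 -> 4110000000000000 (+1.0);
   3EEFFFFFFFFFFFFF -> C110000000000000 (-1.0) */
uint64_t sortable_to_float(uint64_t stored)
{
    if (stored & SIGN64)
        return stored ^ SIGN64;   /* sign bit 1: invert only that bit */
    return ~stored;               /* sign bit 0: invert all bits      */
}

Encoding is the mirror image: set the sign bit of a non-negative value, or invert all bits of a negative one, so that plain unsigned byte comparison yields the correct numeric order.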
A log record is identifiable by the RBA of the first byte of its header; that RBA is called the relative byte address of the record. The record RBA is like a timestamp because it uniquely identifies a record that starts at a particular point in the continuing log.

In the data sharing environment, each member has its own log. The log record sequence number (LRSN) uniquely identifies the log records of a data sharing member. The LRSN is a 6-byte hexadecimal value derived from a store clock timestamp. DB2 uses the LRSN for recovery in the data sharing environment.

Effects of ESA data compression: Log records can contain compressed data if a table contains compressed data. For example, if the data in a DB2 row is compressed, all data logged because of changes to that row (resulting from inserts, updates, and deletes) is compressed. If logged, the record prefix is not compressed, but all of the data in the record is in compressed format. Reading compressed data requires access to the dictionary that was in use when the data was compressed.
Exception states: DBET log records register whether any database, table space, index space, or partition is in an exception state. To list all objects in a database that are in an exception state, use the command DISPLAY DATABASE (database name) RESTRICT. For a further explanation of the list produced and of the exception states, see the description of message DSNT392I in Part 2 of DB2 Messages.

Image copies of special table spaces: Image copies of DSNDB01.SYSUTILX, DSNDB01.DBD01, and DSNDB06.SYSCOPY are registered in the DBET log record rather than in SYSCOPY. During recovery, they are recovered from the log, and then image copies of other table spaces are located from the recovered SYSCOPY.
6. End Phase 2
Table 285 shows the log records for processing and rolling back an insertion.
Table 285. Log records written for rolling back an insertion
Type of record             Information recorded
1. Begin_UR                Beginning of the unit of recovery.
2. Undo/Redo for data      Insertion of data. Includes the database ID (DBID), page set ID, page number, internal record identifier, and the data inserted.
3. Begin_Abort             Beginning of the rollback process.
4. Compensation Redo/Undo  Backing-out of data. Includes the database ID (DBID), page set ID, page number, internal record ID (RID), and data to undo the previous change.
5. End_Abort               End of the unit of recovery, with rollback complete.
Delete data

Note: If an update occurs to a table defined with DATA CAPTURE(CHANGES), the entire before-image and after-image of the data row is logged.

Insert index entry
  The new key value and the data RID.
Delete index entry
  The deleted key value and the data RID.
There are three basic classes of changes to a data page:
v Changes to control information. Those changes include pages that map available space and indicators that show that a page has been modified. The COPY utility uses that information when making incremental image copies.
v Changes to database pointers. Pointers are used in two situations:
  - The DB2 catalog and directory, but not user databases, contain pointers that connect related rows. Insertion or deletion of a row changes pointers in related data rows.
  - When a row in a user database becomes too long to fit in the available space, it is moved to a new page. An address, called an overflow pointer, that points to the new location is left in the original page. With this technique, index entries and other pointers do not have to be changed. Accessing the row in its original position gives a pointer to the new location.
v Changes to data. In DB2, a row is confined to a single page. Each row is uniquely identified by a RID containing:
  - The number of the page.
  - A 1-byte ID that identifies the row within the page. A single page can contain up to 255 rows8; IDs are reused when rows are deleted.
  The log record identifies the RID, the operation (insert, delete, or update), and the data. Depending on the data size and other variables, DB2 can write a single log record with both undo and redo information, or it can write separate log records for undo and redo.
8. A page in a catalog table space that has links can contain up to 127 rows.
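A RID of the shape just described can be taken apart with a few shifts. The C sketch below is purely illustrative: it assumes a 4-byte RID consisting of a 3-byte big-endian page number followed by the 1-byte ID within the page.

#include <stdint.h>

struct rid_parts {
    uint32_t page;   /* number of the page                   */
    uint8_t  id;     /* 1-byte ID of the row within the page */
};

struct rid_parts split_rid(const uint8_t rid[4])
{
    struct rid_parts p;
    p.page = ((uint32_t)rid[0] << 16)
           | ((uint32_t)rid[1] << 8)
           |  (uint32_t)rid[2];       /* 3-byte page number */
    p.id   = rid[3];                  /* row ID within page */
    return p;
}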
Table 287. Contents of checkpoint log records (continued)
Type of log record  Information logged
End_Checkpoint      Marks the end of the summary information about a checkpoint.
Figure: layout of a physical log record (VSAM control interval). The CI holds, in order, the data from the last segment of log record 1, log record 2, log record 3, and the data from the first segment of log record 4, followed by the log control interval definition (LCID) at the point where the VSAM record ends. The LCID carries the log RBA, a timestamp, the offset of the last segment in this CI (the beginning of log record 4), the total length of the spanned record that ends in this CI (log record 1), the total length of the spanned record that begins in this CI (log record 4), and, for data sharing, the LRSN of the last log record in this CI.
The term log record refers to a logical record, unless the term physical log record is used. A part of a logical record that falls within one physical record is called a segment.
Table 288. Contents of the log record header (continued)
Hex offset  Length  Information
02          2       Length of any previous record or segment in this CI; 0 if this is the first entry in the CI. The two high-order bits tell the segment type: B'00' a complete log record; B'01' the first segment; B'11' a middle segment; B'10' the last segment
04          2       Type of log record1
06          2       Subtype of the log record1
08          1       Resource manager ID (RMID) of the DB2 component that created the log record
09          1       Flags
0A          6       Unit of recovery ID, if this record relates to a unit of recovery2; otherwise, 0
10          6       Log RBA of the previous log record, if this record relates to a unit of recovery2; otherwise, 0
16          1       Release identifier
17          1       Length of header
18          6       Undo next LSN
1E          8       LRHTIME

Notes:
1. For record types and subtypes, see Log record type codes on page 1124 and Log record subtype codes on page 1124.
2. For a description of units of recovery, see Unit of recovery log records on page 1116.
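As a reading aid, the header can be sketched as a packed C struct. The field names here are informal stand-ins (the real layout is defined by DB2 mapping macros), the multi-byte integers are big-endian on z/OS, and the 6-byte RBA/LSN fields are kept as byte arrays because C has no matching native type.

#include <stdint.h>

#pragma pack(push, 1)
struct log_record_header {
    uint16_t length;        /* 00: length of this record or segment      */
    uint16_t prev_length;   /* 02: length of previous record or segment; */
                            /*     two high-order bits = segment type    */
    uint16_t type;          /* 04: log record type code                  */
    uint16_t subtype;       /* 06: log record subtype code               */
    uint8_t  rmid;          /* 08: resource manager ID                   */
    uint8_t  flags;         /* 09: flags                                 */
    uint8_t  urid[6];       /* 0A: unit of recovery ID                   */
    uint8_t  link[6];       /* 10: RBA of previous record of this UR     */
    uint8_t  release;       /* 16: release identifier                    */
    uint8_t  hdr_length;    /* 17: length of header                      */
    uint8_t  undo_next[6];  /* 18: undo next LSN                         */
    uint8_t  lrhtime[8];    /* 1E: STCK, or LRSN plus member ID          */
};
#pragma pack(pop)

/* Segment type from the two high-order bits of prev_length:
   B'00' complete record, B'01' first, B'11' middle, B'10' last. */
enum seg_type { SEG_COMPLETE = 0, SEG_FIRST = 1, SEG_LAST = 2, SEG_MIDDLE = 3 };

static inline enum seg_type segment_type(uint16_t prev_length)
{
    return (enum seg_type)(prev_length >> 14);
}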
Each recovery log record consists of two parts: a header, which describes the record, and data. Figure 157 shows the format schematically; the following list describes each field.
Figure 157. The format of a recovery log record. The header fields, from the start of the record: length of this record or segment (2 bytes), length of the previous record or segment (2 bytes), record type (2), record subtype (2), resource manager ID (1), flags (1), unit of recovery ID (6), LINK (6), release identifier (1), length of header (1), undo next LSN (6), and STCK, or LRSN plus member ID (8), followed by the data (maximum 32777).
The fields are:

Length of this record
  The total length of the record in bytes.
Length of previous record
  The total length of the previous record in bytes.
Type
  The code for the type of recovery log record. See Log record type codes on page 1124.
Subtype
  Some types of recovery log records are further divided into subtypes. See Log record subtype codes on page 1124.
Resource manager ID
  Identifier of the resource manager that wrote the record into the log. When the log is read, the record can be given for processing to the resource manager that created it.
Unit of recovery ID
  A unit of recovery to which the record is related. Other log records can be related to the same unit of recovery; all of them must be examined to recover the data. The URID is the RBA (relative byte address) of the Begin-UR log record, and indicates the start of that unit of recovery in the log.
LINK
  Chains all records written using their RBAs. For example, the link in an end checkpoint record links the chains back to the begin checkpoint record.
Release identifier
  Identifies in which release the log was written.
Log record header length
  The total length of the header of the log record.
Undo next LSN
  Identifies the log RBA of the next log record to be undone during backwards (UNDO processing) recovery.
STCK, or LRSN+member ID
  In a non-data-sharing environment, this is a 6-byte store clock value (STCK) reflecting the date and time the record was placed in the output buffer. The last 2 bytes contain zeros. In a data sharing environment, this contains a 6-byte log record sequence number (LRSN) followed by a 2-byte member ID.
Data
  Data associated with the log record. The contents of the data field depend on the type and subtype of the recovery log record.
A single record can contain multiple type codes that are combined. For example, 0600 is a combined UNDO/REDO record; F400 is a combination of four DB2-assigned types plus a REDO.
Page set open
Data set open
Page set close
Data set close
Page set control checkpoint
Page set write
Page set write I/O
Page set reset write
Page set status
Subtypes for type 0010 (system event):
Code  Type of event
0001  Begin checkpoint
0002  End checkpoint
0003  Begin current status rebuild
0004  Begin historic status rebuild
0005  Begin active unit of recovery backout
0006  Pacing record
Subtypes for type 0020 (unit of recovery control):
Code  Type of event
0001  Begin unit of recovery
0002  Begin commit phase 1 (Prepare)
0004  End commit phase 1 (Prepare)
0008  Begin commit phase 2
000C  Commit phase 1 to commit phase 2 transition
0010  End commit phase 2
0020  Begin abort
0040  End abort
0081  End undo
0084  End todo
0088  End redo
Subtypes for type 0100 (checkpoint):
Code  Type of event
0001  Unit of recovery entry
0002  Restart unit of recovery entry
Subtypes for type 2200 (savepoint):
Code  Type of event
0014  Rollback to savepoint
000E  Release to savepoint
log records, and page set control log records that you need to interpret data changes by the UR. DSNDQJ00 also explains the content and usage of the log records.
Use a command like the following:

-START TRACE(P) CLASS(30) IFCID(126) DEST(OPX)

where:
v P signifies to start a DB2 performance trace. Any of the DB2 trace types can be used.
v CLASS(30) is a user-defined trace class (31 and 32 are also user-defined classes).
v IFCID(126) activates DB2 log buffer recording.
v DEST(OPX) starts the trace to the next available DB2 online performance (OP) buffer. The size of this OP buffer can be explicitly controlled by the BUFSIZE keyword of the START TRACE command. Valid sizes range from 256 KB to 16 MB. The number must be evenly divisible by 4.

When the START TRACE command takes effect, from that point forward until DB2 terminates, DB2 begins writing 4-KB log buffer VSAM control intervals (CIs) to the OP buffer as well as to the active log. As part of the IFI COMMAND invocation, the application specifies an ECB to be posted and a threshold to which the OP buffer is filled when the application is posted to obtain the contents of the buffer. The IFI READA request is issued to obtain OP buffer contents.
IFCID 129 must appear in the IFCID area. To retrieve the log control interval, your program must initialize certain fields in the qualification area:
WQALLTYP
  This is a 3-byte field in which you must specify CI (with a trailing blank), which stands for control interval.
WQALLMOD
  In this 1-byte field, you specify whether you want the first log CI of the restarted DB2 subsystem, or whether you want a specific control interval as specified by the value in the RBA field.
  F  The first option is used to retrieve the first log CI of this DB2 instance. This option ignores any value in WQALLRBA and WQALLNUM.
  P  The partial option is used to retrieve partial log CIs for the log capture exit, which is described in Appendix B, Writing exit routines, on page 1055. DB2 places a value in field IFCAHLRS of the IFI communication area, as follows:
     v The RBA of the log CI given to the log capture exit routine, if the last CI written to the log was not full.
     v 0, if the last CI written to the log was full.
     When you specify option P, DB2 ignores values in WQALLRBA and WQALLNUM.
  R  The read option is used to retrieve a set of up to 7 continuous log CIs. If you choose this option, you must also specify the WQALLRBA and WQALLNUM options, which the following text details.
WQALLRBA
  In this 8-byte field, you specify the starting log RBA of the control intervals to be returned. This value must end in X'000' to put the address on a valid boundary. This field is ignored when you use the WQALLMOD=F option. If you specify an RBA that is not in the active log, reason code 00E60854 is returned in the field IFCARC2, and the RBA of the first CI of the active log is returned in field IFCAFCI of the IFCA. These 6 bytes contain the IFCAFCI field.
WQALLNUM
  In this 2-byte field, specify the number of control intervals you want returned. The valid range is from X'0001' through X'0007', which means that you can request and receive up to seven 4-KB log control intervals. This field is ignored when you use the WQALLMOD=F option.

For a complete description of the qualification area, see Table 302 on page 1165. If you specify a range of log CIs, but some of those records have not yet been written to the active log, DB2 returns as many log records as possible. You can find the number of CIs returned in field QWT02R1N of the self-defining section of the record. For information about interpreting trace output, see Appendix D, Interpreting DB2 trace output, on page 1139.
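Setting up these fields can be pictured with a small C fragment. The struct below is a hypothetical subset, not the real DSNDWQAL mapping, and carries no authoritative offsets; it only shows the values the text calls for.

#include <stdint.h>
#include <string.h>

struct wqal_subset {          /* hypothetical subset of the WQAL area     */
    char     wqalltyp[3];     /* "CI " : control interval, trailing blank */
    char     wqallmod;        /* 'F', 'P', or 'R'                         */
    uint8_t  wqallrba[8];     /* starting log RBA, must end in X'000'     */
    uint16_t wqallnum;        /* number of CIs to return, X'0001'-X'0007' */
};

/* Prepare a read ('R') request for `num_cis` control intervals. */
void init_ci_read(struct wqal_subset *q, const uint8_t rba[8], int num_cis)
{
    memcpy(q->wqalltyp, "CI ", 3);
    q->wqallmod = 'R';
    memcpy(q->wqallrba, rba, 8);
    q->wqallnum = (uint16_t)num_cis;   /* up to seven 4-KB CIs per call */
}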
v Complete log records are always returned. To use this IFCID, use the same call as described in Reading specific log records (IFCID 0129) on page 1126. IFCID 0306 must appear in the IFCID area. Because IFCID 0306 returns complete log records, the spanned record indicators in byte 2 of the header have no meaning, if present. Multi-segmented control interval log records are combined for a complete log record.
H  Retrieves the highest LRSN or log RBA in the active log. The value is returned in field IFCAHLRS of the IFI communications area (IFCA). There is no data returned in the return area, and the return code for this call will indicate that no data was returned.
N  Used following mode F or N calls to request any remaining log records that meet the criteria specified in WQALLCRI and any option specified in WQALLOPT. As many log records as fit in the program's return area are returned.
T  Terminates the log position that was held by any previous F or N request. This allows held resources to be released.
Mode R is not used for IFCID 0306. For F and N requests, each log record returned contains a record-level feedback area recorded in QW0306L. The number of log records retrieved is in QW0306CT. The ending log RBA or LRSN of the log records to be returned is in QW0306ES.
WQALLRBA
  In this 8-byte field, specify the starting log RBA or LRSN of the control records to be returned. For IFCID 0306, this is used on the first option (F) request to request log records beyond the LRSN or RBA specified in this field. Determine the RBA or LRSN value from the H request. For RBAs, the value plus one should be used. For IFCID 0306 with the D request of WQALLMOD, the high-order 2 bytes must specify the member ID, and the low-order 6 bytes contain the RBA.
WQALLCRI
  In this 1-byte field, indicate what types of log records you want:
  X'00' Tells DB2 to retrieve only log records for changed data capture and unit of recovery control.
  X'FF' Tells DB2 to retrieve all types of log records. Use of this option can retrieve large data volumes and degrade DB2 performance.
WQALLOPT
  In this 1-byte field, indicate whether you want the returned log records to be decompressed.
  X'01' Tells DB2 to decompress the log records before they are returned.
  X'00' Tells DB2 to leave the log records in the compressed format.

A typical sequence of IFCID 0306 calls is:
WQALLMOD=H
  This is only necessary if you want to find the current position in the log. The LRSN or RBA is returned in IFCAHLRS. The return area is not used.
WQALLMOD=F
  The WQALLRBA, WQALLCRI, and WQALLOPT fields should be set. If 00E60812 is returned, you have all the data for this scope. You should wait a while before issuing another WQALLMOD=F call. In data sharing, log buffers are flushed when the F request is issued.
WQALLMOD=N
  If 00E60812 has not been returned, issue this call until it is. You should wait a while before issuing another WQALLMOD=F call.
WQALLMOD=T
  This should only be used if you do not want to continue with WQALLMOD=N before the end is reached. It has no use if a position is not held in the log.

IFCID 0306 return area mapping: IFCID 0306 has a unique return area format. The first section is mapped by QW0306OF instead of the write header DSNDQWIN. See Appendix E, Programming for the Instrumentation Facility Interface (IFI), on page 1157 for details.
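A driver for this sequence can be sketched in C. Everything here is hypothetical scaffolding: read_0306 stands in for an IFI READS call with WQALLMOD set to the given mode, and RC_END_OF_SCOPE stands in for reason code 00E60812; only the order of the calls reflects the text.

#include <stdio.h>

#define RC_OK           0
#define RC_END_OF_SCOPE 1          /* stand-in for reason code 00E60812 */

/* Hypothetical wrapper for an IFI READS of IFCID 0306; stubbed so the
   control flow is self-contained and compilable. */
static int read_0306(char mode)
{
    printf("READS IFCID(0306) WQALLMOD=%c\n", mode);
    return RC_END_OF_SCOPE;        /* stub: pretend the scope is drained */
}

int main(void)
{
    read_0306('H');                /* optional: find the current log position */
    for (int pass = 0; pass < 3; pass++) {
        if (read_0306('F') == RC_END_OF_SCOPE)
            continue;              /* all data for this scope; wait, retry F */
        while (read_0306('N') != RC_END_OF_SCOPE)
            ;                      /* drain the remaining records in scope */
    }
    read_0306('T');                /* release the held log position early */
    return 0;
}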
ARCHIVE
Table 290. JCL DD statements for DB2 stand-alone log services (continued)
JCL DD statement  Explanation
ACTIVEn           (Where n is a number from 1 to 7.) Specifies an active log data set that is to be read. Should not be present if the BSDS DD statement is present. If only one data set is to be read, use ACTIVE1 as the ddname. If multiple active data sets are to be read, use ddnames ACTIVE1, ACTIVE2, ... ACTIVEn to specify the data sets. Specify the data sets in ascending log RBA order, with ACTIVE1 being the lowest RBA and ACTIVEn being the highest.
DD statements for data sharing users

GROUP
  If you are reading logs from every member of a data sharing group in LRSN sequence, you can use this statement to locate the BSDSs and log data sets needed. You must include the data set name of one BSDS in the statement. DB2 can find the rest of the information from that one BSDS. All members' logs and BSDS data sets must be available. If you use this DD statement, you must also use the LRSN and RANGE parameters on the OPEN request. The GROUP DD statement overrides any MxxBSDS statements that are used. (DB2 searches for the BSDS DD statement first, then the GROUP statement, and then the MxxBSDS statements. If you want to use a particular member's BSDS for your own processing, you must call that DD statement something other than BSDS.)
MxxBSDS
  Names the BSDS data set of a member whose log must participate in the read operation and whose BSDS is to be used to locate its log data sets. Use a separate MxxBSDS DD statement for each DB2 member; xx can be any two valid characters. Use these statements if logs from selected members of the data sharing group are required and the BSDSs of those members are available. These statements are ignored if you use the GROUP DD statement. For one MxxBSDS statement, you can use either RBA or LRSN values to specify a range. If you use more than one MxxBSDS statement, you must use the LRSN to specify the range.
MyyARCHV
  Names the archive log data sets of a member to be used as input; yy can be any two valid characters that do not duplicate any xx used in an MxxBSDS DD statement. Concatenate all required archived log data sets of a given member in time sequence under one DD statement. Use a separate MyyARCHV DD statement for each member. You must use this statement if the BSDS data set is unavailable or if you want only some of the log data sets from selected members of the group. If you name the BSDS of a member by an MxxBSDS DD statement, do not name the log of the same member by an MyyARCHV statement. If both MyyARCHV and MxxBSDS identify the same log data sets, the service request fails. MyyARCHV statements are ignored if you use the GROUP DD statement.
Table 290. JCL DD statements for DB2 stand-alone log services (continued)
JCL DD statement  Explanation
MyyACTn           Names the active log data set of a member to be used as input; yy can be any two valid characters that do not duplicate any xx used in an MxxBSDS DD statement. Use the same characters that identify the MyyARCHV statement for the same member; do not use characters that identify the MyyARCHV statement for any other member. n is a number from 1 to 16. Assign values of n in the same way as for ACTIVEn DD statements. You can use this statement if the BSDS data sets are unavailable or if you want only some of the log data sets from selected members of the group. If you name the BSDS of a member by an MxxBSDS DD statement, do not name the log of the same member by an MyyACTn statement. MyyACTn statements are ignored if you use the GROUP DD statement.
The DD statements must specify the log data sets in ascending order of log RBA (or LRSN) range. If both ARCHIVE and ACTIVEn DD statements are included, the first archive data set must contain the lowest log RBA or LRSN value. If the JCL specifies the data sets in a different order, the job terminates with an error return code with a GET request that tries to access the first record breaking the sequence. If the log ranges of the two data sets overlap, this is not considered an error; instead, the GET function skips over the duplicate data in the second data set and returns the next record. The distinction between out-of-order and overlap is as follows (the sketch after this list makes the comparison concrete):
v Out-of-order condition occurs when the log RBA or LRSN of the first record in a data set is greater than that of the first record in the following data set.
v Overlap condition occurs when the out-of-order condition is not met but the log RBA or LRSN of the last record in a data set is greater than that of the first record in the following data set.

Gaps within the log range are permitted. A gap is created when one or more log data sets containing part of the range to be processed are not available. This can happen if the data set was not specified in the JCL or is not reflected in the BSDS. When the gap is encountered, an exception return code value is set, and the next complete record after the gap is returned.

Normally, the BSDS DD name is supplied in the JCL, rather than a series of ACTIVE DD names or a concatenated set of data sets for the ARCHIVE ddname. This is commonly referred to as running in BSDS mode.
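The two conditions compare the same boundary values differently. The following C sketch is illustrative only; it compares RBA or LRSN values as unsigned 64-bit integers, with dataset_range as a hypothetical summary of one log data set.

#include <stdint.h>

struct dataset_range {
    uint64_t first;   /* RBA or LRSN of the first record in the data set */
    uint64_t last;    /* RBA or LRSN of the last record in the data set  */
};

/* Out-of-order: the next data set starts before this one starts.
   The job terminates with an error on the GET that breaks the sequence. */
int is_out_of_order(const struct dataset_range *a, const struct dataset_range *b)
{
    return a->first > b->first;
}

/* Overlap: not out-of-order, but this data set ends after the next one
   starts. Not an error; GET skips the duplicate data and continues. */
int is_overlap(const struct dataset_range *a, const struct dataset_range *b)
{
    return a->first <= b->first && a->last > b->first;
}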
S4 has two active log data sets. S5 needs only its archive log. S6 needs only one of its active logs. Then you need the following DD statements to specify the required log data sets:
MS1BSDS
MS2BSDS
MS3ARCHV
MS3ACT1
MS4ARCHV
MS4ACT1
MS4ACT2
MS5ARCHV
MS6ACT1
The stand-alone log services invoke executable macros that can execute only in 24-bit addressing mode and reference data below the 16-MB line. User-written applications should be link-edited as AMODE(24), RMODE(24).
Keyword    Explanation
FUNC=OPEN  Requests the stand-alone log OPEN function.
LRSN       Tells DB2 how to interpret the log range. NO: the log range is specified as RBA values; this is the default. YES: the log range is specified as LRSN values.
DDNAME     Specifies the address of an 8-byte area which contains the ddname to be used as an alternate to a ddname of the BSDS when the BSDS is opened, or a register that contains that address.
RANGE
  Specifies the address of a 12-byte area containing the log range to be processed by subsequent GET requests against the request block generated by this request, or a register that contains that address.
  If LRSN=NO, the range is specified as RBA values. If LRSN=YES, the range is specified as LRSN values.
  The first 6 bytes contain the low RBA or LRSN value. The first complete log record with an RBA or LRSN value equal to or greater than this value is the record accessed by the first log GET request against the request block. The last 6 bytes contain the end of the range, or high RBA or LRSN value. An end-of-data condition is returned when a GET request tries to access a record with a starting RBA or LRSN value greater than this value. A value of 6 bytes of X'FF' indicates that the log is to be read until either the end of the log (as specified by the BSDS) or the end of the data in the last JCL-specified log data set is encountered.
  If BSDS, GROUP, or MxxBSDS DD statements are used for locating the log data sets to be read, the RANGE parameter is required. If the JCL determines the log data sets to be read, the RANGE parameter is optional.
PMO
  Specifies the processing mode. You can use OPEN to retrieve either log records or control intervals in the same manner: specify PMO=CI or PMO=RECORD, then use GET to return the data you have selected. The default is RECORD.
  The rules remain the same regarding control intervals and the range specified for the OPEN function: control intervals must fall within the range specified on the RANGE parameter.
Output  Explanation
GPR 1   General-purpose register 1 contains the address of a request block on return from this request. This address must be used for subsequent stand-alone log requests. When no more log GET operations are required by the program, this request block should be used by a FUNC=CLOSE request.
GPR 15  General-purpose register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. The return codes are listed and explained in Table 291 on page 1133.
GPR 0   General-purpose register 0 contains a reason code associated with a nonzero return code in register 15.
See Part 3 of DB2 Codes for reason codes that are issued with the return codes.

Log control interval retrieval: You can use the PMO option to retrieve log control intervals from archive log data sets. DSNJSLR also retrieves log control intervals from the active log if the DB2 system is not active. During OPEN, if DSNJSLR detects that the control interval range is not within the archive log range available (for example, the range purged from BSDS), an error condition is returned.

Specify CI and use GET to retrieve the control interval you have chosen. The rules remain the same regarding control intervals and the range specified for the OPEN function: control intervals must fall within the range specified on the RANGE parameter.
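The OPEN, GET, CLOSE sequence can be pictured as a C control flow. The three functions below are hypothetical stand-ins for the FUNC=OPEN, FUNC=GET, and FUNC=CLOSE macro invocations (which are actually issued from assembler); they are stubbed so the sketch is self-contained.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RC_OK          0
#define RC_END_OF_DATA 4       /* stand-in for the end-of-range condition */

typedef struct {               /* stand-in for the request block (GPR 1)  */
    uint8_t low[6], high[6];   /* RANGE: low and high RBA or LRSN values  */
    int     done;
} request_block;

static request_block *log_open(const uint8_t range[12])   /* FUNC=OPEN  */
{
    request_block *rb = calloc(1, sizeof *rb);
    for (int i = 0; i < 6; i++) {
        rb->low[i]  = range[i];
        rb->high[i] = range[6 + i];
    }
    return rb;
}

static int log_get(request_block *rb)                      /* FUNC=GET   */
{
    if (rb->done)
        return RC_END_OF_DATA;
    rb->done = 1;              /* stub: deliver one record, then stop */
    return RC_OK;
}

static void log_close(request_block *rb) { free(rb); }     /* FUNC=CLOSE */

int main(void)
{
    uint8_t range[12] = {0};   /* low value, then high value */
    request_block *rb = log_open(range);
    while (log_get(rb) == RC_OK)
        puts("process one logical log record");
    log_close(rb);
    return 0;
}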
Log control interval format: A field in the last 7 bytes of the control interval, offset 4090, contains a 7-byte timestamp. This field reflects the time at which the control interval was written to the active log data set. The timestamp is in store clock (STCK) format and is the high-order 7 bytes of the 8-byte store clock value.
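Example: the following C fragment is an illustration only, not part of DB2; the helper name is ours, the 4096-byte CI size is assumed, and the conversion relies on the standard z/Architecture TOD-clock format, in which bit 51 increments every microsecond.

#include <stdint.h>
#include <string.h>

/* Extract the 7-byte STCK timestamp stored at offset 4090 of a log
 * control interval. The 7 bytes are the high-order 7 bytes of an
 * 8-byte store-clock value, so the missing low-order byte is padded
 * with zero. Assumes a 4096-byte CI.
 */
uint64_t ci_write_time_micros(const unsigned char ci[4096])
{
    unsigned char stck[8] = {0};
    memcpy(stck, ci + 4090, 7);        /* high-order 7 of 8 STCK bytes */

    uint64_t tod = 0;
    for (int i = 0; i < 8; i++)        /* assemble the big-endian value */
        tod = (tod << 8) | stck[i];

    /* Bit 51 of the TOD clock increments every microsecond, so shifting
     * right 12 bits yields microseconds since the TOD epoch (1900-01-01).
     */
    return tod >> 12;
}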
Keyword    Explanation

FUNC=GET
   Requests the stand-alone log GET function.

RBR
   Specifies a register that contains the address of the request block this request is to use. Although you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the generation of an unnecessary load register and is therefore more efficient. The pointer to the request block (that is passed in register n of the RBR=(n) keyword) must be used by subsequent GET and CLOSE function requests.
Output     Explanation

GPR 15
   General-purpose register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. Return codes are listed and explained in Table 291 on page 1133.

GPR 0
   General-purpose register 0 contains a reason code associated with a nonzero return code in register 15.

See Part 3 of DB2 Codes for reason codes that are issued with the return codes.
Reason codes 00D10261 - 00D10268 reflect a damaged log. In each case, the RBA of the record or segment in error is returned in the stand-alone feedback block field (SLRFRBA). A damaged log can impair DB2 restart; special recovery procedures are required for these circumstances. For recovery from these errors, refer to Chapter 22, Recovery scenarios, on page 521. Information about the GET request and its results is returned in the request feedback area, starting at offset X'00'. If there is an error in the length of some record, the control interval length is returned at offset X'0C' and the address of the beginning of the control interval is returned at offset X'08'. On return from this request, the first part of the request block contains the feedback information that this function returns. Mapping macro DSNDSLRF defines the feedback fields, which are shown in Table 292. The information returned is status information, a pointer to the log record, the length of the log record, and the 6-byte log RBA value of the record.
Table 292. Stand-alone log get feedback area contents

Field name  Hex offset  Length (bytes)  Field contents
SLRFRC      00          2    Log request return code
SLRFINFO    02          2    Information code returned by dynamic allocation. Refer to the z/OS SPF job management publication for information code descriptions.
SLRFERCD    04          2    VSAM or dynamic allocation error code, if register 15 contains a nonzero value
SLRFRG15    06          2    VSAM register 15 return code value
SLRFAREA    08          4    Address of area containing the log record or CI
SLRFLEN     0C          2    Length of the log record or CI
SLRFRBA     0E          6    Log RBA of the log record
SLRFDDNM    14          8    ddname of data set on which activity occurred
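For readers who process the feedback area from C rather than assembler, the following is a minimal sketch of the Table 292 layout. The struct name and the packing pragma are ours; verify the field names and widths against the DSNDSLRF macro itself.

#include <stdint.h>

/* Stand-alone log GET feedback area, per Table 292 (DSNDSLRF).
 * One-byte packing assumed so member offsets match the hex offsets.
 */
#pragma pack(1)
typedef struct {
    uint16_t slrfrc;           /* 00: log request return code               */
    uint16_t slrfinfo;         /* 02: dynamic allocation information code   */
    uint16_t slrfercd;         /* 04: VSAM or dynamic allocation error code */
    uint16_t slrfrg15;         /* 06: VSAM register 15 return code value    */
    uint32_t slrfarea;         /* 08: address of the log record or CI       */
    uint16_t slrflen;          /* 0C: length of the log record or CI        */
    unsigned char slrfrba[6];  /* 0E: 6-byte log RBA of the log record      */
    char     slrfddnm[8];      /* 14: ddname of the data set                */
} SLRF;
#pragma pack()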
Keyword    Explanation

FUNC=CLOSE
   Requests the stand-alone log CLOSE function.

RBR
   Specifies a register that contains the address of the request block that this function uses. Although you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the generation of an unnecessary load register and is therefore more efficient.

Output     Explanation

GPR 15
   Register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. The return codes are listed and explained in Table 291 on page 1133.

GPR 0
   Register 0 contains a reason code that is associated with a nonzero return code that is contained in register 15. The only reason code used by the CLOSE function is 00D10030.
Figure 158. Excerpts from a sample program using stand-alone log services
Figure 159. General format of a DB2 trace record, showing the writer header section and self-defining section followed by data sections #1 through #n and by the product section
The writer header section begins at the first byte of the record and continues for a fixed length. (The GTF writer header is longer than the SMF writer header.) The self-defining section follows the writer header section (both GTF and SMF) and is further described in Self-defining section on page 1147. The first self-defining section always points to a special data section called the product section. Among other things, the product section contains an instrumentation facility component identifier (IFCID). Descriptions of the records differ for each IFCID. For a list of records, by IFCID, for each class of a trace, see the description of the START TRACE command in DB2 Command Reference. The product section also contains field QWHSNSDA, which indicates how many self-defining data sections the record contains. You can use this field to keep from trying to access data sections that do not exist. In trying to interpret the trace records, remember that the various keywords you specified when you started the trace determine whether any data is collected. If no data has been collected, field QWHSNSDA shows a data length of zero.
Figure 160 is a sample of the first record of the DB2 performance trace output sent to SMF.
000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000120 A 01240000 I J 00980001 C1E3405D C1E4E3C8 B C 0E660030 K 0000002C C3D3C1E2 C9C4404D D 9EEC0093 L M 005D0001 E2404D5C 5C405DC9 E 018FF3F0 N 00550053 405DD9D4 C6C3C9C4 O 01000000 004C0110 00000021 D3E4D5C4 C7C3E2C3 00000000 00000001 F0404040 D5F6F0F2 00000000 F F9F0E2E2 D6D70000 4DE2E3C1 C9C4404D 404D5C40 P Q R 000402xx S E2C1D5E3 A6E9BACB E2E2D6D7 00000000 D9E340E3 5C405DD7 5DC2E4C6 00B3AB78 C16DE3C5 F4570001 40404040 00000000 G H 00000000 0000008C D9C1C3C5 404DE2E3 D3C1D540 4D5C405D E2C9E9C5 404D5C40 E2E2D6D7 A6E9BACB D9C5E2C1 004C0200 40404040 00000000 6DD3C1C2 E2E8E2D6 40404040 00000000
5D000000 01000101 F6485E02 00000003 C4C2F2D5 C5E34040 D7D94040 F0F2F34B E2E8E2D6 D7D94040 00000000T
Figure 160. DB2 trace output sent to SMF (printed with DFSERA10 print program of IMS)

Key to Figure 160:
A 0124  Record length (field SM102LEN); beginning of SMF writer header section
B 66  Record type (field SM102RTY)
C 0030 9EEC  Time (field SM102TME)
D 0093 018F  Date (field SM102DTE)
E F3F0 F9F0  System ID (field SM102SID)
F E2E2 D6D7  Subsystem ID (field SM102SSI)
G  End of SMF writer header section
H 0000008C  Offset to product section; beginning of self-defining section
I 0098  Length of product section
J 0001  Number of times the product section is repeated
K 0000002C  Offset to first (in this case, only) data section
L 005D  Length of data section
M 0001  Number of times the data section is repeated
N 00550053  Beginning of data section
O  Beginning of product section
P 0004  IFCID (field QWHSIID)
Q 02  Number of self-defining sections in the record (field QWHSNSDA)
R xx  Release indicator number (field QWHSRN); this varies according to the actual level of DB2 you are using
S E2C1D5E3...  Local location name (16 bytes)
T  End of first record
Table 294. Contents of GTF writer header section

Offset  Macro DSNDQWGT field  Description
0   QWGTLEN   Length of record
2             Reserved
4   QWGTAID   Application identifier
5   QWGTFID   Format ID
6   QWGTTIME  Timestamp; you must specify TIME=YES when you start GTF
14  QWGTEID   Event ID: X'EFB9'
16  QWGTASCB  ASCB address
20  QWGTJOBN  Job name
28  QWGTHDRE  Extension to header
28  QWGTDLEN  Length of data section
30  QWGTDSCC  Segment control code (0=Complete, 1=First, 2=Last, 3=Middle)
31  QWGTDZZ2  Reserved
32  QWGTSSID  Subsystem ID
36  QWGTWSEQ  Sequence number
40  QWGTEND   End of GTF header
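The following C rendering of Table 294 may help programs that post-process GTF data. It is a sketch only: the offsets are the decimal values from the table, the name for the reserved halfword is ours, the QWGTHDRE grouping label is omitted because it only names the header extension that starts at offset 28, and the fields hold big-endian z/OS data.

#include <stdint.h>

/* GTF writer header (DSNDQWGT) per Table 294; decimal offsets.
 * One-byte packing assumed so member offsets line up.
 */
#pragma pack(1)
typedef struct {
    uint16_t qwgtlen;            /*  0: length of record                  */
    uint16_t qwgtzz1;            /*  2: reserved                          */
    uint8_t  qwgtaid;            /*  4: application identifier            */
    uint8_t  qwgtfid;            /*  5: format ID                         */
    unsigned char qwgttime[8];   /*  6: timestamp (requires GTF TIME=YES) */
    uint16_t qwgteid;            /* 14: event ID, X'EFB9'                 */
    uint32_t qwgtascb;           /* 16: ASCB address                      */
    char     qwgtjobn[8];        /* 20: job name                          */
    uint16_t qwgtdlen;           /* 28: length of data section            */
    uint8_t  qwgtdscc;           /* 30: segment code: 0=complete, 1=first,
                                        2=last, 3=middle                  */
    uint8_t  qwgtdzz2;           /* 31: reserved                          */
    char     qwgtssid[4];        /* 32: subsystem ID                      */
    uint32_t qwgtwseq;           /* 36: sequence number                   */
} QWGT;                          /* 40: QWGTEND, end of GTF header        */
#pragma pack()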
1142
Administration Guide
DFSERA10 - PRINT PROGRAM 000000 000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000000 000020 000040 000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000000 000020 000040 . . . 000000 000020 000040 000060 00780000 FF00A6E9 E2E2D6D7 00000002 29469A03 0000000E 40404040 40404040 C33E2957 D103EFB9 AA 00000000 004C011A 00000002 00000001 40404040 40404040 00F91400 E2E2D6D7 00010D31 02523038 E2C1D5E3 C16DE3C5 A6E9B6B4 9A2B0001 Z D4E2E3D9 005C0200 E2E2D6D7 A6E9C33E D9C5E2C1 6DD3C1C2 001A0000 A 011C0000 G E2E2D6D7 D9E340E3 5C405DC4 405DC9C6 P 004C0110 0001FFFF B FF00A6E9 H 00000001 D9C1C3C5 C5E2E340 C3C9C440 Q R S 000402xx T 00000001 E2C1D5E3 F0404040 A6E9C33E D5F6F0F2 E2E2D6D7 00440000 E2E2D6D7 00000000 W 011C0000 E2E2D6D7 00000288 000003D8 0000068C 59F48900 001F001F 00F90480 E2D4C640 011C0000 E2E2D6D7 00000000 E2D9E540 00000156 00000000 00000000 00000000 00000000 011C0000 E2E2D6D7 00000000 00000000 00000000 00010000 0000000D 00000000 00050000 FF00A6E9 00000001 V FF00A6E9 00000002 0018000E 00800001 004C0001 001E001E 00F90E00 C9D9D3D4 00000046 FF00A6E9 00000002 C7E3C640 00000000 000000D2 00000000 00000000 00000000 00000000 FF00A6E9 00000002 00000000 00000000 D6D7F840 0000000E 00000000 00000000 00000005 94B6A6E9 BD6636FA C C33E28F7 DD03EFB9 I J K 000000A0 00980001 404DE2E3 C1E3405D 4DC7E3C6 405DD7D3 4D5C405D C2E4C6E2 00B3ADB8 E2E2D6D7 C16DE3C5 D9C5E2C1 271F0001 004C0200 40404040 40404040 C33E2901 1303EFB9 00000000 00000000 C33E2948 000006D8 00000590 00000458 000004C4 00F91400 C4C9E2E3 00000000 00000046 C33E294B D9C5E240 00000001 00000000 00000036 00000000 00000000 00000000 D6D7F440 C33E294D 00000000 00000000 D6D7F740 00000000 0000000D 00000000 00040000 00000005 E203EFB9 004C0001 00400001 00280001 00200001 C4C2D4F1 00000000 0629E2BC 00000000 1603EFB9 00000000 00000001 00000000 00000036 00000000 00000000 D6D7F340 00000000 3C03EFB9 00000000 D6D7F640 00000000 00000000 00000000 00000000 00000006 00000000 5C021000 00010000 D 00F91400 E2E2D6D7 L M N 00000038 00680001 C3D3C1E2 E2404D5C C1D5404D 5C405DC1 C9E9C540 4D5C405D A6E9C33E 28EF4403 6DD3C1C2 C4C2F2D5 E2E8E2D6 D7D94040 40404040 E2E8E2D6 00F91400 E2E2D6D7 00000000 00000000 00F91400 00000090 000005D0 00000644 D4E2E3D9 00000001 3413C60E 00000000 00000000 00F91400 00000000 00000000 00000000 00000000 00000000 D6D7F240 00000000 00000000 00F91400 D6D7F540 00000000 00000000 00000000 00000000 00030000 00000006 00000000 E2E2D6D7 001C0004 00740001 00480001 00000001 1A789573 00000000 145CE000 00000000 E2E2D6D7 00000000 00000000 00000000 00000004 D6D7F140 00000000 00000000 00000000 E2E2D6D7 00000000 00000000 00000000 00000000 00000000 00000003 00000000 00000000 0000 D4E2E3D9 O 0060005E 405DD9D4 E4E3C8C9 FFFFFFFF E F 01000100 4DE2E3C1 C9C4404D C4404D5C 00040101
00000006 00000001 C5E34040 D3E4D5C4 F0F2F34B C7C3E2C3 D7D94040 U D4E2E3D9 00280200 00000000 00000000 D4E2E3D9 00000100 00000480 000004E4 762236F2 00000000 1C4D0A00 001D001D 00000000 D4E2E3D9 00000000 00000000 00000000 E2D9F240 00000000 00000000 00000000 00000000 D4E2E3D9 00000000 00000000 00000000 00000000 00020000 00000003 00000000 006A0000 X 01000100 001C000E 00440001 00AC0001 00000000 95826100 00220022 00F91600 Y 01000300 00000000 00000000 E2D9F140 00000000 00000000 00000000 00000000 Y 01000300 00000000 00000000 00000000 00000000 0000000D 00000000 00000000
Figure 161. DB2 trace output sent to GTF (spanned records printed with DFSERA10 print program of IMS)

Key to Figure 161:
A 011C  Record length (field QWGTLEN); beginning of GTF writer header section
B A6E9 C33E28F7 DD03  Timestamp (field QWGTTIME)
C EFB9  Event ID (field QWGTEID)
D E2E2D6D7 D4E2E3D9  Job name (field QWGTJOBN)
E 0100  Length of data section
F 01  Segment control code (01 = first segment of the first record)
Key to Figure 161 on page 1143 (continued):
G E2E2D6D7  Subsystem ID (field QWGTSSID)
H  End of GTF writer header section
I 000000A0  Offset to product section; beginning of self-defining section
J 0098  Length of product section
K 0001  Number of times the product section is repeated
L 00000038  Offset to first (in this case, only) data section
M 0068  Length of data section
N 0001  Number of times the data section is repeated
O 0060005E  Beginning of data section
P 004C0110...  Beginning of product section
Q 0004  IFCID (field QWHSIID)
R 02  Number of self-defining sections in the record (field QWHSNSDA)
S xx  Release indicator number (field QWHSRN); this varies according to the actual release level of DB2 you are using
T E2C1D5E3...  Local location name (16 bytes)
U 02  Last segment of the first record
V  End of first record
W  Beginning of GTF header for new record
X 01  First segment of a spanned record (QWGTDSCC = QWGTDS01)
Y 03  Middle segment of a spanned record (QWGTDSCC = QWGTDS03)
Z 02  Last segment of a spanned record (QWGTDSCC = QWGTDS02)
AA 004C  Beginning of product section
GTF records are blocked to 256 bytes. Because some of the trace records exceed the GTF limit of 256 bytes, they have been blocked by DB2. Use the following logic to process GTF records:
1. Is the GTF event ID of the record equal to the DB2 ID (that is, does QWGTEID = X'xFB9')? If it is not equal, get another record. If it is equal, continue processing.
2. Is the record spanned? If it is spanned (that is, QWGTDSCC is not QWGTDS00), test to determine whether it is the first, middle, or last segment of the spanned record.
   a. If it is the first segment (that is, QWGTDSCC = QWGTDS01), save the entire record, including the sequence number (QWGTWSEQ) and the subsystem ID (QWGTSSID).
   b. If it is a middle segment (that is, QWGTDSCC = QWGTDS03), find the first segment that matches on the sequence number (QWGTWSEQ) and the subsystem ID (QWGTSSID). Then move the data portion immediately after the GTF header to the end of the previous segment.
   c. If it is the last segment (that is, QWGTDSCC = QWGTDS02), find the first segment that matches on the sequence number (QWGTWSEQ) and the subsystem ID (QWGTSSID). Then move the data portion immediately after the GTF header to the end of the previous record. Now process the completed record.
If the record is not spanned, process it as is.
Figure 162 on page 1146 shows the same output after it has been processed by a user-written routine that follows the logic outlined previously.
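The following C sketch implements steps 1 and 2. It reuses the QWGT struct sketched after Table 294; the pending-record table and the process_complete callback are our assumptions, the QWGTDS00 through QWGTDS03 names are taken to be the segment-control codes 0 through 3 from Table 294, and big-endian field interpretation is assumed throughout (on a little-endian host the halfword and fullword fields would first have to be byte-swapped).

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

enum { QWGTDS00, QWGTDS01, QWGTDS02, QWGTDS03 };  /* 0=complete, 1=first,
                                                     2=last, 3=middle     */
#define QWGT_HDRLEN 40          /* QWGTEND offset from Table 294 */

typedef struct {                /* one partially assembled spanned record */
    int      in_use;
    uint32_t seq;               /* QWGTWSEQ of the first segment */
    char     ssid[4];           /* QWGTSSID of the first segment */
    unsigned char *data;
    size_t   len;
} Pending;
static Pending pend[16];        /* small table of open records */

extern void process_complete(const unsigned char *data, size_t len);

static Pending *find_pending(uint32_t seq, const char ssid[4], int create)
{
    for (int i = 0; i < 16; i++)
        if (pend[i].in_use && pend[i].seq == seq &&
            memcmp(pend[i].ssid, ssid, 4) == 0)
            return &pend[i];
    if (!create)
        return NULL;
    for (int i = 0; i < 16; i++)
        if (!pend[i].in_use) {
            pend[i] = (Pending){ .in_use = 1, .seq = seq };
            memcpy(pend[i].ssid, ssid, 4);
            return &pend[i];
        }
    return NULL;                /* table full; real code must handle this */
}

/* Move the data portion that follows the GTF header onto the end of the
 * record being assembled (steps 2b and 2c).
 */
static void append_segment(Pending *p, const unsigned char *rec, uint16_t dlen)
{
    p->data = realloc(p->data, p->len + dlen);
    memcpy(p->data + p->len, rec + QWGT_HDRLEN, dlen);
    p->len += dlen;
}

void handle_gtf_record(const unsigned char *rec)
{
    const QWGT *h = (const QWGT *)rec;

    if ((h->qwgteid & 0x0FFF) != 0x0FB9)       /* step 1: not a DB2 record */
        return;

    if (h->qwgtdscc == QWGTDS00) {             /* step 2: not spanned      */
        process_complete(rec + QWGT_HDRLEN, h->qwgtdlen);
        return;
    }
    if (h->qwgtdscc == QWGTDS01) {             /* 2a: first segment        */
        Pending *p = find_pending(h->qwgtwseq, h->qwgtssid, 1);
        if (p)
            append_segment(p, rec, h->qwgtdlen);
    } else {                                   /* 2b/2c: middle or last    */
        Pending *p = find_pending(h->qwgtwseq, h->qwgtssid, 0);
        if (!p)
            return;
        append_segment(p, rec, h->qwgtdlen);
        if (h->qwgtdscc == QWGTDS02) {         /* 2c: last, so process it  */
            process_complete(p->data, p->len);
            free(p->data);
            *p = (Pending){0};
        }
    }
}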
000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000120 000000 000020 000040 000060 000080 0000A0 0000C0 0000E0 000100 000120 000140 000160 000180 0001A0 0001C0 0001E0 000200 000220 000240 000260 000280 0002A0 0002C0 0002E0 000300 000320 000340 000360 000380 0003A0 0003C0 0003E0 000400 000420 000440 000460 000480 0004A0 0004C0 0004E0 000500 000520 000540 000560 000580 0005A0 0005C0 0005E0 000600
01380000 E2E2D6D7 D9E340E3 5C405DC4 405DC9C6 004C0110 00000001 F0404040 D5F6F0F2 00000000 A 07240000
FF00A6E9 00000019 D9C1C3C5 C5E2E340 C3C9C440 000402xx E2C1D5E3 A6E9DCA7 E2E2D6D7 00000000
DCA7E275 000000A0 404DE2E3 4DC7E3C6 4D5C405D 00B3ADB8 C16DE3C5 DF960001 40404040 00000000
FF00A6E9 C E2E2D6D7 0000001A 00000288 0018000E 000003D8 00800001 0000068C AB000300 001F001F 00F90480 E2D4C640 00000000 00000019 00000000 00000036 00000000 20000004 D6D7F340 00000000 00000000 00000000 00000000 00000000 00020000 00000003 00000000 006A0000 00000000 00000000 008F0000 00000000 00000000 00CA0000 00000000 00000000 00000000 00000000 0000000D 00000001 00000001 00000000 00000000 00000000 00000001 00000002 00000000 00000000 00000000 00000000 00000000 00000000 004C0001 001E001E 00F90E00 C9D9D3D4 0000004D 00000000 00000000 00000000 00000000 0000000C D6D7F240 00000000 00000000 00000000 00000000 00000000 00000000 00000041 00000000 00000000 0000000C 00000000 00000000 00000000 00000000 00000000 00000041 00000000 00000000 00000000 00000000 0000000A 0000000C 00000000 00000000 E2C1D56D 000004A8 00000000 00000001 00000000 00000000 00000000 00000003 00000000 00000000
1204EFB9 00980001 C1E3405D 405DD7D3 C2E4C6E2 E2E2D6D7 D9C5E2C1 004C0200 40404040 00000000 B DCA8060C 2803EFB9 D 000006D8 004C0001 00000590 00400001 00000458 00280001 F 000004C4 00200001 00F91400 C4C2D4F1 C4C9E2E3 00000000 00000000 07165F79 0000004D 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000004 E2D9F240 D6D7F140 00000002 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 D6D7F840 00010000 00000042 00000011 00000030 00000000 00000000 00050000 0000000B 0000000B 00000001 00000000 00000000 008E0000 00000000 00000000 00000000 00000000 00000000 00920000 00000000 00000011 00000030 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000004 00000029 00000009 00000000 04A29740 00000000 00000000 00000000 00000000 D1D6E2C5 40404040 000005C7 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000002 00000000 00000000 00000000 00000000 00000000 0000000C 00000001 00000000 00000001
00F91400 00000038 C3D3C1E2 C1D5404D C9E9C540 0093018F 6DD3C1C2 E2E8E2D6 40404040 00000000 00F91400 E 00000090 000005D0 00000644 D4E2E3D9 00000001 4928674B 00000000 00000000 00000000 00000000 E2D9F140 00000092 00000001 00000000 00000000 00000000 00000000 D6D7F740 00000000 00000011 00000000 00040000 0000000A 00000000 008D0000 00000000 00000000 00910000 00000000 00000000 00000000 00000000 00000000 00000000 000000C3 00000000 00000000 00000000 40404040 00000001 00000000 00000000 00000000 00000003 00000005 00000000 00000000 00000000
E2E2D6D7 00680001 E2404D5C 5C405DC1 4D5C405D 11223310 C4C2F2D5 D7D94040 E2E8E2D6 00000000 E2E2D6D7 001C0004 00740001 00480001 00000003 1DE8AEE2 00000000 3C2EF500 00000000 00000000 E2D9E540 00000156 00000001 00000001 00000000 00000000 00000000 D6D7F640 00000000 00000000 00000030 00000000 0000000C 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000003 00000000 00000007 00000000
D4E2E3D9 07080000 00000100 001C000E 00000480 00440001 000004E4 00AC0001 27BCFDBC 00000000 217F6000 001D001D 00000000 C7E3C640 00000000 000000D2 00000091 00000000 00000000 00000000 D6D7F540 00000000 00000000 00000000 00000000 00030000 0000000C 00000000 008C0000 00000000 00000000 00900000 00000000 00000000 00000000 00000000 00000000 00000000 000005D4 00000000 00000001 00000000 00000000 00000002 00000003 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000000 DB0CB200 00220022 00F91600 D9C5E240 00000019 00000000 00000036 00000091 00010000 00000000 D6D7F440 00000000 00000000 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000130 00000000 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000006 00000000 00000000 00000000 00000000
Figure 162. DB2 trace output sent to GTF (assembled with a user-written routine and printed with DFSERA10 print program of IMS) (Part 1 of 2)
Figure 162. DB2 trace output sent to GTF (assembled with a user-written routine and printed with DFSERA10 print program of IMS) (Part 2 of 2)

Key to Figure 162 on page 1146:
A 0724  Length of assembled record; beginning of GTF writer header section of second record (field QWGTLEN)
B EFB9  GTF event ID (field QWGTEID)
C  End of GTF writer header section of second record
D 000006D8  Offset to product section
E 00000090  Offset to first data section
F 000004C4  Offset to last data section
G 004C011A  Beginning of product section
H  End of second record
Self-defining section
The self-defining section following the writer header contains pointers that enable you to find the product and data sections, which contain the actual trace data. Each pointer is a descriptor that contains three fields:
1. A fullword that contains the offset from the beginning of the record to the data section.
2. A halfword that contains the length of each item in the data section. If this value is zero, the items are of varying length.
3. A halfword that contains the number of times that the data section is repeated. If the field contains 0, the data section is not in the record. If it contains a number greater than 1, multiple data items are stored contiguously within that data section. To find the second data item, add the length of the first data item to the address of the first data item (and so forth).
Pointers occur in a fixed order, and their meanings are determined by the IFCID of the record. Different sets of pointers can occur, and each set is described by a separate DSECT. Therefore, to examine the pointers, you must first establish addressability by using the DSECT that provides the appropriate description of the self-defining section. To do this, perform the following steps:
1. Compute the address of the self-defining section. The self-defining section begins at label SM100END for statistics records, SM101END for accounting records, and SM102END for performance and audit records. It does not matter which mapping DSECT you use because the length of the SMF writer header is always the same. For GTF, use QWGTEND.
2. Determine the IFCID of the record. Use the first field in the self-defining section; it contains the offset from the beginning of the record to the product section. The product section contains the IFCID.
The product section is mapped by DSNDQWHS; the IFCID is mapped by QWHSIID. For statistics records that have IFCID 0001, establish addressability using label QWS0; for statistics records that have IFCID 0002, establish addressability using label QWS1. For accounting records, establish addressability using label QWA0. For performance and audit records, establish addressability using label QWT0. After establishing addressability using the appropriate DSECT, use the pointers in the self-defining section to locate the record's data sections.
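The same two steps can be expressed in C for records that have already been read into memory. This is a sketch only: sds_offset is the offset of the self-defining section past the writer header, which the caller computes from SM100END, SM101END, SM102END, or QWGTEND as described above; the 8-byte pointer shape follows the fullword-halfword-halfword description; and QWHSIID is read at hex offset 4 of the product section, per Table 295. Big-endian data is assumed.

#include <stdint.h>
#include <stddef.h>

/* Big-endian field extraction from raw record bytes. */
static uint16_t be16(const unsigned char *p) { return (uint16_t)((p[0] << 8) | p[1]); }
static uint32_t be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* One self-defining-section pointer: a fullword offset from the start of
 * the record, a halfword item length (0 = variable-length items), and a
 * halfword repeat count (0 = section not present).
 */
typedef struct { uint32_t off; uint16_t itemlen; uint16_t count; } SdsPtr;

static SdsPtr sds_ptr(const unsigned char *rec, size_t sds_offset, unsigned n)
{
    const unsigned char *p = rec + sds_offset + 8u * n;
    return (SdsPtr){ be32(p), be16(p + 4), be16(p + 6) };
}

/* Step 2: the first pointer always locates the product section, and the
 * IFCID (QWHSIID) is at hex offset 4 of that section.
 */
uint16_t record_ifcid(const unsigned char *rec, size_t sds_offset)
{
    SdsPtr prod = sds_ptr(rec, sds_offset, 0);
    return be16(rec + prod.off + 4);
}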
Figure 163. Relationship between self-defining section and data sections for same-length data items
Figure 164. Relationship between self-defining section and data sections for variable-length data items. In this case, the length field of the pointer to a data section contains 0 to indicate variable-length data items, and the length of each data item is indicated in the 2 bytes that precede it.
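Continuing the C sketch above (it reuses be16 and SdsPtr from the preceding fragment), the loop below visits every item of one data section in either form. The 2-byte length prefix of Figure 164 is assumed not to include the prefix itself; visit_item is a hypothetical callback.

extern void visit_item(const unsigned char *item, uint16_t len);

/* Walk the items of the data section located by an SdsPtr. Same-length
 * items (Figure 163) advance by the pointer's item length; variable-length
 * items (Figure 164) carry a 2-byte length field in front of each item.
 */
void walk_items(const unsigned char *rec, SdsPtr p)
{
    const unsigned char *item = rec + p.off;
    for (uint16_t i = 0; i < p.count; i++) {
        if (p.itemlen != 0) {              /* same-length items  */
            visit_item(item, p.itemlen);
            item += p.itemlen;
        } else {                           /* variable-length    */
            uint16_t len = be16(item);     /* 2-byte prefix      */
            visit_item(item + 2, len);
            item += 2u + len;
        }
    }
}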
Product section
The product section for all record types contains the standard header. The other headers (correlation, CPU, distributed, and data sharing data) might also be present. Table 295 shows the contents of the product section standard header.
Table 295. Contents of product section standard header

Hex offset  Macro DSNDQWHS field  Description
0   QWHSLEN   Length of standard header
2   QWHSTYP   Header type
3   QWHSRMID  RMID
4   QWHSIID   IFCID
6   QWHSRELN  Release number section
6   QWHSNSDA  Number of self-defining sections
7   QWHSRN    DB2 release identifier
8   QWHSACE   ACE address
C   QWHSSSID  Subsystem ID
10  QWHSSTCK  Timestamp; STORE CLOCK value assigned by DB2
18  QWHSISEQ  IFCID sequence number
1C  QWHSWSEQ  Destination sequence number
20  QWHSMTN   Active trace number mask
24  QWHSLOCN  Local location name
34  QWHSLWID  Logical unit of work ID
34  QWHSNID   Network ID
3C  QWHSLUNM  LU name
44  QWHSLUUV  Uniqueness value
Table 295. Contents of product section standard header (continued)

Hex offset  Macro DSNDQWHS field  Description
4A  QWHSLUCC  Commit count
4C  QWHSFLAG  Flags
4D  QWHS_UNICODE  %U fields contain Unicode.
4E  QWHSLOCN_Off  If QWHSLOCN is truncated, this is the offset from the beginning of QWHS to QWHSLOCN_Len. If the value is zero, refer to QWHSLOCN.
(defined by QWHSLOCN_Off)  QWHSLOCN_D  This field contains both QWHSLOCN_Len and QWHSLOCN_Var. This element is only present if QWHSLOCN_Off is greater than 0.
    QWHSLOCN_Len  The length of the following field. This element is only present if QWHSLOCN_Off is greater than 0.
    QWHSLOCN_Var  The local location name. This element is only present if QWHSLOCN_Off is greater than 0.
        The sub-version for the base release.
    QWHSEND  End of product section standard header.
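For completeness, the fixed portion of Table 295 can be rendered in C as follows. This is a sketch: the field widths are derived from the gaps between the documented hex offsets, one-byte packing and big-endian data are assumed, and the grouping labels QWHSRELN and QWHSLWID are represented only by their member fields.

#include <stdint.h>

/* Product section standard header (DSNDQWHS) per Table 295. */
#pragma pack(1)
typedef struct {
    uint16_t qwhslen;            /*  0: length of standard header         */
    uint8_t  qwhstyp;            /*  2: header type                       */
    uint8_t  qwhsrmid;           /*  3: RMID                              */
    uint16_t qwhsiid;            /*  4: IFCID                             */
    uint8_t  qwhsnsda;           /*  6: number of self-defining sections  */
    uint8_t  qwhsrn;             /*  7: DB2 release identifier            */
    uint32_t qwhsace;            /*  8: ACE address                       */
    char     qwhsssid[4];        /*  C: subsystem ID                      */
    unsigned char qwhsstck[8];   /* 10: STORE CLOCK timestamp             */
    uint32_t qwhsiseq;           /* 18: IFCID sequence number             */
    uint32_t qwhswseq;           /* 1C: destination sequence number       */
    uint32_t qwhsmtn;            /* 20: active trace number mask          */
    char     qwhslocn[16];       /* 24: local location name               */
    char     qwhsnid[8];         /* 34: network ID (start of QWHSLWID)    */
    char     qwhslunm[8];        /* 3C: LU name                           */
    unsigned char qwhsluuv[6];   /* 44: uniqueness value (ends at 4A)     */
} QWHS;
#pragma pack()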
Table 296 shows the contents of the product section correlation header.
Table 296. Contents of product section correlation header

Hex offset  Macro DSNDQWHC field  Description
0   QWHCLEN   Length of correlation header
2   QWHCTYP   Header type
3             Reserved
4   QWHCAID   Authorization ID
C   QWHCCV    Correlation ID
18  QWHCCN    Connection name
20  QWHCPLAN  Plan name
28  QWHCOPID  Original operator ID
30  QWHCATYP  The type of system that is connecting
34  QWHCTOKN  Trace accounting token field
4A            Reserved
4C  QWHCEUID  User ID of the end user at the workstation
5C  QWHCEUTX  Transaction name for the end user
7C  QWHCEUWN  Workstation name for the end user
8E  QWHCAID_Off  If QWHCAID is truncated, this is the offset from the beginning of QWHC to QWHCAID_Len. If the value is zero, refer to QWHCAID.
Table 296. Contents of product section correlation header (continued)

Hex offset  Macro DSNDQWHC field  Description
(defined by QWHCAID_Off)  QWHCAID_D  This field contains both QWHCAID_Len and QWHCAID_Var. This element is only present if QWHCAID_Off is greater than 0.
    QWHCAID_Len  The length of the field. This element is only present if QWHCAID_Off is greater than 0.
    QWHCAID_Var  The authorization ID. This element is only present if QWHCAID_Off is greater than 0.
    QWHCOPID_Off  If QWHCOPID is truncated, this is the offset from the beginning of QWHC to QWHCOPID_Len. If the value is zero, refer to QWHCOPID.
(defined by QWHCOPID_Off)  QWHCOPID_D  This field contains both QWHCOPID_Len and QWHCOPID_Var. This element is only present if QWHCOPID_Off is greater than 0.
    QWHCOPID_Len  The length of the field. This element is only present if QWHCOPID_Off is greater than 0.
    QWHCOPID_Var  The original operator ID. This element is only present if QWHCOPID_Off is greater than 0.
    QWHCEUID_Off  If QWHCEUID is truncated, this is the offset from the beginning of QWHC to QWHCEUID_Len. If the value is zero, refer to QWHCEUID. Trusted context and role data is present if an agent running under a trusted context writes the record and the trusted context data can be accessed.
(defined by QWHCEUID_Off)  QWHCEUID_D  This field contains both QWHCEUID_Len and QWHCEUID_Var. This element is only present if QWHCEUID_Off is greater than 0.
    QWHCEUID_Len  Length of the field. This element is only present if QWHCEUID_Off is greater than 0.
    QWHCEUID_Var  The end user's USERID. This element is only present if QWHCEUID_Off is greater than 0.
        End of product section correlation header.
The corresponding truncation fields at the end of the product section distributed header (mapped by DSNDQWHD) follow the same pattern:

Hex offset  Macro DSNDQWHD field  Description
34  QWHDRQNM_Off  If QWHDRQNM is truncated, this is the offset from the beginning of QWHD to QWHDRQNM_Len. If the value is zero, refer to QWHDRQNM.
(defined by QWHDRQNM_Off)  QWHDRQNM_D  This field contains both QWHDRQNM_Len and QWHDRQNM_Var. This element is only present if QWHDRQNM_Off is greater than 0.
    QWHDRQNM_Len  The length of the following field. This element is only present if QWHDRQNM_Off is greater than 0.
    QWHDRQNM_Var  The full QWHDRQNM value. This element is only present if QWHDRQNM_Off is greater than 0.
    QWHDSVNM_Off  If QWHDSVNM is truncated, this is the offset from the beginning of QWHD to QWHDSVNM_Len. If the value is zero, refer to QWHDSVNM.
(defined by QWHDSVNM_Off)  QWHDSVNM_D  This field contains both QWHDSVNM_Len and QWHDSVNM_Var. This element is only present if QWHDSVNM_Off is greater than 0.
    QWHDSVNM_Len  The length of the following field. This element is only present if QWHDSVNM_Off is greater than 0.
    QWHDSVNM_Var  The full QWHDSVNM value. This element is only present if QWHDSVNM_Off is greater than 0.
38  QWHDEND  End of product section distributed header.
Table 299. Contents of trace header (continued)

Hex offset  Macro DSNDQWHT field  Description
4   QWHTTID   Event ID
6   QWHTTAG   ID specified on DSNWTRC macro
7   QWHTFUNC  Resource manager function code. Default is 0.
8   QWHTEB    Execution block address
C   QWHTPASI  Prior address space ID - EPAR
E   QWHTR14A  Register 14 address space ID
10  QWHTR14   Contents of register 14
14  QWHTR15   Contents of register 15
18  QWHTR0    Contents of register 0
1C  QWHTR1    Contents of register 1
20  QWHTEXU   Address of MVS execution unit
24  QWHTDIM   Number of data items
26  QWHTHASI  Home address space ID
28  QWHTDATA  Address of the data
2C  QWHTFLAG  Flags in the trace list
2E  QWHTDATL  Length of the data list
30  QWHTEND   End of header
The following sample shows an accounting trace for a distributed transaction sent to SMF (printed with the IMS DFSERA10 print program). In this example, one accounting record (IFCID 0003) is from the server site (SILICON_VALLEY_LAB). DSNDQWA0 maps the self-defining section for IFCID 0003.
+0000 08760000 5E65007E 75660108 B C D E F +0020 011E0001 00000084 02100001 J +0040 00580001 00000308 00000001 N +0060 00000000 00000000 00000000 +0080 00740001 C3268DC4 F03E0E13 +00A0 030237A0 00000000 00000000 +00C0 00000000 00000000 00000001 +00E0 00000000 00000000 00000000 289FF3F0 G 0000054C K 000003F2 00000000 C3268DC6 00000000 00000001 14293057 A F9F0E5F8 F1C20000 00000000 00000758 H I 01CC0001 00000718 00400001 000004F4 L M 01020001 00000000 00000000 00000000 00000000 1AB305D3 00000000 00000001 00000000 00000000 00000000 0000000C 0143BA08 00000000 00000000 0073BAA0 40404040 00000000 0000001A 00000294 00000000 40404040 025638C0 00000022
+0100 +0120 +0140 +0160 +0180 +01A0 +01C0 +01E0 +0200 +0220 +0240 +0260 +0280 +02A0 +02C0 +02E0 +0300 +0320 +0340 +0360 +0380 +03A0 +03C0 +03E0 +0400 +0420 +0440 +0460 +0480 +04A0 +04C0 +04E0 +0500 +0520 +0540 +0560 +0580 +05A0 +05C0 +05E0 +0600 +0620 +0640 +0660 +0680 +06A0 +06C0 +06E0 +0700 +0720 +0740 +0760 +0780 +07A0 +07C0 +07E0 +0800
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 86C8FC85 00000000 00080000 000A0000 00000000 00000000 00010000 00000000 D5F0F8F0 F1404040 404040C2 E2D5E3C5 40404040 40404040 40404040 40404040
00000000 00000000 00000000 003F0001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000008 00000000 00040000 00000000 00000000 00060000 00000000 00000000 F1F00000 40404040 C1E3C3C8 D7F340E4 40404040 40404040 40404040 40404040
00000000 00000000 00000000 00000000 00000000 000017CF 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0E44EB81 00000004 O 00E8E2E3 00004040 00010000 00000000 00000000 00010000 00000000 00000000 404040E4 404040E3 E2C5D97E 40404040 40404040 40404040 40404040
00000000 00000002 00000000 00000000 00000000 C1D3D3E3 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000008 D3C5C3F1 49B70000 00000000 00000000 00000000 00000000 00000000 00000000 E2C9C2D4 C5D7F440 E2E8E2C1 40404040 40404040 40404040 40404040
00000000 00000000 00000000 00000000 00000000 E2D64040 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00492740 00000002 40404040 05CD0000 00000000 00000000 00000000 00000000 00000000 P 00005FC4 E2E840E2 40404040 C4D44040 40404040 40404040 40404040 40404040
0534D881 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 40404040 00000000 00000000 00000000 00008000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 02EF0576 00000000 40400000 00010000 00000000 00000000 00000000 00000000 00010000 F0F1F0E2 C4C2F2C2 E8E2C1C4 40404040 40404040 40404040 40404040 40404040
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 000A0000 00000000 00000000 00000000 00010000 0001C4E2 E3D3C5C3 C1E3C3C8 D44040C4 40404040 40404040 40404040 40404040 40404040
40404040 40404040 40404040 40404040 00000000 00000000 00000003 00000000 00000000 00000000 00000022 0000001A R 00000000 00000000 00000000 209501CC 00000000 00000001 00000001 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000012 00000002 00000000 00000000 00000000 00000000 165DE720 E5F8F1C2 C3268DC6 1BF2201E C3F1C240 40404040 40404040 E4E2C9C2 U 99360003 00000000 00020094 0200E2E8 4040C2C1 E3C3C840 4040C4E2 D5E3C5D7 C9C2D4E2 E84BE2E8 C5C3F1C4 C2F2C326 40404040 40404040 40404040 40404040
E2D5F0F8 E8C5C3F1 404040E2 40404040 40404040 40404040 40404040 40404040 Q 40400000 00000000 00000000 00000000 00000000 00000001 D8E7E2E3 00000000 00000000 00000005 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000008 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 S 00000000 00000000 T 0052011A 00000004 F1C4C2F2 D7F44040 40400000 40404040 40404040 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0000002B 00000000 00030D81 E2E3D3C5 C3268DC2 40404040 0008E4E2 40404040 40404040 V
00000000 00000000 00000000 00000000 00000000 00000000 00000001 00000009 D4E2E840 E2E8C5C3 E2C1C4D4 F340E2E8 8DC29936 40404040 4040E3C5 E2C1C4D4 00004040 40404040
40404040 40404040 40404040 40404040 1000E2E3 D3C5C3F1 40404040 40404040 40404040 40404040 4040C4E2 D5F0F8F0
40404040 40404040 00000000 00000038 4040C326 8DC607B4 F7D0E2E3 D3C5C3F1 F1F00000 0000
Key to DB2 distributed data trace output sent to SMF:
A 00000758  Offset to product section; beginning of self-defining section
B 011E  Length of product section
C 0001  Number of times the product section is repeated
D 00000084  Offset to accounting section
E 0210  Length of accounting section
F 0001  Number of times the accounting section is repeated
G 0000054C  Offset to SQL accounting section
H 00000718  Offset to buffer manager accounting section
I 000004F4  Offset to locking accounting section
J 00000308  Offset to distributed section
K 000003F2  Offset to MVS/DDF accounting section
L 00000000  Offset to IFI accounting section
M 00000000  Offset to package/DBRM accounting section
N 00000000  Beginning of accounting section (DSNDQWAC)
O 00E8E2E3  Beginning of distributed section (DSNDQLAC)
P 00005FC4  Beginning of MVS/DDF accounting section (DSNDQMDA)
Q  Beginning of locking accounting section (DSNDQTXA)
R  Beginning of SQL accounting section (DSNDQXST)
S  Beginning of buffer manager accounting section (DSNDQBAC)
T  Beginning of product section (DSNDQWHS); beginning of standard header
U  Beginning of correlation header (DSNDQWHC)
V  Beginning of distributed header (DSNDQWHD)
v Activate and deactivate predefined trace classes and trace records (identified by IFCIDs), restricting tracing to a set of DB2 identifiers (plan name, authorization ID, resource manager identifier (RMID), and so on).
IFI functions
A monitor program can use the following IFI functions:

COMMAND
   To submit DB2 commands. For more information, see COMMAND: Syntax and usage with IFI on page 1160.

READS
   To obtain monitor trace records synchronously. The READS request causes those records to be returned immediately to the monitor program. For more information, see READS: Syntax and usage with IFI on page 1162.

READA
   To obtain trace records of any trace type asynchronously. DB2 records trace events as they occur and places that information into a buffer; a READA request moves the buffered data to the monitor program. For more information, see READA: Syntax and usage with IFI on page 1177.

WRITE
   To write information to a DB2 trace destination that was previously activated by a START TRACE command. For more information, see WRITE: Syntax and usage with IFI on page 1179.
The parameters that are passed on the call indicate the desired function (as described in IFI functions on page 1158), point to communication areas used by the function, and provide other information that depends on the function specified. Because the parameter list may vary in length, the high-order bit of the last parameter must be on to signal that it is the last parameter in the list. Example: To turn on the bit in assembler, use the VL option to signal a variable-length parameter list. The communication areas that are used by IFI are described in Common communication areas for IFI calls on page 1180.

After you insert this call in your monitor program, you must link-edit the program with the correct language interface. Each of the following language interface modules has an entry point of DSNWLI for IFI:
v CAF: DSNALI
v TSO: DSNELI
v CICS: DSNCLI
v IMS: DFSLI000
v RRSAF: DSNRLI

DSNALI, the CAF (call attachment facility) language interface module, includes a second entry point of DSNWLI2. A monitor program that link-edits DSNALI with the program can make IFI calls directly to DSNWLI. A monitor program that loads DSNALI must also load DSNWLI2 and remember its address. When the monitor program calls DSNWLI, the program must have a dummy entry point to handle the call to DSNWLI and then call the real DSNWLI2 routine. See Part 6 of DB2 Application Programming and SQL Guide for additional information about using CAF.

Considerations for writing a monitor program: A monitor program issuing IFI requests must be connected to DB2 at the thread level. If the program contains SQL statements, you must precompile the program and create a DB2 plan using the BIND process. If the monitor program does not contain any SQL statements, it
does not have to be precompiled. However, as is the case in all the attachment environments, even though an IFI-only program (one with no SQL statements) does not have a plan of its own, it can use any plan to get the thread-level connection to DB2. The monitor program can run in either 24- or 31-bit mode.

Monitor trace classes: Monitor trace classes 2 through 8 can be used to collect information related to DB2 resource usage. Use monitor trace class 5, for example, to find out how much time is spent processing IFI requests. Monitor trace classes 2, 3, and 5 are identical to accounting trace classes 2, 3, and 5. For more information about these traces, see Monitor trace on page 1198.

Monitor authorization: On the first READA or READS call from a user, an authorization check determines whether the primary authorization ID or one of the secondary authorization IDs of the plan executor has MONITOR1 or MONITOR2 privilege. If your installation uses the access control authorization exit routine, that exit routine might control the privileges that can use the monitor trace. If you have an authorization failure, an audit trace (class 1) record is generated that contains the return and reason codes from the exit. This is included in IFCID 0140. See Access control authorization exit routine on page 1065 for more information on the access control authorization exit routine.
ifca
   The IFCA (instrumentation facility communication area) is an area of storage that contains the return code and reason code. The IFCA indicates the following information:
   v The success or failure of the request
   v Diagnostic information from the DB2 component that executed the command
   v The number of bytes moved to the return area
   v The number of bytes of the message segments that did not fit in the return area
   Some commands might return valid information despite a nonzero return code or reason code. For example, the DISPLAY DATABASE command might indicate that more information could have been returned than was allowed. If multiple errors occur, the last error is returned to the caller. For example, if the command is in error and the error message does not fit in the area, the error return code and reason code indicate that the return area is too small. If a monitor program issues START TRACE, the ownership token (IFCAOWNR) in the IFCA determines the owner of the asynchronous buffer. The owner of the buffer is the only process that can obtain data through a subsequent READA request. See Instrument facility communications area (IFCA) on page 1180 for a description of the IFCA.

return-area
   When the issued command finishes processing, it places messages (if any) in the return area. The messages are stored as varying-length records, and the total number of bytes in the records is placed in the IFCABM (bytes moved) field of the IFCA. If the return area is too small, as many message records as will fit are placed into the return area. The monitor program should analyze messages that are returned by the command function. See Return area on page 1184 for a description of the return area.

output-area
   Contains the varying-length command. See Output area on page 1185 for a description of the output area.

buffer-info
   This parameter is required for starting traces to an OP buffer; otherwise, it is not needed. This parameter is used only on COMMAND requests. It points to an area that contains information about processing options when a trace is started by an IFI call to an unassigned OPn destination buffer. An OPn destination buffer is considered unassigned if it is not owned by a monitor program. If the OPn destination buffer is assigned, the buffer information area is not used on a later START or MODIFY TRACE command to that OPn destination. For more information about using OPn buffers, see Usage notes for READA requests through IFI on page 1177. When you use buffer-info on START TRACE, you can specify the number of bytes that can be buffered before the monitor program ECB is posted. The ECB is posted when the amount of trace data collected has reached the value that is specified in the byte count field. The byte count field is also specified in the buffer information area. Table 301 on page 1162 summarizes the fields in the buffer information area.
Table 301. Buffer information area fields. This area is mapped by assembler mapping macro DSNDWBUF.

Name     Hex offset  Data type  Description
WBUFLEN  0  Signed two-byte integer  Length of the buffer information area, plus 4. A zero indicates that the area does not exist.
         2  Signed two-byte integer  Reserved.
WBUFEYE  4  Character, 4 bytes  Eye catcher for block: WBUF.
WBUFECB  8  Address  The ECB address to post when the buffer has reached the byte count specification (WBUFBC). The ECB must reside in monitor key storage. A zero indicates not to post the monitor program; in that case, the monitor program should use its own timer to determine when to issue a READA request.
WBUFBC   C  Signed four-byte integer  The records placed into the instrumentation facility must reach this value before the ECB is posted. If the number is zero and an ECB exists, posting occurs when the buffer is full.
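Pulling the pieces together, the following is a hedged C sketch of a COMMAND call that starts a trace to an OP buffer. Only the WBUF layout and initialization are taken from Table 301; everything else is an assumption to be verified: the IFCA and output area are treated as opaque because their layouts (DSNDIFCA and the output-area description) appear on pages 1180 and 1185 rather than here, z/OS XL C OS linkage is assumed to set the high-order bit on the last parameter (the VL convention the text requires), and the exact "plus 4" length convention for WBUFLEN should be checked against DSNDWBUF.

#include <string.h>

#pragma linkage(DSNWLI, OS)        /* OS-style parameter list with the    */
void DSNWLI(const char *function,  /* high-order bit on in the last word  */
            void *ifca, void *return_area,
            void *output_area, void *buffer_info);

#pragma pack(1)
typedef struct {                   /* buffer information area, Table 301  */
    short wbuflen;                 /* 0: length of the area, plus 4       */
    short rsvd;                    /* 2: reserved                         */
    char  wbufeye[4];              /* 4: eye catcher "WBUF"               */
    void *wbufecb;                 /* 8: ECB to post; zero = no posting   */
    int   wbufbc;                  /* C: byte count that triggers posting */
} WBUF;
#pragma pack()

/* ifca must already be initialized per DSNDIFCA, and output_area must
 * already hold the varying-length START TRACE command text.
 */
void start_trace_to_op_buffer(void *ifca, void *return_area, void *output_area)
{
    WBUF wbuf;
    memset(&wbuf, 0, sizeof wbuf);
    wbuf.wbuflen = (short)(sizeof wbuf);  /* verify the "plus 4" rule     */
    memcpy(wbuf.wbufeye, "WBUF", 4);
    wbuf.wbufecb = 0;                     /* no ECB: poll with a timer    */
    wbuf.wbufbc  = 0;

    DSNWLI("COMMAND ", ifca, return_area, output_area, &wbuf);
}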
See Synchronous data and READS requests through IFI on page 1173. IFCIDs 0124, 0129, 0147, 0148, 0149, 0150, 0199, 0234, 0254, 0316, and 0317 can be obtained only through the IFI READS interface. The program that issues the READS request does not need to start monitor class 1, because no ownership of an OP buffer is involved when data is obtained through the READS interface. Data is written directly to the application program's return area, bypassing the OP buffer. This is in direct contrast to the READA interface, where the application that issues READA must first issue a START TRACE command to obtain ownership of an OP buffer and start the appropriate traces.
ifca
   Contains information about the success of the call. See Instrument facility communications area (IFCA) on page 1180 for a description of the IFCA.

return-area
   Contains the varying-length records that are returned by the instrumentation facility. IFI monitor programs might require READS return areas that are large enough to accommodate the following information:
   v Larger IFCID 0147 and 0148 records that contain distributed thread data (both allied and database access).
   v Additional records that are returned when database access threads exist that satisfy the specified qualifications on the READS request.
   v Log record control intervals with IFCID 0129. For more information about using IFI to return log records, see Reading specific log records (IFCID 0129) on page 1126.
   v Log records based on user-specified criteria with IFCID 0306. For example, the user can retrieve compressed or decompressed log records. For more information about reading log records, see Appendix C, Reading log records, on page 1115.
   v Data descriptions and changed data returned with IFCID 0185.
If the return area is too small to hold all the records returned, it contains as many records as will fit. The monitor program obtains the return area for READS requests in its private address space. See Return area on page 1184 for a description of the return area.

ifcid-area
   Contains the IFCIDs of the desired information. The number of IFCIDs can be variable. If the length specification of the IFCID area is exceeded or an IFCID of X'FFFF' is encountered, the list is terminated. If an invalid IFCID is specified, no data is retrieved. See IFCID area on page 1185 for a description of the IFCID area.

qual-area
   This parameter is optional, and is used only on READS requests. It points to the qualification area, where a monitor program can specify constraints on the data that is to be returned. If the qualification area does not exist (length of binary zero), information is obtained from all active allied threads and database access threads. Information is not obtained for any inactive database access threads that might exist. The length constants for the qualification area are provided in the DSNDWQAL mapping macro. If the length is not equal to the value of one of these constants, IFI considers the call invalid. The following trace records, identified by IFCID, cannot be qualified: 0001, 0002, 0106, 0202, 0217, 0225, and 0230; if you attempt to qualify them, the qualification is ignored. The rest of the synchronous records can be qualified; see Synchronous data and READS requests through IFI on page 1173 for information about these records. However, not all the qualifications in the qualification area can be used for these IFCIDs; see Which qualifications are used for READS requests issued through IFI? on page 1172 for qualification restrictions. Unless the qualification area has a length of binary zero (in which case the area does not exist), the address of qual-area supplied by the monitor program points to an area that is formatted by the monitor program, as shown in Table 302 on page 1165.
Table 302. Qualification area fields. This area is mapped by the assembler mapping macro DSNDWQAL.

WQALLEN (hex offset 0; signed two-byte integer)
   Length of the qualification area, plus 4. The following constants set the qualification area length field:
   WQALLN1: When specified, the location name qualifications (WQALLOCN and WQALLUWI) are ignored.
   WQALLN2: When specified, the location name qualifications (WQALLOCN and WQALLUWI) are used.
   WQALLN3: When specified, the log data access fields (WQALLTYP, WQALLMOD, WQALLRBA, and WQALLNUM) are used for READS calls that use IFCID 0129.
   WQALLN4: When specified, the location name qualifications (WQALLOCN and WQALLUWI), the group buffer pool qualifier (WQALGBPN), and the read log fields are used.
   WQALLN5: When specified, the dynamic statement cache fields (WQALFFLD, WQALFVAL, WQALSTNM, and WQALSTID) are used for READS calls for IFCID 0316 and 0317.
   WQALLN6: When specified, the end-user identification fields (WQALEUID, WQALEUTX, and WQALEUWS) are used for READS calls for IFCID 0124, 0147, 0148, 0149, and 0150.

(reserved) (hex offset 2; signed two-byte integer)
   Reserved.

WQALEYE (hex offset 4; character, 4 bytes)
   Eye catcher for block: WQAL.

WQALACE (hex offset 8; address)
   Thread identification token value. This value indicates the specific thread wanted; binary zero if it is not to be used.

WQALAIT2 (hex offset C; address)
   Reserved.

WQALPLAN (hex offset 10; character, 8 bytes)
   Plan name; binary zero if it is not to be used.

WQALAUTH (hex offset 18; character, 8 bytes)
   The current primary authorization ID; binary zero if it is not to be used.

WQALOPID (hex offset 20; character, 8 bytes)
   The original authorization ID; binary zero if it is not to be used.

WQALCONN (hex offset 28; character, 8 bytes)
   Connection name; binary zero if it is not to be used.

WQALCORR (hex offset 30; character, 12 bytes)
   Correlation ID; binary zero if it is not to be used.

WQALREST (hex offset 3C; character, 32 bytes)
   Resource token for a specific lock request when IFCID 0149 is specified. The field must be set by the monitor program. The monitor program can obtain the information from a previous READS request for IFCID 0150 or from a READS request for IFCID 0147 or 0148.

WQALHASH (hex offset 5C; hex, 4 bytes)
   Resource hash value that specifies the resource token for a specific lock request when IFCID 0149 is specified. The field must be set by the monitor program. The monitor program can obtain the information from a previous READS request for IFCID 0150 or possibly from a READS request for IFCID 0147 or 0148.

WQALASID (hex offset 60; hex, 2 bytes)
   ASID that specifies the address space of the desired process.
WQALFOPT (hex offset 62; hex, 1 byte)
   Filtering options for IFCID 0150:
   X'20': Return lock information for resources that have local or global waiters.
   X'40': Return lock information only for resources that have one or more interested agents.
   X'80': Return lock information only for resources that have waiters.

WQALFLGS (hex offset 63; hex, 1 byte)
   Options for 147/148 records:
   X'40': Active allied agent 147/148 records are not written for this READS request. DDF/RRSAF rollup records are written if WQAL148NR = OFF.
   X'80': DDF/RRSAF rollup 147/148 records are not written for this READS request. Active allied agent records are written if WQAL148NA = OFF. (See note 1 at the end of this table.)

WQALLUWI (hex offset 64; character, 24 bytes)
   LUWID (logical unit of work ID) of the thread wanted; binary zero if it is not to be used.

WQALLOCN (hex offset 7C; character, 16 bytes)
   Location name. If specified, data is returned only for distributed agents that originate at the specified location.
   Example: If site A is located where the IFI program is running and SITE A is specified in the WQALLOCN, database access threads and distributed allied agents that execute at SITE A are reported. Local non-distributed agents are not reported.
   Example: If site B is specified in the WQALLOCN and the IFI program is still executing at site A, information on database access threads that execute in support of a distributed allied agent at site B is reported.
   Example: If WQALLOCN is not specified, information on all threads that execute at SITE A (the site where the IFI program executes) is returned. This includes local non-distributed threads, local database access agents, and local distributed allied agents.

WQALLTYP (hex offset 8C; character, 3 bytes)
   Specifies the type of log data access. 'CI ' must be specified to obtain log record control intervals (CIs).
WQALLMOD (hex offset 8F; character, 1 byte)
   The mode of log data access:
   'D': Return the direct log record specified in WQALLRBA if the IFCID is 0306.
   'F': Access the first log CI of the restarted DB2 system if the IFCID is 0129. One CI is returned, and the WQALLNUM and WQALLRBA fields are ignored. If the IFCID is 0306, return the first set of qualified log records.
   'H': Return the highest LRSN or log RBA in the active log. The value is returned in the field IFCAHLRS in the IFCA.
   'N': Return the next set of qualified log records.
   'P': The last partial CI written to the active log is given to the Log Capture Exit. If the last CI written to the log was not full, the RBA of the log CI given to the Log Exit is returned in the IFCAHLRS field of the IFI communication area (IFCA). Otherwise, an RBA of zero is returned in IFCAHLRS. This option ignores WQALLRBA and WQALLNUM.
   'R': Access the CIs specified by the value in the WQALLRBA field:
      v If the requested number of complete CIs (as specified in WQALLNUM) are currently available, those CIs are returned. If fewer than the requested number of complete CIs are available, IFI returns as many complete CIs as are available.
      v If the WQALLRBA value is beyond the end of the active log, IFI returns a return code of X'0000000C' and a reason code of X'00E60855'. No records are returned.
      v If no complete CIs exist beyond the WQALLRBA value, IFI returns a return code of X'0000000C' and a reason code of X'00E60856'. No records are returned.
   'T': Terminate the log position that is held to anticipate a future mode 'N' call.

WQALLNUM (hex offset 90; hex, 2 bytes)
   The number of log CIs to be returned. The valid range is X'0001' to X'0007'.

WQALCDCD (hex offset 92; character, 1 byte)
   Data description request flag:
   'A': Indicates that a data description is returned only the first time a DATA request is issued from the region or when it has changed for a given table. This is the default.
   'N': Indicates that a data description is not returned.
   'Y': Indicates that a data description is returned for each table in the list for every new request.

(reserved) (hex offset 93; 1 byte)
   Reserved.

WQALLRBA (hex offset 94; hex, 8 bytes)
   If the IFCID is 0129, this is the starting log RBA of the CI to be returned. The CI starting log RBA value must end in X'000'. The RBA value must be right-justified. If the IFCID is 0306, this is the log RBA or LRSN to be used in mode 'F'.
WQALGBPN (hex offset 9C; character, 8 bytes)
   Group buffer pool name for IFCID 0254; buffer pool name for IFCID 0199. To specify a single buffer pool or group buffer pool, specify the buffer pool name in hexadecimal, followed by hexadecimal blanks. Example: To specify buffer pool BP1, put X'C2D7F14040404040' in this field. To specify more than one buffer pool or group buffer pool, use the pattern-matching character X'00' in any position in the buffer pool name. X'00' indicates that any character can appear in that position, and in all positions that follow. Example: If you put X'C2D7F10000000000' in this field, you indicate that you want data for all buffer pools whose names begin with BP1, so IFI collects data for BP1, BP10 through BP19, and BP16K0 through BP16K9. Example: If you put X'C2D700F100000000' in this field, you indicate that you want data for all buffer pools whose names begin with BP, so IFI collects data for all buffer pools. IFI ignores X'F1' in position four because it occurs after the first X'00'.

WQALLCRI (hex offset A4; hex, 1 byte)
   Log record selection criteria:
   '00': Indicates that DB2CDC and UR control log records are returned.

WQALLOPT (hex offset A5; hex, 1 byte)
   Processing options relating to decompression:
   '00': Indicates that decompression should not occur.
   '01': Indicates that log records are decompressed if they are compressed.
WQALFLTR (hex offset A6; hex, 1 byte)
   For an IFCID 0316 request, WQALFLTR identifies the filter method:
   X'00': Indicates no filtering. This value tells DB2 to return information for as many cached statements as fit in the return area.
   X'01': Indicates that DB2 returns information about the cached statements that have the highest values for a particular statistics field. The statistics field is specified in WQALFFLD. DB2 returns information for as many statements as fit in the return area. Example: If the return area is large enough for information about 10 statements, the statements with the ten highest values for the specified statistics field are reported.
   X'02': Indicates that DB2 returns information about the cached statements that exceed a threshold value for a particular statistics field. The name of the statistics field is specified in WQALFFLD. The threshold value is specified in WQALFVAL. DB2 returns information for as many qualifying statements as fit in the return area.
   X'04': Indicates that DB2 returns information about a single cached statement. The application provides the four-byte cached statement identifier in field WQALSTID. An IFCID 0316 request with this qualifier is intended for use with IFCID 0172 or IFCID 0196, to obtain information about the statements that are involved in a timeout or deadlock.
   For an IFCID 0317 request, WQALFLTR identifies the filter method:
   X'04': Indicates that DB2 returns information about a single cached statement. The application provides the four-byte cached statement identifier in field WQALSTID. An IFCID 0317 request with this qualifier is intended for use with IFCID 0172 or IFCID 0196, to obtain information about the statements that are involved in a timeout or deadlock.
   For an IFCID 0306 request, WQALFLTR indicates whether DB2 merges log records in a data sharing environment:
   X'00': Indicates that DB2 merges log records from data sharing members.
   X'03': Indicates that DB2 does not merge log records from data sharing members.
Table 302. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALFFLD (hex offset A7; character, 1 byte)
   For an IFCID 0316 request, when WQALFLTR is X'01' or X'02', this field specifies the statistics field that is used to determine the cached statements about which DB2 reports. You can enter the following values:
   'A'  The accumulated elapsed time (QW0316AE). This option is valid only when WQALFLTR=X'01'.
   'B'  The number of buffer reads (QW0316NB).
   'C'  The accumulated CPU time (QW0316CT). This option is valid only when WQALFLTR=X'01'.
   'E'  The number of executions of the statement (QW0316NE).
   'G'  The number of GETPAGE requests (QW0316NG).
   'I'  The number of index scans (QW0316NI).
   'L'  The number of parallel groups (QW0316NL).
   'P'  The number of rows processed (QW0316NP).
   'R'  The number of rows examined (QW0316NR).
   'S'  The number of sorts performed (QW0316NS).
   'T'  The number of table space scans (QW0316NT).
   'W'  The number of buffer writes (QW0316NW).
   'X'  The number of times that a RID list was not used because the number of RIDs would have exceeded one or more internal DB2 limits (QW0316RT).
   'Y'  The number of times that a RID list was not used because not enough storage was available (QW0316RS).
   '1'  The accumulated wait time for synchronous I/O (QW0316W1). This option is valid only when WQALFLTR=X'01'.
   '2'  The accumulated wait time for lock and latch requests (QW0316W2). This option is valid only when WQALFLTR=X'01'.
   '3'  The accumulated wait time for a synchronous execution unit switch (QW0316W3). This option is valid only when WQALFLTR=X'01'.
   '4'  The accumulated wait time for global locks (QW0316W4). This option is valid only when WQALFLTR=X'01'.
   '5'  The accumulated wait time for read activity by another thread (QW0316W5). This option is valid only when WQALFLTR=X'01'.
   '6'  The accumulated wait time for write activity by another thread (QW0316W6). This option is valid only when WQALFLTR=X'01'.
Table 302. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALFVAL (hex offset A8; signed 4-byte integer)
   For an IFCID 0316 request, when WQALFLTR is X'02', this field and WQALFFLD determine the cached statements about which DB2 reports. To be eligible for reporting, a cached statement must have a value for WQALFFLD that is no smaller than the value that you specify in WQALFVAL. DB2 reports information on as many eligible statements as fit in the return area.
WQALSTNM (hex offset AC; character, 16 bytes)
   For an IFCID 0317 request, when WQALFLTR is not X'04', this field specifies the name of a cached statement about which DB2 reports. This is a name that DB2 generates when it caches the statement. To obtain this name, issue a READS request for IFCID 0316. The name is in field QW0316NM. This field and WQALSTID uniquely identify a cached statement.
WQALSTID (hex offset BC; unsigned 4-byte integer)
   For an IFCID 0316 or IFCID 0317 request, this field specifies the ID of a cached statement about which DB2 reports. DB2 generates this ID when it caches the statement. To obtain the ID, use the following options:
   v For an IFCID 0317 request, when WQALFLTR is not X'04', obtain this ID by issuing a READS request for IFCID 0316. The ID is in field QW0316TK. This field and WQALSTNM uniquely identify a cached statement.
   v For an IFCID 0316 or IFCID 0317 request, when WQALFLTR is X'04', obtain this ID by issuing a READS request for IFCID 0172 or IFCID 0196. The ID is in field QW0172H9 (cached statement ID for the holder in a deadlock), QW0172W9 (cached statement ID for the waiter in a deadlock), or QW0196H9 (cached statement ID of the holder in a timeout). This field uniquely identifies a cached statement.
WQALEUID (hex offset C0; character, 16 bytes)
   The end user's workstation user ID. This value can be different from the authorization ID that is used to connect to DB2. This field contains binary zeroes if the client does not supply this information.
WQALEUTX (hex offset D0; character, 32 bytes)
   The name of the transaction or application that the end user is running. This value identifies the application that is currently running, not the product that is used to run the application. This field contains binary zeroes if the client does not supply this information.
WQALEUWS (hex offset F0; character, 18 bytes)
   The end user's workstation name. This value can be different from the authorization ID used to connect to DB2. This field contains binary zeroes if the client does not supply this information.
Note:
1. The only valid filters for DDF/RRSAF 147/148 rollup records are WQALEUID, WQALEUTX, and WQALEUWS. For a 147/148 request, DDF/RRSAF records are not processed if any of the following WQAL fields are not X'00':
   v WQALACE
   v WQALAUTH
   v WQALOPID
   v WQALPLAN
   v WQALCORR
   v WQALLUWI
   v WQALLOCN
   v WQALASID
   v WQALCONN
Important: If your monitor program does not initialize the qualification area, the READS request is denied.
Which qualifications are used for READS requests issued through IFI?
Not all qualifications are used for all IFCIDs. Table 303 lists the qualification fields that are used for each IFCID.
Table 303. Qualification fields for IFCIDs

These IFCIDs...          Are allowed to use these qualification fields
0124, 0147, 0148, 0150   WQALACE, WQALAIT2, WQALPLAN(1), WQALAUTH(1), WQALOPID(1), WQALCONN(1), WQALCORR(1), WQALASID, WQALLUWI(1), WQALLOCN(1), WQALEUID, WQALEUTX, WQALEUWS
0129                     WQALLTYP, WQALLMOD, WQALLRBA, WQALLNUM
0149                     WQALREST, WQALHASH
0150                     WQALFOPT
0185                     WQALCDCD
0254                     WQALGBPN(2)
0306                     WQALFLTR, WQALLMOD, WQALLRBA, WQALLCRI, WQALLOPT
0316                     WQALFLTR, WQALFFLD, WQALFVAL, WQALSTID
0317                     WQALFLTR, WQALSTNM, WQALSTID
Notes:
1. DB2 allows you to partially qualify a field and fill the rest of the field with binary zero. For example, the 12-byte correlation value for a CICS thread contains the 4-character CICS transaction code in positions 5-8. Assuming a CICS transaction code of AAAA, the following hexadecimal qual-area correlation qualification can be used to find the first transaction with a correlation value of AAAA in positions 5-8: X'00000000C1C1C1C100000000'.
2. X'00' in this field indicates a pattern-matching character. X'00' in any position of the field indicates that IFI collects data for buffer pools whose names contain any character in that position and all following positions.
       v Usage information by LPAR, DB2 subsystem, or DB2 address space
0002   Statistical data on the database services address space.
0106   Static system parameters.
0124   An active SQL snapshot that provides status information about:
       v The process
       v The SQL statement text
       v The relational data system input parameter list (RDI) block
       v Certain bind and locking information
       You can obtain a varying amount of data because the request requires the process to be connected to DB2, have a cursor table allocated (RDI and status information is provided), and be active in DB2 (SQL text is provided if available). The SQL text that is provided does not include the SQL host variables.
       For dynamic SQL, IFI provides the original SQL statement. The RDISTYPE field contains the actual SQL function taking place. For example, for a SELECT statement, the RDISTYPE field can indicate that an open cursor, fetch, or other function occurred. For static SQL, you can see the DECLARE CURSOR statement, and the RDISTYPE indicates the function. The RDISTYPE field is mapped by mapping macro DSNXRDI.
0129   Returns one or more VSAM control intervals (CIs) that contain DB2 recovery log records. For more information about using IFI to return these records for use in remote site recovery, see Appendix C, Reading log records, on page 1115.
0147   An active thread snapshot that provides a status summary of processes at a DB2 thread or non-thread level.
0148   An active thread snapshot that provides more detailed status of processes at a DB2 thread or non-thread level.
0149   Information that indicates who (the thread identification token) is holding locks and waiting for locks on a particular resource and hash token. The data is in the same format as IFCID 0150.
0150   All the locks held and waited on by a given user or owner (thread identification token).
0185   Data descriptions for each table for which captured data is returned on this DATA request. IFCID 0185 data is only available through a propagation exit routine that is triggered by DB2.
0199   Information about buffer pool usage by DB2 data sets. DB2 reports this information for an interval that you specify in the DATASET STATS TIME field of installation panel DSNTIPN. At the beginning of each interval, DB2 resets these statistics to 0.
0202   Dynamic system parameters.
0217   Storage detail record for the DBM1 address space.
0225   Storage summary record for the DBM1 address space.
0230   Global statistics for data sharing.
0234   User authorization information.
0254   Group buffer pool usage in the data sharing group.
0306   Returns compressed or decompressed log records in either a data sharing or non-data-sharing environment. For IFCID 0306 requests, your program's return area must reside in ECSA key 7 storage, with the IFI application program running in key 0 supervisor state. The IFI application program must set the eye catcher to I306 before making the IFCID 0306 call. See Instrumentation facility communication area (IFCA) on page 1180 for more information about the IFCA and what is expected of the monitor program.
0316   Returns information about the contents of the dynamic statement cache. The IFI application can request information for all statements in the cache, or provide qualification parameters to limit the data returned. DB2 reports the following information about a cached statement:
       v A statement name and ID that uniquely identify the statement
       v If IFCID 0318 is active, performance statistics for the statement
       v The first 60 bytes of the statement text
0317   Returns the complete text of an SQL statement in the dynamic statement cache and the PREPARE attributes string. You must provide the statement name and statement ID from IFCID 0316 output. For more information about using IFI to obtain information about the dynamic statement cache, see Using READS calls to monitor the dynamic statement cache.
For more information about IFCID field descriptions, see the mapping macros in prefix.SDSNMACS. See also DB2 trace on page 1195 and Appendix D, Interpreting DB2 trace output, on page 1139 for additional information.
For a statement with unexpected statistics values:
a. Obtain the statement name and statement ID from the IFCID 0316 data.
b. Set up the qualification area for a READS call for IFCID 0317, as described in Table 302 on page 1165.
c. Set up the IFCID area to request data for IFCID 0317.
d. Issue a READS call for IFCID 0317 to get the entire text of the statement.
e. Obtain the statement text from the return area.
f. Use the statement text to execute an SQL EXPLAIN statement.
g. Fetch the EXPLAIN results from the PLAN_TABLE.
9. Issue an IFI COMMAND call to stop monitor trace class 1.
10. Issue an IFI COMMAND call to stop performance trace class 30 for IFCID 0318.

An IFI program that monitors deadlocks and timeouts of cached statements should include these steps:
1. Acquire and initialize storage areas for common IFI communication areas.
2. Issue an IFI COMMAND call to start monitor trace class 1. This step lets you make READS calls for IFCID 0316 and IFCID 0317.
3. Issue an IFI COMMAND call to start performance trace class 30 for IFCID 0318. This step enables statistics collection for statements in the dynamic statement cache. See Controlling collection of dynamic statement cache statistics with IFCID 0318 for information on when you should start a trace for IFCID 0318.
4. Start performance trace class 3 for IFCID 0172 to monitor deadlocks, or performance trace class 3 for IFCID 0196 to monitor timeouts.
5. Put the IFI program into a wait state. During this time, SQL applications in the subsystem execute dynamic SQL statements by using the dynamic statement cache.
6. Resume the IFI program when a deadlock or timeout occurs.
7. Issue a READA request to obtain IFCID 0172 or IFCID 0196 trace data.
8. Obtain the cached statement ID of the statement that was involved in the deadlock or timeout from the IFCID 0172 or IFCID 0196 trace data. Using the statement ID, set up the qualification area for a READS call for IFCID 0316 or IFCID 0317, as described in Table 302 on page 1165 (a sketch of steps 8 through 10 follows this list).
9. Set up the IFCID area to request data for IFCID 0316 or IFCID 0317.
10. Issue an IFI READS call to retrieve the qualifying cached SQL statement.
11. Examine the contents of the return area.
12. Issue an IFI COMMAND call to stop monitor trace class 1.
13. Issue an IFI COMMAND call to stop performance trace class 30 for IFCID 0318 and performance trace class 3 for IFCID 0172 or IFCID 0196.
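For reference, steps 8 through 10 might be coded as in the following assembler sketch. This is not a complete program: the labels MYIFCA, RETAREA, IFCIDA, QUALAREA, and STMTID are illustrative names chosen here, error checking is omitted, and the qualification area is assumed to be initialized (including its length field) as this appendix requires. The WQAL field names are the ones that mapping macro DSNDWQAL provides.

*        STEP 8: QUALIFY THE READS CALL BY THE CACHED STATEMENT ID
*        OBTAINED FROM THE IFCID 0172 OR IFCID 0196 TRACE DATA.
         LA    R4,QUALAREA         ADDRESS THE QUALIFICATION AREA
         USING WQAL,R4             MAP IT WITH DSNDWQAL
         MVI   WQALFLTR,X'04'      FILTER: SINGLE CACHED STATEMENT
         MVC   WQALSTID,STMTID     ID FROM QW0172H9/QW0172W9/QW0196H9
*        STEPS 9 AND 10: REQUEST IFCID 0317 AND ISSUE THE READS CALL.
         CALL  DSNWLI,(READS,MYIFCA,RETAREA,IFCIDA,QUALAREA),VL
*        STEP 11: IF IFCARC1 IS ZERO, THE STATEMENT TEXT IS IN RETAREA.
READS    DC    CL8'READS'          IFI FUNCTION NAME
STMTID   DS    F                   CACHED STATEMENT ID (SET IN STEP 8)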
If you issue a READS call for IFCID 0316 while IFCID 0318 is inactive, DB2 returns identifying information for all statements in the cache, but returns 0 in all the IFCID 0316 statistics counters. When you stop or start the trace for IFCID 0318, DB2 resets the IFCID 0316 statistics counters for all statements in the cache to 0.
ifca
   Contains information about the OPn destination and the ownership token value (IFCAOWNR) at call initiation. After the READA call completes, the IFCA contains the return code, reason code, the number of bytes moved to the return area, the number of bytes not moved to the return area if the area was too small, and the number of records lost. See Common communication areas for IFI calls on page 1180 for a description of the IFCA.
return-area
   Contains the varying-length records that are returned by the instrumentation facility. If the return area is too small, as much of the output as fits (complete varying-length records only) is placed into the area. Reason code 00E60802 is returned when the monitor program's return area is not large enough to hold the returned data. See Return area on page 1184 for a description of the return area.

IFI allocates up to eight OP buffers upon request from storage above the line in extended CSA. IFI uses these buffers to store trace data until the owning application performs a READA request to transfer the data from the OP buffer to the application's return area. An application becomes the owner of an OP buffer when it issues a START TRACE command and specifies a destination of OPn or OPX. Each buffer can be of size 256 KB to 16 MB. IFI allocates a maximum of 16 MB of storage for each of the eight OP buffers. The default monitor buffer size is determined by the MONSIZE parameter in the DSNZPARM module.
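The READA call itself is brief; as described above, it passes only the function name, the IFCA, and the return area. The following is a minimal sketch, assuming the program already owns an OP buffer from an earlier START TRACE, has addressability to the DSNDIFCA mapping, and supplies its own error routine (READAERR here is an illustrative label):

*        DRAIN THE OP BUFFER THAT THIS PROGRAM OWNS. DB2 FILLED IN
*        IFCAOPN WHEN THE TRACE WAS STARTED TO AN OPX DESTINATION.
         CALL  DSNWLI,(READA,MYIFCA,RETAREA),VL
         CLC   IFCARC1,=F'0'       DID THE CALL SUCCEED?
         BNE   READAERR            NO - EXAMINE IFCARC2 (AND IFCARLC)
READA    DC    CL8'READA'          IFI FUNCTION NAME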
For example, the monitor program can pass a specific online performance monitor destination (OP1, for example) on the START TRACE command to start asynchronous trace data collection. If the monitor program passes a generic destination of OPX, the instrumentation facility assigns the next available buffer destination slot and returns the OPn destination name to the monitor program. To avoid conflict with another trace or program that might be using an OP buffer, you should use the generic OPX specification when you start tracing. You can then direct the data to the destination specified by the instrumentation facility with the START or MODIFY TRACE commands. There are times, however, when you should use a specific OPn destination initially:
v When you plan to start numerous asynchronous traces to the same OPn destination. To do this, you must specify the OPn destination in your monitor program. The OPn destination that is started is returned in the IFCA.
v When the monitor program uses a particular monitor class (defined as available) together with a particular destination (for example, OP7) to indicate that certain IFCIDs are started. An operator can use the DISPLAY TRACE command to determine which monitors are active and what events are being traced.

Buffering data: To have trace data go to the OPn buffer, you must start the trace from within the monitor program. After the trace is started, DB2 collects and buffers the information as it occurs. The monitor program can then issue a read asynchronous (READA) request to move the buffered data to the monitor program. The buffering technique ensures that the data is not being updated by other users while the buffer is being read by the READA caller. For more information, see Data integrity and IFI on page 1189.

Possible data loss: You can activate all traces and have the trace data buffered. However, this plan is definitely not recommended because performance might suffer and data might be lost. Data loss occurs when the buffer fills before the monitor program can obtain the data. DB2 does not wait for the buffer to be emptied but, instead, informs the monitor program on the next READA request (in the IFCARLC field of the IFCA) that the data has been lost. The user must have a high enough dispatching priority that the application can be posted and then issue the READA request before significant data is lost.
Make sure that the monitor program empties the buffer on a timely basis. One method is to set a timer to wake up and process the data. Another method is to use the buffer information area on a START TRACE command request, shown in Table 301 on page 1162, to specify an ECB address to post when a specified number of bytes have been buffered.
The write function must specify an IFCID area. The data that is written is defined and interpreted by your site.

ifca
   Contains information regarding the success of the call. See Instrumentation facility communication area (IFCA) on page 1180 for a description of the IFCA.
output-area
   Contains the varying-length record of the monitor program's data to be written. See Output area on page 1185 for a description of the output area.
ifcid-area
   Contains the IFCID of the record to be written. Only the IFCIDs that are defined to the write function (see Table 304 on page 1180) are allowed. If an invalid IFCID is specified or the IFCID is not active (not started by a TRACE command), no data is written. See Table 304 for IFCIDs that can be used by the write function.
Table 304. Valid IFCIDs for WRITE function

IFCID (decimal)  IFCID (hex)  Trace type      Class  Comment
0146             0092         Auditing        9      Write to IFCID 146
0151             0097         Accounting      4      Write to IFCID 151
0152             0098         Statistics      2      Write to IFCID 152
0153             0099         Performance     1      Background events and write to IFCID 153
0154             009A         Performance     15     Write to IFCID 154
0155             009B         Monitoring      4      Write to IFCID 155
0156             009C         Serviceability  6      Reserved for user-defined serviceability trace
See IFCID area on page 1185 for a description of the IFCID area.
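For illustration, a WRITE request for IFCID 0146 might be coded as in this sketch; it assumes that audit trace class 9 has been started (otherwise no data is written) and that MYIFCA and OUTAREA are the program's own areas:

*        WRITE A SITE-DEFINED RECORD TO IFCID 0146 (X'0092').
         CALL  DSNWLI,(WRITE,MYIFCA,OUTAREA,IFCIDA),VL
WRITE    DC    CL8'WRITE'          IFI FUNCTION NAME
IFCIDA   DC    X'0006'             IFCID AREA LENGTH: ONE IFCID PLUS 4
         DC    X'0000'             RESERVED
         DC    X'0092'             IFCID 0146 IN HEXADECIMAL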
Table 305. Instrumentation facility communication area (IFCA). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCAID (hex offset 4; character, 4 bytes)
   Eye catcher for block, IFCA.
IFCAOWNR (hex offset 8; character, 4 bytes)
   Owner field, provided by the monitor program. This value is used to establish ownership of an OPn destination and to verify that a requester can obtain data from the OPn destination. This is not the same as the owner ID of a plan.
IFCARC1 (hex offset C; 4-byte signed integer)
   Return code for the IFI call. Binary zero indicates a successful call. For a return code of 8 from a COMMAND request, the IFCAR0 and IFCAR15 values contain more information.
IFCARC2 (hex offset 10; 4-byte signed integer)
   Reason code for the IFI call. Binary zero indicates a successful call. See Part 3 of DB2 Codes for information about reason codes.
IFCABM (hex offset 14; 4-byte signed integer)
   Number of bytes moved to the return area. A non-zero value in this field indicates information was returned from the call. Only complete records are moved to the monitor program area.
IFCABNM (hex offset 18; 4-byte signed integer)
   Number of bytes that did not fit in the return area and still remain in the buffer. Another READA request will retrieve that data. Certain IFI requests return a known quantity of information; other requests terminate when the return area is full.
(hex offset 1C)
   Reserved.
IFCARLC (hex offset 20; 4-byte signed integer)
   Indicates the number of records lost prior to a READA call. Records are lost when the OP buffer storage is exhausted before the contents of the buffer are transferred to the application program via an IFI READA request. Records that do not fit in the OP buffer are not written and are counted as records lost.
IFCAOPN (hex offset 24; character, 4 bytes)
   Destination name used on a READA request. This field identifies the buffer requested, and is required on a READA request. Your monitor program must set this field. The instrumentation facility fills in this field on a START TRACE to an OPn destination from a monitor program. If your monitor program started multiple OPn destination traces, the first one is placed in this field. If your monitor program did not start an OPn destination trace, the field is not modified. The OPn destination and owner ID are used on subsequent READA calls to find the asynchronous buffer.
IFCAOPNL (hex offset 28; 2-byte signed integer)
   Length of the OPn destinations started. On any command entered by IFI, the value is set to X'0004'. If an OPn destination is started, the length is incremented to include all OPn destinations started.
(hex offset 2A; 2-byte signed integer)
   Reserved.
IFCAOPNR (hex offset 2C)
   Space to hold the names of the OPn destinations that were started.
Table 305. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCATNOL (hex offset 4C; 2-byte signed integer)
   Length of the trace numbers plus 4. On any command entered by IFI the value is set to X'0004'. If a trace is started, the length is incremented to include all trace numbers started.
(hex offset 4E; 2-byte signed integer)
   Reserved.
IFCATNOR (hex offset 50; character, 8 fields of 2 bytes each)
   Space to hold up to eight EBCDIC trace numbers that were started. The trace number is required if the MODIFY TRACE command is used on a subsequent call.
IFCADL (hex offset 60; hex, 2 bytes)
   Length of diagnostic information.
(hex offset 62; hex, 2 bytes)
   Reserved.
Table 305. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCADD (hex offset 64; character, 80 bytes)
   Diagnostic information:
   v IFCAFCI, offset 64, 6 bytes: Contains the RBA of the first CI in the active log if IFCARC2 is 00E60854. See Reading specific log records (IFCID 0129) on page 1126 for more information.
   v IFCAR0, offset 6C, 4 bytes: For COMMAND requests, this field contains -1 or the return code from the component that executed the command.
   v IFCAR15, offset 70, 4 bytes: For COMMAND requests, this field contains one of the following values:
     0   The command completed successfully.
     4   Internal error.
     8   The command was not processed because of errors in the command.
     12  The component that executed the command returned the return code in IFCAR0.
     16  An abend occurred during command processing. Command processing might be incomplete, depending on when the error occurred. See IFCAR0 for more information.
     20  Response buffer storage was not available. The command completed, but no response messages are available. See IFCAR0 for more information.
     24  Storage was not available in the DSNMSTR address space. The command was not processed.
     28  CSA storage was not available. If a response buffer is available, the command might have partially completed. See IFCAR0 for more information.
     32  The user is not authorized to issue the command. The command was not processed.
   v IFCAGBPN, offset 74, 8 bytes: The group buffer pool name in error if IFCARC2 is 00E60838 or 00E60860.
   v IFCABSRQ, offset 88, 4 bytes: The size of the return area required when the reason code is 00E60864.
   v IFCAHLRS, offset 8C, 6 bytes: This field can contain the highest LRSN or log RBA in the active log (when WQALLMOD is 'H'). Or, it can contain the RBA of the log CI given to the Log Exit when the last CI written to the log was not full, or an RBA of zero (when WQALLMOD is 'P').
IFCAGRSN (hex offset 98; 4-byte signed integer)
   Reason code for the situation in which an IFI call requests data from members of a data sharing group, and not all the data is returned from group members. See Part 3 of DB2 Codes for information about reason codes.
Table 305. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCAGBM (hex offset 9C; 4-byte signed integer)
   Total length of data that was returned from other data sharing group members and fit in the return area.
IFCAGBNM (hex offset A0; 4-byte signed integer)
   Total length of data that was returned from other data sharing group members and did not fit in the return area.
IFCADMBR (hex offset A4; character, 8 bytes)
   Name of a single data sharing group member on which an IFI request is to be executed. Otherwise, this field is blank. If this field contains a member name, DB2 ignores field IFCAGLBL.
IFCARMBR (hex offset AC; character, 8 bytes)
   Name of the data sharing group member from which data is being returned. DB2 sets this field in each copy of the IFCA that it places in the return area, not in the IFCA of the application that makes the IFI request.
Return area
You must specify a return area on all READA, READS, and COMMAND requests. IFI uses the return area to return command responses, synchronous data, and asynchronous data to the monitor program. Table 306 describes the return area.
Table 306. Return area

Hex offset 0 (signed 4-byte integer)
   The length of the return area, plus 4. This must be set by the monitor program. No limit exists for the length of READA or READS return areas.
Hex offset 4 (character, varying-length)
   DB2 places as many varying-length records as it can fit into the area following the length field. The monitor program's length field is not modified by DB2. Each varying-length trace record has a 2-byte or 4-byte length field, depending on the high-order bit. If the high-order bit is on, the length field is 4 bytes. If the high-order bit is off, the length field is the first 2 bytes; in this case, the third byte indicates whether the record is spanned, and the fourth byte is reserved. After a COMMAND request, the last character in the return area is a new-line character (X'15').
Note: For more information about reading log records, see Appendix C, Reading log records, on page 1115
The destination header for data that is returned on a READA or READS request is mapped by macro DSNDQWIW or the header QW0306OF for IFCID 306 requests. Please refer to prefix.SDSNIVPD(DSNWMSGS) for the format of the trace record
and its header. The size of the return area for READA calls should be at least as large as the size specified with the BUFSIZE keyword on the START TRACE command. Data returned on a COMMAND request consists of varying-length segments (X'xxxxrrrr', where the length is 2 bytes and the next 2 bytes are reserved), followed by the message text. More than one record can be returned. The last character in the return area is a new-line character (X'15'). The monitor program must compare the number of bytes moved (IFCABM in the IFCA) to the sum of the record lengths to determine when all records have been processed.
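A monitor program can apply these rules with a loop along the following lines. This is a sketch only: it assumes addressability to the IFCA (DSNDIFCA) and to RETAREA, PROCREC stands for the program's own record-processing routine, and spanned records are not reassembled here.

*        WALK THE VARYING-LENGTH RECORDS IN THE RETURN AREA.
         L     R5,IFCABM           NUMBER OF BYTES MOVED BY DB2
         LA    R6,RETAREA+4        FIRST RECORD, PAST THE LENGTH FIELD
NEXTREC  LTR   R5,R5               ANY BYTES LEFT TO PROCESS?
         BNP   ALLDONE             NO - ALL RECORDS PROCESSED
         TM    0(R6),X'80'         HIGH-ORDER BIT ON?
         BO    LEN4                YES - 4-BYTE LENGTH FIELD
         LH    R7,0(,R6)           NO - LENGTH IS THE FIRST 2 BYTES
         B     HAVELEN             (THIRD BYTE = SPANNED-RECORD FLAG)
LEN4     L     R7,0(,R6)           4-BYTE LENGTH FIELD
         N     R7,=X'7FFFFFFF'     CLEAR THE HIGH-ORDER FLAG BIT
HAVELEN  BAL   R14,PROCREC         PROCESS THE RECORD AT 0(R6)
         AR    R6,R7               STEP TO THE NEXT RECORD
         SR    R5,R7               COUNT DOWN THE BYTES MOVED
         B     NEXTREC
ALLDONE  DS    0H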
IFCID area
You must specify the IFCID area on READS and WRITE requests. The IFCID area contains the IFCIDs to process. Table 308 shows the IFCID area.
Table 308. IFCID area

Hex offset 0 (signed two-byte integer)
   Length of the IFCID area, plus 4. The length can range from X'0006' to X'0044'. For WRITE requests, only one IFCID is allowed, so the length must be set to X'0006'. For READS requests, you can specify multiple IFCIDs. If you do, be aware that the returned records can be in a different sequence than requested and that some records can be missing.
Hex offset 2 (signed two-byte integer)
   Reserved.
Hex offset 4 (hex, n fields of 2 bytes each)
   The IFCIDs to be processed. For a READS request, each IFCID is placed contiguous to the previous IFCID. The IFCIDs start at X'0000' and progress upward. You can use X'FFFF' to signify the last IFCID in the area to process.
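For example, an IFCID area for a READS request that asks for IFCIDs 0147 and 0148 could be coded as follows (the label IFCIDA is illustrative):

IFCIDA   DC    X'0008'             LENGTH: TWO IFCIDS PLUS 4
         DC    X'0000'             RESERVED
         DC    X'0093'             IFCID 0147 (THREAD SUMMARY)
         DC    X'0094'             IFCID 0148 (THREAD DETAIL)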
Output area
The output area is used on COMMAND and WRITE requests. The first two bytes contain the length of the monitor program's record to write or the DB2 command to be issued, plus 4 additional bytes. The next two bytes are reserved. You can specify any length from 10 to 4096 (X'000A0000' to X'10000000'). The rest of the area is the actual command or record text. Example: In an assembler program, a START TRACE command is formatted in the following way:
         DC    X'002A0000'         LENGTH INCLUDING LL00 + COMMAND
         DC    CL38'-STA TRACE(MON) DEST(OPX) BUFSIZE(256)'
IFCAGLBL
   Set this flag on to indicate that the READS or READA request should be sent to all members of the data sharing group.
IFCADMBR
   If you want an IFI READS, READA, or COMMAND request to be executed at a single member of the data sharing group, assign the name of the group member to this field. If you specify a name in this field, DB2 ignores IFCAGLBL. If the member name that you specify is not active when DB2 executes the IFI request, DB2 returns an error.
   Recommendation: To issue a DB2 command that does not support SCOPE(GROUP) at another member of a data sharing group, set the IFCADMBR field and issue an IFI COMMAND.
IFCARMBR
   The name of the data sharing member that generated the data that follows the IFCA. DB2 sets this value in the copy of the IFCA that it places in the requesting program's return area.
IFCAGRSN
   A reason code that DB2 sets when not all data is returned from other data sharing group members. See Part 3 of DB2 Codes for specific reason codes.
IFCAGBM
   The number of bytes of data that other members of the data sharing group return and that the requesting program's return area can contain.
IFCAGBNM
   The number of bytes of data that members of the data sharing group return but that the requesting program's return area cannot contain.

As with READA or READS requests for single DB2 subsystems, you need to issue a START TRACE command before you issue the READA or READS request. You can issue START TRACE with the parameter SCOPE(GROUP) to start the trace at all members of the data sharing group. For READA requests, specify DEST(OPX) in the START TRACE command. DB2 collects data from all data sharing members and returns it to the OPX buffer for the member from which you issue the READA request. If a new member joins a data sharing group while a trace with SCOPE(GROUP) is active, the trace starts at the new member.

After you issue a READS or READA call for all members of a data sharing group, DB2 returns data from all members in the requesting program's return area. Data from the local member is first, followed by the IFCA and data for each of the other members. Example: If the local DB2 is called DB2A, and the other two members in the group are DB2B and DB2C, the return area looks like this:
Data for DB2A
IFCA for DB2B (DB2 sets IFCARMBR to DB2B)
Data for DB2B
IFCA for DB2C (DB2 sets IFCARMBR to DB2C)
Data for DB2C
If an IFI application requests data from a single other member of a data sharing group (IFCADMBR contains a member name), the requesting program's return area
contains the data for that member but no IFCA for the member. All information about the request is in the requesting program's IFCA. Because a READA or READS request for a data sharing group can generate much more data than a READA or READS request for a single DB2, you need to increase the size of your return area to accommodate the additional data.
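For example, to run a command on member DB2B only, a program might set IFCADMBR just before the COMMAND call, as in this sketch (MYIFCA, RETAREA, and OUTAREA are the program's own areas, and OUTAREA already holds the command text):

*        DIRECT THE REQUEST TO A SINGLE DATA SHARING MEMBER.
         MVC   IFCADMBR,=CL8'DB2B' RUN THE REQUEST ON MEMBER DB2B
         CALL  DSNWLI,(COMMAND,MYIFCA,RETAREA,OUTAREA),VL
COMMAND  DC    CL8'COMMAND'        IFI FUNCTION NAME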
[Hexadecimal DFSERA10 dump not reproduced. The dump shows the return area after a READS request for IFCID 0106, with these labeled points:]
A  Length of record. The next two bytes are reserved.
B  Offset to product section standard header.
C  Offset to first data section.
D  Beginning of first data section.
E  Beginning of product section standard header.
F  IFCID (decimal 106).
Figure 166. Example of IFI return area after READS request (IFCID 106). This output was assembled by a user-written routine and printed with the DFSERA10 print program of IMS.
For more information about IFCIDs and mapping macros, see DB2 trace on page 1195 and Appendix D, Interpreting DB2 trace output, on page 1139.
[Hexadecimal DFSERA10 dump not reproduced. The dump shows the return area after a START TRACE command, with these labeled points:]

Figure 167. Example of IFI return area after a START TRACE command. This output was assembled with a user-written routine and printed with the DFSERA10 program of IMS.

Figure label  Value       Description
A             007E0000    Field entered by print program
B             0000007A    Length of return area
C             003C        Length of record (003C). The next two bytes are reserved.
D             C4E2D5E6... Beginning of first message
E             003A        Length of record. The next two bytes are reserved.
F             C4E2D5F9... Beginning of second message
The IFCABM field in the IFCA would indicate that X'00000076' ( C + E ) bytes have been moved to the return area.
v When READS and READA requests are checked for authorization, short duration locks on the DB2 catalog are obtained. When the check is made, subsequent READS or READA requests are not checked for authorization. Remember, if you are using the access control exit routine, then that routine might be controlling the privileges that the monitor trace can use. v When DB2 commands are submitted, each command is checked for authorization. DB2 database commands obtain additional locks on DB2 objects. A program can issue SQL statements through an attachment facility and DB2 commands through IFI. This environment creates the potential for an application to deadlock or time-out with itself over DB2 locks acquired during the execution of SQL statements and DB2 database commands. You should ensure that all DB2 locks acquired by preceding SQL statements are no longer held when the DB2 database command is issued. You can do this by: v Binding the DB2 plan with ACQUIRE(USE) and RELEASE(COMMIT) bind parameters v Initiating a commit or rollback to free any locks your application is holding, before issuing the DB2 command If you use SQL in your application, the time between commit operations should be short. For more information on locking, see Chapter 31, Improving concurrency, on page 813.
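For example, a plan bound with the following options acquires table space locks only as they are used and releases them at commit, so a commit issued before the IFI COMMAND call frees the locks that earlier SQL statements acquired. The plan and member names here are placeholders:

BIND PLAN(MONPLAN) MEMBER(MONPGM) ACQUIRE(USE) RELEASE(COMMIT)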
[Figure: overview of monitoring tools, grouped by batch and online use: IMS log records, DB2 trace facility, DB2 RUNSTATS utility, DB2 STOSPACE utility, DB2 EXPLAIN statement, DB2 DISPLAY command, DB2 catalog queries, CICS attachment statistics, and RMF.]
Table 310. Monitoring tools in a DB2 environment (continued)

CICS Monitoring Facility (CMF)
   Provides performance information about each CICS transaction executed. It can be used to investigate the resources used and the time spent processing transactions. Be aware that overhead is significant when CMF is used to gather performance information.
DB2 catalog queries
   Help you determine when to reorganize table spaces and indexes. See the description of the REORG utility in Part 2 of DB2 Utility Guide and Reference.
DB2 Connect
   Can monitor and report DB2 server-elapsed time for client applications that access DB2 data. See Reporting server-elapsed time on page 1021.
DB2 DISPLAY command
   Gives you information about the status of threads, databases, buffer pools, traces, allied subsystems, applications, and the allocation of tape units for the archive read process. For information about the DISPLAY BUFFERPOOL command, see Monitoring and tuning buffer pools using online commands on page 681. For information about using the DISPLAY command to monitor distributed data activity, see The DISPLAY command on page 1017. For the detailed syntax of each command, refer to Chapter 2 of DB2 Command Reference.
DB2 EXPLAIN statement
   Provides information about the access paths used by DB2. See Chapter 34, Using EXPLAIN to improve SQL performance, on page 931 and Chapter 5 of DB2 SQL Reference.
OMEGAMON
   A licensed program that integrates the function of DB2 Buffer Pool Analyzer and DB2 Performance Monitor. OMEGAMON provides performance monitoring, reporting, buffer pool analysis, and a performance warehouse, all in one tool. OMEGAMON monitors all subsystem instances across many different platforms in a consistent way. You can use OMEGAMON to analyze DB2 trace records and optimize buffer pool usage. See OMEGAMON on page 1202 for more information.
DB2 Performance Monitor (DB2 PM)
   An orderable feature of DB2 that you can use to analyze DB2 trace records. As indicated previously, OMEGAMON includes the function of DB2 PM. OMEGAMON is described under OMEGAMON on page 1202.
DB2 RUNSTATS utility
   Can report space use and access path statistics in the DB2 catalog. See Gathering monitor statistics and update statistics on page 916 and Part 2 of DB2 Utility Guide and Reference.
DB2 STOSPACE utility
   Provides information about the actual space allocated for storage groups, table spaces, table space partitions, index spaces, and index space partitions. See Part 2 of DB2 Utility Guide and Reference.
DB2 trace
   Provides DB2 performance and accounting information. It is described under DB2 trace on page 1195.
Generalized Trace Facility (GTF)
   A z/OS service aid that collects information to analyze particular situations. GTF can also be used to analyze seek times and Supervisor Call instruction (SVC) usage, and for other services. See Recording GTF trace data on page 1201 for more information.
Table 310. Monitoring tools in a DB2 environment (continued)

IMS DFSUTR20 utility
   A print utility for IMS Monitor reports.
IMS Fast Path Log Analysis utility (DBFULTA0)
   An IMS utility that provides performance reports for IMS Fast Path transactions.
IMS Performance Analyzer (IMS PA)
   A separately licensed program that can be used to produce transit time information based on the IMS log data set. It can also be used to investigate response-time problems of IMS DB2 transactions.
Resource Measurement Facility (RMF)
   An optional feature of z/OS that provides system-wide information on processor utilization, I/O activity, storage, and paging. There are three basic types of RMF sessions: Monitor I, Monitor II, and Monitor III. Monitor I and Monitor II sessions collect and report data primarily about specific system activities. Monitor III sessions collect and report data about overall system activity in terms of work flow and delay.
System Management Facility (SMF)
   A z/OS service aid used to collect information from various z/OS subsystems. This information is dumped and reported periodically, such as once a day. Refer to Recording SMF trace data on page 1199 for more information.
Tivoli Decision Support for OS/390
   Formerly known as Tivoli Performance Reporter for OS/390, this licensed program collects SMF data into a DB2 database and allows you to create reports on the data. See Tivoli Decision Support for OS/390 on page 1203.
v IMS Performance Analyzer, or its equivalent, for response-time analysis and tracking all IMS-generated requests to DB2
v IMS Fast Path Log Analysis Utility (DBFULTA0) for performance reports for IMS Fast Path transactions
In addition, the DB2 IMS attachment facility allows you to use the DB2 DISPLAY THREAD command to dynamically observe DB2 performance.
[Figure 169 shows a user-created system resources report. It lists TOTAL CPU busy (98.0%), broken down among DB2 & IRLM, IMS/CICS, QMF users, DB2 batch & utilities, others, and system available; TOTAL I/Os per second (75.5); TOTAL paging per second (6.8); and transaction response times: short 3.2 secs, medium 8.6 secs, long 15.0 secs. A MAJOR CHANGES line notes: DB2 application DEST07 moved to production.]

Figure 169. User-created system resources report
The RMF reports used to produce the information in Figure 169 were:
v The RMF CPU activity report, which lists TOTAL CPU busy and the TOTAL I/Os per second.
v The RMF paging activity report, which lists the TOTAL paging rate per second for real storage.
v The RMF work load activity report, which is used to estimate where resources are spent. Each address space or group of address spaces to be reported on separately must have different SRM reporting or performance groups. The following SRM reporting groups are considered:
  - DB2 address spaces:
    DB2 database address space (ssnmDBM1)
    DB2 system services address space (ssnmMSTR)
    Distributed data facility (ssnmDIST)
    IRLM (IRLMPROC)
  - IMS or CICS
  - TSO-QMF
  - DB2 batch and utility jobs

The CPU for each group is obtained by using the formula (A/B) × C, where:
A  is the sum of CPU and service request block (SRB) service units for the specific group
B  is the sum of CPU and SRB service units for all the groups
C  is the total processor utilization
The CPU and SRB service units must have the same coefficient. For example, if a group accumulated 1200 CPU and SRB service units, all groups together accumulated 4800, and total processor utilization was 98.0%, the group accounts for (1200/4800) × 98.0% = 24.5% of the processor. You can use a similar approach for an I/O rate distribution.

MAJOR CHANGES shows the important environment changes, such as:
v DB2 or any related software-level change
v DB2 changes in the load module for system parameters
v New applications put into production
v Increase in the number of DB2 QMF users
v Increase in batch and utility jobs
v Hardware changes

MAJOR CHANGES is also useful for discovering the reason behind different monitoring results.
DB2 trace
The information under this heading, up to Recording SMF trace data on page 1199, is General-use Programming Interface and Associated Guidance Information, as defined in Notices on page 1437. DB2's instrumentation facility component (IFC) provides a trace facility that you can use to record DB2 data and events. With the IFC, however, analysis and reporting of the trace records must take place outside of DB2. You can use OMEGAMON to format, print, and interpret DB2 trace output. You can view an online snapshot from trace records by using OMEGAMON or other online monitors. For more information on OMEGAMON, see Using IBM Tivoli OMEGAMON XE on z/OS. For the exact syntax of the trace commands see Chapter 2 of DB2 Command Reference.
If you do not have OMEGAMON, or if you want to do your own analysis of the DB2 trace output, refer to Appendix D, Interpreting DB2 trace output, on page 1139. Also consider writing your own program using the instrumentation facility interface (IFI). Refer to Appendix E, Programming for the Instrumentation Facility Interface (IFI), on page 1157 for more information on using IFI. Each trace class captures information on several subsystem events. These events are identified by many instrumentation facility component identifiers (IFCIDs). The IFCIDs are described by the comments in their mapping macros, contained in prefix.SDSNMACS, which is shipped to you with DB2.
Types of traces
DB2 trace can record six types of data: statistics, accounting, audit, performance, monitor, and global. The description of the START TRACE command in Chapter 2 of DB2 Command Reference indicates which IFCIDs are activated for the different types of trace and the classes within those trace types. For details on what information each IFCID returns, see the mapping macros in prefix.SDSNMACS. The trace records are written using GTF or SMF records. See Recording SMF trace data on page 1199 and Recording GTF trace data on page 1201 before starting any traces. Trace records can also be written to storage, if you are using the monitor trace class.
Statistics trace
The statistics trace reports information about how much the DB2 system services and database services are used. It is a system-wide trace and should not be used for chargeback accounting. Use the information the statistics trace provides to plan DB2 capacity, or to tune the entire set of active DB2 programs.

Statistics trace classes 1, 3, 4, 5, and 6 are the default classes for the statistics trace if you specify YES for SMF STATISTICS on panel DSNTIPN. If the statistics trace is started using the START TRACE command, then class 1 is the default class.
v Class 1 provides information about system services and database statistics. It also includes the system parameters that were in effect when the trace was started.
v Class 3 provides information about deadlocks and timeouts.
v Class 4 provides information about exceptional conditions.
v Class 5 provides information about data sharing.
v Class 6 provides storage statistics for the DBM1 address space.

If you specified YES in the SMF STATISTICS field on installation panel DSNTIPN, the statistics trace starts automatically when you start DB2, sending class 1, 3, 4, and 5 statistics data to SMF. SMF records statistics data in both SMF type 100 and 102 records. IFCIDs 0001, 0002, 0202, and 0230 are of SMF type 100. All other IFCIDs in statistics trace classes are of SMF type 102. From installation panel DSNTIPN, you can also control the statistics collection interval (STATISTICS TIME field). The statistics trace is written on an interval basis, and you can control the exact time that statistics traces are taken.
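For example, the following command starts the statistics trace explicitly, with the same classes as the installation default set, and sends the data to SMF:

-START TRACE(STAT) CLASS(1,3,4,5,6) DEST(SMF)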
Accounting trace
The DB2 accounting trace provides information related to application programs, including such things as:
v Start and stop times
v Number of commits and aborts
v The number of times certain SQL statements are issued
v Number of buffer pool requests
v Counts of certain locking events
v Processor resources consumed
v Thread wait times for various events
v RID pool processing
v Distributed processing
v Resource limit facility statistics

DB2 trace begins collecting this data at successful thread allocation to DB2, and writes a completed record when the thread terminates or when the authorization ID changes. During CICS thread reuse, a change in the authid or transaction code initiates the sign-on process, which terminates the accounting interval and creates the accounting record. TXIDSO=NO eliminates the sign-on process when only the transaction code changes. When a thread is reused without initiating sign-on, several transactions are accumulated into the same accounting record, which can make it very difficult to analyze a specific transaction occurrence and correlate DB2 accounting with CICS accounting. However, applications that use ACCOUNTREC(UOW) or ACCOUNTREC(TASK) in the DB2ENTRY RDO definition initiate a partial sign-on, which creates an accounting record for each transaction. You can use this data to perform program-related tuning and assess and charge DB2 costs.

Accounting data for class 1 (the default) is accumulated by several DB2 components during normal execution. This data is then collected at the end of the accounting period; it does not involve as much overhead as individual event tracing. On the other hand, when you start class 2, 3, 7, or 8, many additional trace points are activated. Every occurrence of these events is traced internally by DB2 trace, but these traces are not written to any external destination. Rather, the accounting facility uses these traces to compute the additional total statistics that appear in the accounting record, IFCID 0003, when class 2 or class 3 is activated. Accounting class 1 must be active to externalize the information.

To turn on accounting for packages and DBRMs, accounting trace classes 1 and 7 must be active. Though you can turn on class 7 while a plan is being executed, accounting trace information is only gathered for packages or DBRMs executed after class 7 is activated. Activate accounting trace class 8 with class 1 to collect information about the amount of time an agent was suspended in DB2 for each executed package. If accounting trace classes 2 and 3 are activated, there is minimal additional performance cost for activating accounting trace classes 7 and 8. If you want information from either, or both, accounting class 2 and 3, be sure to activate class 2, class 3, or both classes before your application starts. If these classes are activated during the application, the times gathered by DB2 trace are only from the time the class was activated.
Accounting trace class 5 provides information on the amount of elapsed time and TCB time that an agent spent in DB2 processing instrumentation facility interface (IFI) requests. If an agent did not issue any IFI requests, these fields are not included in the accounting record. If you specified YES for SMF ACCOUNTING on installation panel DSNTIPN, the accounting trace starts automatically when you start DB2, and sends IFCIDs that are of SMF type 101 to SMF. The accounting record IFCID 0003 is of SMF type 101.
Audit trace
The audit trace collects information about DB2 security controls and is used to ensure that data access is allowed only for authorized purposes. On the CREATE TABLE or ALTER TABLE statements, you can specify whether or not a table is to be audited, and in what manner; you can also audit security information such as any access denials, grants, or revokes for the table. The default causes no auditing to take place. For descriptions of the available audit classes and the events they trace, see Audit class descriptions on page 287. If you specified YES for AUDIT TRACE on installation panel DSNTIPN, audit trace class 1 starts automatically when you start DB2. By default, DB2 will send audit data to SMF. SMF records audit data in type 102 records. When you invoke the -START TRACE command, you can also specify GTF as a destination for audit data. Chapter 13, Auditing, on page 285 describes the audit trace in detail.
Performance trace
The performance trace provides information about a variety of DB2 events, including events related to distributed data processing. You can use this information to further identify a suspected problem, or to tune DB2 programs and resources for individual users or for DB2 as a whole. You cannot automatically start collecting performance data when you install or migrate DB2. To trace performance data, you must use the -START TRACE(PERFM) command. For more information about the -START TRACE(PERFM) command, refer to Chapter 2 of DB2 Command Reference. The performance trace defaults to GTF.
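For example, the following command starts performance trace class 30 for IFCID 0318, the trace that the statement-cache procedures in Appendix E use, and sends the data to GTF:

-START TRACE(PERFM) CLASS(30) IFCID(318) DEST(GTF)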
Monitor trace
The monitor trace records data for online monitoring with user-written programs. This trace type has several predefined classes; those that are used explicitly for monitoring are listed here: v Class 1 (the default) allows any application program to issue an instrumentation facility interface (IFI) READS request to the IFI facility. If monitor class 1 is inactive, a READS request is denied. Activating class 1 has a minimal impact on performance. v Class 2 collects processor and elapsed time information. The information can be obtained by issuing a READS request for IFCID 0147 or 0148. In addition, monitor trace class 2 information is available in the accounting record, IFCID 0003. Monitor class 2 is equivalent to accounting class 2 and results in equivalent overhead. Monitor class 2 times appear in IFCIDs 0147, 0148, and 0003 if either monitor trace class 2 or accounting class 2 is active. v Class 3 activates DB2 wait timing and saves information about the resource causing the wait. The information can be obtained by issuing a READS request for IFCID 0147 or 0148. In addition, monitor trace class 3 information is available
in the accounting record, IFCID 0003. As with monitor class 2, monitor class 3 overhead is equivalent to accounting class 3 overhead. When monitor trace class 3 is active, DB2 can calculate the duration of a class 3 event, such as when an agent is suspended due to an unavailable lock. Monitor class 3 times appear in IFCIDs 0147, 0148, and 0003, if either monitor class 3 or accounting class 3 is active. v Class 5 traces the amount of time spent processing IFI requests. v Class 7 traces the amount of time an agent spent in DB2 to process each package. If monitor trace class 2 is active, activating class 7 has minimal performance impact. v Class 8 traces the amount of time an agent was suspended in DB2 for each package executed. If monitor trace class 3 is active, activating class 8 has minimal performance impact. For more information on the monitor trace, refer to Appendix E, Programming for the Instrumentation Facility Interface (IFI), on page 1157.
For example, during DB2 execution, you can use the z/OS operator command SETSMF or SS to alter SMF parameters that you specified previously. The following command records statistics (record type 100), accounting (record type 101), and performance (record type 102) data to SMF. To execute this command, specify PROMPT(ALL) or PROMPT(LIST) in the SMFPRMxx member used from SYS1.PARMLIB.
SETSMF SYS(TYPE(100:102))
If you are not using measured usage licensing, do not specify type 89 records or you will incur the overhead of collecting that data. You can use the SMF program IFASMFDP to dump these records to a sequential data set. You might want to develop an application or use OMEGAMON to process these records. For a sample DB2 trace record sent to SMF, see Figure 160 on page 1141. For more information about SMF, refer to z/OS JES2 Initialization and Tuning Guide.
Activating SMF
SMF must be running before you can send data to it. To make it operational, update member SMFPRMxx of SYS1.PARMLIB, which indicates whether SMF is active and which types of records SMF accepts. For member SMFPRMxx, xx are two user-defined alphanumeric characters appended to 'SMFPRM' to form the name of an SMFPRMxx member. To update this member, specify the ACTIVE parameter and the proper TYPE subparameter for SYS and SUBSYS. You can also code an IEFU84 SMF exit to process the records that are produced.
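The relevant SMFPRMxx entries might look like the following fragment; treat it as a sketch to adapt to your installation rather than a complete member:

ACTIVE                        /* SMF RECORDING IS ACTIVE         */
SYS(TYPE(100:102))            /* ACCEPT DB2 STATISTICS (100),    */
                              /* ACCOUNTING (101), AND           */
                              /* PERFORMANCE (102) RECORDS       */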
Specify CISZ(4096) and BUFSP(81920) on the DEFINE CLUSTER statement for each SMF VSAM data set. These values are the minimum required for DB2; you might have to increase them, depending on your z/OS environment. DB2 runs above the 16MB line of virtual storage in a cross-memory environment.
Note: To make stopping GTF easier, you can name the GTF session when you start it. For example, you could specify S GTF.GTF,,,(TIME=YES).
If a GTF member exists in SYS1.PARMLIB, the GTF trace option USR might not be in effect. When no other member exists in SYS1.PARMLIB, you can be sure that only the USR option is activated, and that no other options that might add unwanted data to the GTF trace are in effect. When starting GTF, if you use the JOBNAMEP option to obtain only those trace records written for a specific job, trace records written for other agents are not written to the GTF data set. This means that a trace record that is written by a system agent that is processing for an allied agent is discarded if the JOBNAMEP option is used. For example, after a DB2 system agent performs an IDENTIFY request for an allied agent, an IFCID record is written. If the JOBNAMEP keyword is used to collect trace data for a specific job, however, the record for the IDENTIFY request is not written to GTF, even if the IDENTIFY request was performed for the job named on the JOBNAMEP keyword.
You can record DB2 trace data in GTF using a GTF event ID of X'FB9'. Trace records longer than the GTF limit of 256 bytes are spanned by DB2. For instructions on how to process GTF records, refer to Appendix D, Interpreting DB2 trace output, on page 1139.
OMEGAMON
OMEGAMON provides performance monitoring, reporting, buffer pool analysis, and a performance warehouse all in one tool:
v OMEGAMON includes the function of DB2 Performance Monitor (DB2 PM), which is also available as a stand-alone product. Both products report DB2 instrumentation in a form that is easy to understand and analyze. The instrumentation data is presented in the following ways:
  - The Batch report sets present the data you select in comprehensive reports or graphs containing system-wide and application-related information for both single DB2 subsystems and DB2 members of a data sharing group. You can combine instrumentation data from several different DB2 locations into one report. Batch reports can be used to examine performance problems and trends over a period of time.
  - The Online Monitor gives a current snapshot view of a running DB2 subsystem, including applications that are running. Its history function displays information about subsystem and application activity in the recent past. Both a host-based and a Workstation Online Monitor are provided. The Workstation Online Monitor substantially improves usability, simplifies online monitoring and problem analysis, and offers significant advantages. For example, from the Workstation Online Monitor, you can launch Visual Explain so you can examine the access paths and processing methods chosen by DB2 for the currently executing SQL statement. For more information about the Workstation Online Monitor, see OMEGAMON Monitoring Performance from Performance Expert Client or OMEGAMON for z/OS and Multiplatforms Monitoring Performance from Workstation for z/OS and Multiplatforms.
  In addition, OMEGAMON contains a Performance Warehouse function that lets you:
  - Save DB2 trace and report data in a performance database for further investigation and trend analysis
  - Configure and schedule the report and load process from the workstation interface
  - Define and apply analysis functions to identify performance bottlenecks
v OMEGAMON also includes the function of DB2 Buffer Pool Analyzer, which is also available as a stand-alone product. Both products help you optimize buffer pool usage by offering comprehensive reporting of buffer pool activity, including:
  - Ordering by various identifiers such as buffer pool, plan, object, and primary authorization ID
  - Sorting by getpage, sequential prefetch, and synchronous read
  - Filtering capability
In addition, you can simulate buffer pool usage for varying buffer pool sizes and analyze the results of the simulation reports to determine the impact of any changes before making those changes to your current system.
DSN_PREDICAT_TABLE
The predicate table, DSN_PREDICAT_TABLE, contains information about all of the predicates in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
       NEGATION      CHAR(1)      NOT NULL,
       LITERALS      VARCHAR(128) NOT NULL,
       CLAUSE        CHAR(8)      NOT NULL,
       GROUP_MEMBER  VARCHAR(24)
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_PREDICAT_TABLE.
Table 312. DSN_PREDICAT_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

PREDNO  INTEGER NOT NULL
   The predicate number, a number used to identify a predicate within a query.

TYPE  CHAR(8) NOT NULL
   A string used to indicate the type or the operation of the predicate. The possible values are:
   v AND
   v OR
   v EQUAL
   v RANGE
   v BETWEEN
   v IN
   v LIKE
   v NOT LIKE
   v EXISTS
   v NOTEXIST
   v SUBQUERY
   v HAVING
   v OTHERS
Table 312. DSN_PREDICAT_TABLE description (continued)

LEFT_HAND_SIDE  VARCHAR(128) NOT NULL
   If the LHS of the predicate is a table column (LHS_TABNO > 0), then this column indicates the column name. Other possible values are:
   v VALUE
   v COLEXP
   v NONCOLEXP
   v CORSUB
   v NONCORSUB
   v SUBQUERY
   v EXPRESSION
   v Blanks

LEFT_HAND_PNO  INTEGER NOT NULL
   If the predicate is a compound predicate (AND/OR), then this column indicates the first child predicate. However, this column is not reliable when predicate tree consolidation happens. Use PARENT_PNO instead to reconstruct the predicate tree.

LHS_TABNO  SMALLINT NOT NULL
   If the LHS of the predicate is a table column, then this column indicates a number which uniquely identifies the corresponding table reference within a query.

LHS_QBNO  SMALLINT NOT NULL
   If the LHS of the predicate is a subquery, then this column indicates a number which uniquely identifies the corresponding query block within a query.
Table 312. DSN_PREDICAT_TABLE description (continued)

RIGHT_HAND_SIDE  VARCHAR(128) NOT NULL
   If the RHS of the predicate is a table column (RHS_TABNO > 0), then this column indicates the column name. Other possible values are:
   v VALUE
   v COLEXP
   v NONCOLEXP
   v CORSUB
   v NONCORSUB
   v SUBQUERY
   v EXPRESSION
   v Blanks

RIGHT_HAND_PNO  INTEGER NOT NULL
   If the predicate is a compound predicate (AND/OR), then this column indicates the second child predicate. However, this column is not reliable when predicate tree consolidation happens. Use PARENT_PNO instead to reconstruct the predicate tree.

RHS_TABNO  SMALLINT NOT NULL
   If the RHS of the predicate is a table column, then this column indicates a number which uniquely identifies the corresponding table reference within a query.

RHS_QBNO  SMALLINT NOT NULL
   If the RHS of the predicate is a subquery, then this column indicates a number which uniquely identifies the corresponding query block within a query.

FILTER_FACTOR  FLOAT NOT NULL
   The estimated filter factor.

BOOLEAN_TERM  CHAR(1) NOT NULL
   Whether this predicate can be used to determine the truth value of the whole WHERE clause.

SEARCHARG  CHAR(1) NOT NULL
   Whether this predicate can be processed by the data manager (DM). If it is not, then the relational data service (RDS) needs to be used to process it, which is more costly.

AFTER_JOIN  CHAR(1) NOT NULL
   Indicates the predicate evaluation phase:
   A      After join
   D      During join
   blank  Not applicable

ADDED_PRED  CHAR(1) NOT NULL
   Whether it is generated by transitive closure, which means DB2 can generate additional predicates to provide more information for access path selection, when the set of predicates that belong to a query logically imply other predicates.

REDUNDANT_PRED  CHAR(1) NOT NULL
   Whether it is a redundant predicate, which means evaluation of other predicates in the query already determines the result that the predicate provides.

DIRECT_ACCESS  CHAR(1) NOT NULL
   Whether the predicate is direct access, which means one can navigate directly to the row through ROWID.

KEYFIELD  CHAR(1) NOT NULL
   Whether the predicate includes the index key column of the involved table.

EXPLAIN_TIME  TIMESTAMP
   The EXPLAIN timestamp.

CATEGORY  SMALLINT
   IBM internal use only.
Table 312. DSN_PREDICAT_TABLE description (continued)

CATEGORY_B  SMALLINT
   IBM internal use only.

TEXT  VARCHAR(2000)
   The transformed predicate text; truncated if it exceeds 2000 characters.

PRED_ENCODE  CHAR(1)
   IBM internal use only.

PRED_CCSID  SMALLINT
   IBM internal use only.

PRED_MCCSID  SMALLINT
   IBM internal use only.

MARKER  CHAR(1)
   Whether this predicate includes host variables, parameter markers, or special registers.

PARENT_PNO  INTEGER
   The parent predicate number. If this predicate is a root predicate within a query block, then this column is 0.

NEGATION  CHAR(1)
   Whether this predicate is negated via NOT.

LITERALS  VARCHAR(128)
   This column indicates the literal value or literal values separated by colon symbols.

CLAUSE  CHAR(8)
   The clause where the predicate exists:
   HAVING  The HAVING clause
   ON      The ON clause
   WHERE   The WHERE clause

GROUP_MEMBER  CHAR(8)
   The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
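You can use PARENT_PNO, together with the predicate and query block numbers, to reconstruct the predicate tree for an explained statement. The following query is a minimal sketch only; it assumes that the table was created under your own authorization ID (userid) and uses 100 as a hypothetical query number:

   SELECT QBLOCKNO, PREDNO, PARENT_PNO, TYPE, CLAUSE, TEXT
     FROM userid.DSN_PREDICAT_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PARENT_PNO, PREDNO;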
DSN_STRUCT_TABLE
The structure table, DSN_STRUCT_TABLE, contains information about all of the query blocks in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
CREATE TABLE userid.DSN_STRUCT_TABLE
     ( QUERYNO          INTEGER      NOT NULL,
       QBLOCKNO         SMALLINT     NOT NULL,
       APPLNAME         VARCHAR(24)  NOT NULL,
       PROGNAME         VARCHAR(128) NOT NULL,
       PARENT           SMALLINT     NOT NULL,
       TIMES            FLOAT        NOT NULL,
       ROWCOUNT         INTEGER      NOT NULL,
       ATOPEN           CHAR(1)      NOT NULL,
       CONTEXT          CHAR(10)     NOT NULL,
       ORDERNO          SMALLINT     NOT NULL,
       DOATOPEN_PARENT  SMALLINT     NOT NULL,
       QBLOCK_TYPE      CHAR(6)      NOT NULL WITH DEFAULT,
       EXPLAIN_TIME     TIMESTAMP    NOT NULL,
       QUERY_STAGE      CHAR(8)      NOT NULL,
       GROUP_MEMBER     VARCHAR(24)  NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_STRUCT_TABLE.
Table 313. DSN_STRUCT_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

PARENT  SMALLINT NOT NULL
   The parent query block number of the current query block in the structure of SQL text; this is the same as the PARENT_QBLOCKNO in PLAN_TABLE.

TIMES  FLOAT NOT NULL
   The estimated number of rows returned by Data Manager; also the estimated number of times this query block is executed.

ROWCOUNT  INTEGER NOT NULL
   The estimated number of rows returned by RDS (query cardinality).

ATOPEN  CHAR(1) NOT NULL
   Whether the query block is moved up for do-at-open processing: Y if do-at-open, N otherwise.

CONTEXT  CHAR(10) NOT NULL
   This column indicates the context of the current query block. The possible values are:
   v TOP LEVEL
   v UNION
   v UNION ALL
   v PREDICATE
   v TABLE EXP
   v UNKNOWN
Table 313. DSN_STRUCT_TABLE description (continued)

ORDERNO  SMALLINT NOT NULL
   Not currently used.

DOATOPEN_PARENT  SMALLINT NOT NULL
   The do-at-open parent query block number of the current query block. If the query block is do-at-open, this might be different from the PARENT_QBLOCKNO in PLAN_TABLE.

QBLOCK_TYPE  CHAR(6) NOT NULL WITH DEFAULT
   This column indicates the type of the current query block. The possible values are:
   v SELECT
   v INSERT
   v UPDATE
   v DELETE
   v SELUPD
   v DELCUR
   v UPDCUR
   v CORSUB
   v NCOSUB
   v TABLEX
   v TRIGGR
   v UNION
   v UNIONA
   v CTE
   It is equivalent to the QBLOCK_TYPE column in PLAN_TABLE, except for CTE.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

QUERY_STAGE  CHAR(8) NOT NULL
   IBM internal use only.

GROUP_MEMBER  CHAR(8) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
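For example, to see how DB2 decomposed an explained statement into query blocks, you can list the query-block hierarchy. This query is a sketch only; it assumes that the table exists as userid.DSN_STRUCT_TABLE and uses 100 as a hypothetical query number:

   SELECT QBLOCKNO, PARENT, QBLOCK_TYPE, CONTEXT, TIMES
     FROM userid.DSN_STRUCT_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO;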
DSN_PGROUP_TABLE
The parallel group table, DSN_PGROUP_TABLE, contains information about the parallel groups in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
CREATE TABLE userid.DSN_PGROUP_TABLE
     ( QUERYNO       INTEGER      NOT NULL,
       QBLOCKNO      SMALLINT     NOT NULL,
       PLANNAME      VARCHAR(24)  NOT NULL,
       COLLID        VARCHAR(128) NOT NULL,
       PROGNAME      VARCHAR(128) NOT NULL,
       EXPLAIN_TIME  TIMESTAMP    NOT NULL,
       VERSION       VARCHAR(122) NOT NULL,
       GROUPID       SMALLINT     NOT NULL,
       FIRSTPLAN     SMALLINT     NOT NULL,
       LASTPLAN      SMALLINT     NOT NULL,
       CPUCOST       REAL         NOT NULL,
       IOCOST        REAL         NOT NULL,
       BESTTIME      REAL         NOT NULL,
       DEGREE        SMALLINT     NOT NULL,
       MODE          CHAR(1)      NOT NULL,
       REASON        SMALLINT     NOT NULL,
       LOCALCPU      SMALLINT     NOT NULL,
       TOTALCPU      SMALLINT     NOT NULL,
       FIRSTBASE     SMALLINT,
       LARGETS       CHAR(1),
       PARTKIND      CHAR(1),
       GROUPTYPE     CHAR(3),
       ORDER         CHAR(1),
       STYLE         CHAR(4),
       RANGEKIND     CHAR(1),
       NKEYCOLS      SMALLINT,
       LOWBOUND      VARCHAR(40),
       HIGHBOUND     VARCHAR(40),
       LOWKEY        VARCHAR(40),
       HIGHKEY       VARCHAR(40),
       FIRSTPAGE     CHAR(4),
       LASTPAGE      CHAR(4),
       GROUP_MEMBER  VARCHAR(24)  NOT NULL,
       HOST_REASON   SMALLINT,
       PARA_TYPE     CHAR(4),
       PART_INNER    CHAR(1),
       GRNU_KEYRNG   CHAR(1),
       OPEN_KEYRNG   CHAR(1)
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_PGROUP_TABLE.
Table 314. DSN_PGROUP_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache
Table 314. DSN_PGROUP_TABLE description (continued)

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

PLANNAME  VARCHAR(24) NOT NULL
   The application plan name.

COLLID  VARCHAR(128) NOT NULL
   The collection ID for the package.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

VERSION  VARCHAR(122) NOT NULL
   The version identifier for the package.

GROUPID  SMALLINT NOT NULL
   The parallel group identifier within the current query block.

FIRSTPLAN  SMALLINT NOT NULL
   The plan number of the first contributing mini-plan associated within this parallel group.

LASTPLAN  SMALLINT NOT NULL
   The plan number of the last mini-plan associated within this parallel group.

CPUCOST  REAL NOT NULL
   The estimated total CPU cost of this parallel group, in milliseconds.

IOCOST  REAL NOT NULL
   The estimated total I/O cost of this parallel group, in milliseconds.

BESTTIME  REAL NOT NULL
   The estimated elapsed time for each parallel task for this parallel group.

DEGREE  SMALLINT NOT NULL
   The degree of parallelism for this parallel group, determined at bind time. The maximum degree of parallelism is 255 if the table space is large; otherwise it is 64.

MODE  CHAR(1) NOT NULL
   The parallel mode:
   I  I/O parallelism
   C  CPU parallelism
   X  Multiple-CPU Sysplex parallelism (highest level)
   N  No parallelism

REASON  SMALLINT NOT NULL
   The reason code for downgrading the parallelism mode.

LOCALCPU  SMALLINT NOT NULL
   The number of CPUs currently online when preparing the query.

TOTALCPU  SMALLINT NOT NULL
   The total number of CPUs in the Sysplex. LOCALCPU and TOTALCPU differ only for the DB2 coordinator in a Sysplex.

FIRSTBASE  SMALLINT
   The table number of the table that partitioning is performed on.

LARGETS  CHAR(1)
   Y if the table space is large in this group.

PARTKIND  CHAR(1)
   The partitioning type:
   L  Logical partitioning
   P  Physical partitioning

GROUPTYPE  CHAR(3)
   Determines what operations this parallel group contains: table access (A), join (J), or sort (S). The possible values are A, AJ, and AJS.
Table 314. DSN_PGROUP_TABLE description (continued)

ORDER  CHAR(1)
   The ordering requirement of this parallel group:
   N  No order. Results need no ordering.
   T  Natural order. Ordering is required, but results are already ordered if they are accessed via an index.
   K  Key order. Ordering is achieved by sort, and results are ordered by the sort key. This value applies only to parallel sort.

STYLE  CHAR(4)
   The input/output format style of this parallel group. Blank for I/O parallelism. For other modes:
   RIRO  Records in, records out
   WIRO  Work file in, records out
   WIWO  Work file in, work file out

RANGEKIND  CHAR(1)
   The range type:
   K  Key range
   P  Page range

NKEYCOLS  SMALLINT
   The number of interesting key columns, that is, the number of columns that participate in the key operation for this parallel group.

LOWBOUND  VARCHAR(40)
   The low bound of the parallel group.

HIGHBOUND  VARCHAR(40)
   The high bound of the parallel group.

LOWKEY  VARCHAR(40)
   The low key of the range if partitioned by key range.

HIGHKEY  VARCHAR(40)
   The high key of the range if partitioned by key range.

FIRSTPAGE  CHAR(4)
   The first page in the range if partitioned by page range.

LASTPAGE  CHAR(4)
   The last page in the range if partitioned by page range.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

HOST_REASON  SMALLINT
   IBM internal use only.

PARA_TYPE  CHAR(4)
   IBM internal use only.

PART_INNER  CHAR(1)
   IBM internal use only.

GRNU_KEYRNG  CHAR(1)
   IBM internal use only.

OPEN_KEYRNG  CHAR(1)
   IBM internal use only.
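For example, to check whether a parallel group was downgraded from its planned mode, you might run a query like the following sketch (it assumes that the table exists as userid.DSN_PGROUP_TABLE and uses 100 as a hypothetical query number):

   SELECT QBLOCKNO, GROUPID, MODE, DEGREE, REASON
     FROM userid.DSN_PGROUP_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, GROUPID;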
DSN_PTASK_TABLE
The parallel tasks table, DSN_PTASK_TABLE, contains information about all of the parallel tasks in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables must be created with CCSID UNICODE. This includes, but is not limited to, mixed data strings that are used for tokens, SQL statements, application names, program names, correlation names, and collection IDs.
CREATE TABLE userid.DSN_PTASK_TABLE
     ( QUERYNO       INTEGER      NOT NULL,
       QBLOCKNO      SMALLINT     NOT NULL,
       PGDNO         SMALLINT     NOT NULL,
       APPLNAME      VARCHAR(24)  NOT NULL,
       PROGNAME      VARCHAR(128) NOT NULL,
       LPTNO         SMALLINT     NOT NULL,
       KEYCOLID      SMALLINT,
       DPSI          CHAR(1)      NOT NULL,
       LPTLOKEY      VARCHAR(40),
       LPTHIKEY      VARCHAR(40),
       LPTLOPAG      CHAR(4),
       LPTHIPAG      CHAR(4),
       LPTLOPG#      CHAR(4),
       LPTHIPG#      CHAR(4),
       LPTLOPT#      SMALLINT,
       LPTHIPT#      SMALLINT,
       KEYCOLDT      SMALLINT,
       KEYCOLPREC    SMALLINT,
       KEYCOLSCAL    SMALLINT,
       EXPLAIN_TIME  TIMESTAMP    NOT NULL,
       GROUP_MEMBER  VARCHAR(24)  NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_PTASK_TABLE.
Table 315. DSN_PTASK_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

LPTNO  SMALLINT NOT NULL
   The parallel task number.

KEYCOLID  SMALLINT
   The key column ID (key range only).
Table 315. DSN_PTASK_TABLE description (continued)

DPSI  CHAR(1) NOT NULL
   Indicates whether a data partitioned secondary index (DPSI) is used.

LPTLOKEY  VARCHAR(40)
   The low key value for this key column for this parallel task (key range only).

LPTHIKEY  VARCHAR(40)
   The high key value for this key column for this parallel task (key range only).

LPTLOPAG  CHAR(4)
   The low page information if partitioned by page range.

LPTHIPAG  CHAR(4)
   The high page information if partitioned by page range.

LPTLOPG#  CHAR(4)
   The lower bound page number for this parallel task (page range or DPSI enabled only).

LPTHIPG#  CHAR(4)
   The upper bound page number for this parallel task (page range or DPSI enabled only).

LPTLOPT#  SMALLINT
   The lower bound partition number for this parallel task (page range or DPSI enabled only).

KEYCOLDT  SMALLINT
   The data type for this key column (key range only).

KEYCOLPREC  SMALLINT
   The precision/length for this key column (key range only).

KEYCOLSCAL  SMALLINT
   The scale for this key column (key range with decimal data type only).

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
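For a key-range parallel group, you can list the key range that each parallel task processes. The following query is a sketch only (userid and the query number 100 are assumptions):

   SELECT QBLOCKNO, LPTNO, LPTLOKEY, LPTHIKEY
     FROM userid.DSN_PTASK_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, LPTNO;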
DSN_FILTER_TABLE
The filter table, DSN_FILTER_TABLE, contains information about how predicates are used during query processing. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Column descriptions
The following table describes the columns of DSN_FILTER_TABLE.
Table 316. DSN_FILTER_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

PLANNO  SMALLINT
   The plan number, a number used to identify each mini-plan within a query block.

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

COLLID  VARCHAR(128) NOT NULL WITH DEFAULT
   The collection ID for the package.

ORDERNO  INTEGER NOT NULL
   The sequence number of evaluation. Indicates the order in which the predicate is applied within each stage.

PREDNO  INTEGER NOT NULL
   The predicate number, a number used to identify a predicate within a query.

STAGE  CHAR(9) NOT NULL
   Indicates at which stage the predicate is evaluated. The possible values are:
   v Matching
   v Screening
   v Stage 1
   v Stage 2

ORDERCLASS  INTEGER NOT NULL
   IBM internal use only.
Table 316. DSN_FILTER_TABLE description (continued)

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

MIXOPSEQ  SMALLINT NOT NULL
   IBM internal use only.

REEVAL  CHAR(1) NOT NULL
   IBM internal use only.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
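Because DSN_FILTER_TABLE records the stage at which each predicate is applied, you can join it to DSN_PREDICAT_TABLE to see the text of each predicate alongside its evaluation stage. The following join is a sketch only; it assumes that both tables were created under userid and uses 100 as a hypothetical query number:

   SELECT F.QBLOCKNO, F.PREDNO, F.STAGE, P.TEXT
     FROM userid.DSN_FILTER_TABLE F,
          userid.DSN_PREDICAT_TABLE P
     WHERE F.QUERYNO = P.QUERYNO
       AND F.PREDNO = P.PREDNO
       AND F.QUERYNO = 100
     ORDER BY F.QBLOCKNO, F.PREDNO;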
DSN_DETCOST_TABLE
The detailed cost table, DSN_DETCOST_TABLE, contains information about detailed cost estimation of the mini-plans in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
CREATE TABLE userid.DSN_DETCOST_TABLE
     ( QUERYNO              INTEGER      NOT NULL,
       QBLOCKNO             SMALLINT     NOT NULL,
       APPLNAME             VARCHAR(24)  NOT NULL,
       PROGNAME             VARCHAR(128) NOT NULL,
       PLANNO               SMALLINT     NOT NULL,
       OPENIO               FLOAT(4)     NOT NULL,
       OPENCPU              FLOAT(4)     NOT NULL,
       OPENCOST             FLOAT(4)     NOT NULL,
       DMIO                 FLOAT(4)     NOT NULL,
       DMCPU                FLOAT(4)     NOT NULL,
       DMTOT                FLOAT(4)     NOT NULL,
       SUBQIO               FLOAT(4)     NOT NULL,
       SUBQCOST             FLOAT(4)     NOT NULL,
       BASEIO               FLOAT(4)     NOT NULL,
       BASECPU              FLOAT(4)     NOT NULL,
       BASETOT              FLOAT(4)     NOT NULL,
       ONECOMPROWS          FLOAT(4)     NOT NULL,
       IMLEAF               FLOAT(4)     NOT NULL,
       IMIO                 FLOAT(4)     NOT NULL,
       IMPREFH              CHAR(2)      NOT NULL,
       IMMPRED              INTEGER      NOT NULL,
       IMFF                 FLOAT(4)     NOT NULL,
       IMSRPRED             INTEGER      NOT NULL,
       IMFFADJ              FLOAT(4)     NOT NULL,
       IMSCANCST            FLOAT(4)     NOT NULL,
       IMROWCST             FLOAT(4)     NOT NULL,
       IMPAGECST            FLOAT(4)     NOT NULL,
       IMRIDSORT            FLOAT(4)     NOT NULL,
       IMMERGCST            FLOAT(4)     NOT NULL,
       IMCPU                FLOAT(4)     NOT NULL,
       IMTOT                FLOAT(4)     NOT NULL,
       IMSEQNO              SMALLINT     NOT NULL,
       DMPREFH              CHAR(2)      NOT NULL,
       DMCLUDIO             FLOAT(4)     NOT NULL,
       DMNCLUDIO            FLOAT(4)     NOT NULL,
       DMPREDS              INTEGER      NOT NULL,
       DMSROWS              FLOAT(4)     NOT NULL,
       DMSCANCST            FLOAT(4)     NOT NULL,
       DMCOLS               SMALLINT     NOT NULL,
       DMROWS               FLOAT(4)     NOT NULL,
       RDSROWCST            FLOAT(4)     NOT NULL,
       DMPAGECST            FLOAT(4)     NOT NULL,
       DMDATAIO             FLOAT(4)     NOT NULL,
       DMDATACPU            FLOAT(4)     NOT NULL,
       DMDATATOT            FLOAT(4)     NOT NULL,
       RDSROW               FLOAT(4)     NOT NULL,
       SNCOLS               SMALLINT     NOT NULL,
       SNROWS               FLOAT(4)     NOT NULL,
       SNRECSZ              INTEGER      NOT NULL,
       SNPAGES              FLOAT(4)     NOT NULL,
       SNRUNS               FLOAT(4)     NOT NULL,
       SNMERGES             FLOAT(4)     NOT NULL,
       SNIOCOST             FLOAT(4)     NOT NULL,
       SNCPUCOST            FLOAT(4)     NOT NULL,
       SNCOST               FLOAT(4)     NOT NULL,
       SNSCANIO             FLOAT(4)     NOT NULL,
       SNSCANCPU            FLOAT(4)     NOT NULL,
       SNSCANCOST           FLOAT(4)     NOT NULL,
       SCCOLS               SMALLINT     NOT NULL,
       SCROWS               FLOAT(4)     NOT NULL,
       SCRECSZ              INTEGER      NOT NULL,
       SCPAGES              FLOAT(4)     NOT NULL,
       SCRUNS               FLOAT(4)     NOT NULL,
       SCMERGES             FLOAT(4)     NOT NULL,
       SCIOCOST             FLOAT(4)     NOT NULL,
       SCCPUCOST            FLOAT(4)     NOT NULL,
       SCCOST               FLOAT(4)     NOT NULL,
       SCSCANIO             FLOAT(4)     NOT NULL,
       SCSCANCPU            FLOAT(4)     NOT NULL,
       SCSCANCOST           FLOAT(4)     NOT NULL,
       COMPCARD             FLOAT(4)     NOT NULL,
       COMPIOCOST           FLOAT(4)     NOT NULL,
       COMPCPUCOST          FLOAT(4)     NOT NULL,
       COMPCOST             FLOAT(4)     NOT NULL,
       JOINCOLS             SMALLINT     NOT NULL,
       EXPLAIN_TIME         TIMESTAMP    NOT NULL,
       COSTBLK              INTEGER      NOT NULL,
       COSTSTOR             INTEGER      NOT NULL,
       MPBLK                INTEGER      NOT NULL,
       MPSTOR               INTEGER      NOT NULL,
       COMPOSITES           INTEGER      NOT NULL,
       CLIPPED              INTEGER      NOT NULL,
       PARTITION            INTEGER      NOT NULL,
       TABREF               VARCHAR(64)  NOT NULL,
       MAX_COMPOSITES       INTEGER      NOT NULL,
       MAX_STOR             INTEGER      NOT NULL,
       MAX_CPU              INTEGER      NOT NULL,
       MAX_ELAP             INTEGER      NOT NULL,
       TBL_JOINED_THRESH    INTEGER      NOT NULL,
       STOR_USED            INTEGER      NOT NULL,
       CPU_USED             INTEGER      NOT NULL,
       ELAPSED              INTEGER      NOT NULL,
       MIN_CARD_KEEP        FLOAT(4)     NOT NULL,
       MAX_CARD_KEEP        FLOAT(4)     NOT NULL,
       MIN_COST_KEEP        FLOAT(4)     NOT NULL,
       MAX_COST_KEEP        FLOAT(4)     NOT NULL,
       MIN_VALUE_KEEP       FLOAT(4)     NOT NULL,
       MIN_VALUE_CARD_KEEP  FLOAT(4)     NOT NULL,
       MIN_VALUE_COST_KEEP  FLOAT(4)     NOT NULL,
       MAX_VALUE_KEEP       FLOAT(4)     NOT NULL,
       MAX_VALUE_CARD_KEEP  FLOAT(4)     NOT NULL,
       MAX_VALUE_COST_KEEP  FLOAT(4)     NOT NULL,
       MIN_CARD_CLIP        FLOAT(4)     NOT NULL,
       MAX_CARD_CLIP        FLOAT(4)     NOT NULL,
       MIN_COST_CLIP        FLOAT(4)     NOT NULL,
       MAX_COST_CLIP        FLOAT(4)     NOT NULL,
       MIN_VALUE_CLIP       FLOAT(4)     NOT NULL,
       MIN_VALUE_CARD_CLIP  FLOAT(4)     NOT NULL,
       MIN_VALUE_COST_CLIP  FLOAT(4)     NOT NULL,
       MAX_VALUE_CLIP       FLOAT(4)     NOT NULL,
       MAX_VALUE_CARD_CLIP  FLOAT(4)     NOT NULL,
       MAX_VALUE_COST_CLIP  FLOAT(4)     NOT NULL,
       GROUP_MEMBER         VARCHAR(24)  NOT NULL,
       PSEQIOCOST           FLOAT(4)     NOT NULL,
       PSEQCPUCOST          FLOAT(4)     NOT NULL,
       PSEQCOST             FLOAT(4)     NOT NULL,
       PADJIOCOST           FLOAT(4)     NOT NULL,
       PADJCPUCOST          FLOAT(4)     NOT NULL,
       PADJCOST             FLOAT(4)     NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;

Figure 175. The CREATE TABLE statement for userid.DSN_DETCOST_TABLE
Column descriptions
The following table describes the columns of DSN_DETCOST_TABLE.
Table 317. DSN_DETCOST_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

PLANNO  SMALLINT NOT NULL
   The plan number, a number used to identify each mini-plan within a query block.

OPENIO  FLOAT(4) NOT NULL
   The do-at-open I/O cost for a non-correlated subquery.

OPENCPU  FLOAT(4) NOT NULL
   The do-at-open CPU cost for a non-correlated subquery.

OPENCOST  FLOAT(4) NOT NULL
   The do-at-open total cost for a non-correlated subquery.

DMIO  FLOAT(4) NOT NULL
   IBM internal use only.

DMCPU  FLOAT(4) NOT NULL
   IBM internal use only.

DMTOT  FLOAT(4) NOT NULL
   IBM internal use only.

SUBQIO  FLOAT(4) NOT NULL
   IBM internal use only.

SUBQCOST  FLOAT(4) NOT NULL
   IBM internal use only.

BASEIO  FLOAT(4) NOT NULL
   IBM internal use only.

BASECPU  FLOAT(4) NOT NULL
   IBM internal use only.

BASETOT  FLOAT(4) NOT NULL
   IBM internal use only.

ONECOMPROWS  FLOAT(4) NOT NULL
   The number of rows qualified after applying local predicates.

IMLEAF  FLOAT(4) NOT NULL
   The number of index leaf pages scanned by Data Manager.

IMIO  FLOAT(4) NOT NULL
   IBM internal use only.

IMPREFH  CHAR(2) NOT NULL
   IBM internal use only.

IMMPRED  INTEGER NOT NULL
   IBM internal use only.

IMFF  FLOAT(4) NOT NULL
   The filter factor of matching predicates only.

IMSRPRED  INTEGER NOT NULL
   IBM internal use only.
Table 317. DSN_DETCOST_TABLE description (continued)

IMFFADJ  FLOAT(4) NOT NULL
   The filter factor of matching and screening predicates.

IMSCANCST  FLOAT(4) NOT NULL
   IBM internal use only.

IMROWCST  FLOAT(4) NOT NULL
   IBM internal use only.

IMPAGECST  FLOAT(4) NOT NULL
   IBM internal use only.

IMRIDSORT  FLOAT(4) NOT NULL
   IBM internal use only.

IMMERGCST  FLOAT(4) NOT NULL
   IBM internal use only.

IMCPU  FLOAT(4) NOT NULL
   IBM internal use only.

IMTOT  FLOAT(4) NOT NULL
   IBM internal use only.

IMSEQNO  SMALLINT NOT NULL
   IBM internal use only.

DMPREFH  CHAR(2) NOT NULL
   IBM internal use only.

DMCLUDIO  FLOAT(4) NOT NULL
   IBM internal use only.

DMPREDS  INTEGER NOT NULL
   IBM internal use only.

DMSROWS  FLOAT(4) NOT NULL
   IBM internal use only.

DMSCANCST  FLOAT(4) NOT NULL
   IBM internal use only.

DMCOLS  SMALLINT NOT NULL
   The number of data manager columns.

DMROWS  FLOAT(4) NOT NULL
   The number of data manager rows returned (after all stage 1 predicates are applied).

RDSROWCST  FLOAT(4) NOT NULL
   IBM internal use only.

DMPAGECST  FLOAT(4) NOT NULL
   IBM internal use only.

DMDATAIO  FLOAT(4) NOT NULL
   IBM internal use only.

DMDATACPU  FLOAT(4) NOT NULL
   IBM internal use only.

DMDATATOT  FLOAT(4) NOT NULL
   IBM internal use only.

RDSROW  FLOAT(4) NOT NULL
   The number of RDS rows returned (after all stage 1 and stage 2 predicates are applied).

SNCOLS  SMALLINT NOT NULL
   The number of columns as sort input for the new table.

SNROWS  FLOAT(4) NOT NULL
   The number of rows as sort input for the new table.

SNRECSZ  INTEGER NOT NULL
   The record size for the new table.

SNPAGES  FLOAT(4) NOT NULL
   The page size for the new table.

SNRUNS  FLOAT(4) NOT NULL
   The number of runs generated for the sort of the new table.

SNMERGES  FLOAT(4) NOT NULL
   The number of merges needed during the sort.

SNIOCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SNCPUCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SNCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SNSCANIO  FLOAT(4) NOT NULL
   IBM internal use only.
Table 317. DSN_DETCOST_TABLE description (continued)

SNSCANCPU  FLOAT(4) NOT NULL
   IBM internal use only.

SNSCANCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SCCOLS  SMALLINT NOT NULL
   The number of columns as sort input for the composite table.

SCROWS  FLOAT(4) NOT NULL
   The number of rows as sort input for the composite table.

SCRECSZ  INTEGER NOT NULL
   The record size for the composite table.

SCPAGES  FLOAT(4) NOT NULL
   The page size for the composite table.

SCRUNS  FLOAT(4) NOT NULL
   The number of runs generated during the sort of the composite.

SCMERGES  FLOAT(4) NOT NULL
   The number of merges needed during the sort of the composite.

SCIOCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SCCPUCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SCCOST  FLOAT(4) NOT NULL
   IBM internal use only.

SCSCANIO  FLOAT(4) NOT NULL
   IBM internal use only.

SCSCANCPU  FLOAT(4) NOT NULL
   IBM internal use only.

SCSCANCOST  FLOAT(4) NOT NULL
   IBM internal use only.

COMPCARD  FLOAT(4) NOT NULL
   The total composite cardinality.

COMPIOCOST  FLOAT(4) NOT NULL
   IBM internal use only.

COMPCPUCOST  FLOAT(4) NOT NULL
   IBM internal use only.

COMPCOST  FLOAT(4) NOT NULL
   The total cost.

JOINCOLS  SMALLINT NOT NULL
   IBM internal use only.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

COSTBLK  INTEGER NOT NULL
   IBM internal use only.

COSTSTOR  INTEGER NOT NULL
   IBM internal use only.

MPBLK  INTEGER NOT NULL
   IBM internal use only.

MPSTOR  INTEGER NOT NULL
   IBM internal use only.

COMPOSITES  INTEGER NOT NULL
   IBM internal use only.

CLIPPED  INTEGER NOT NULL
   IBM internal use only.

TABREF  VARCHAR(64) NOT NULL
   IBM internal use only.

MAX_COMPOSITES  INTEGER NOT NULL
   IBM internal use only.

MAX_STOR  INTEGER NOT NULL
   IBM internal use only.

MAX_CPU  INTEGER NOT NULL
   IBM internal use only.

MAX_ELAP  INTEGER NOT NULL
   IBM internal use only.

TBL_JOINED_THRESH  INTEGER NOT NULL
   IBM internal use only.

STOR_USED  INTEGER NOT NULL
   IBM internal use only.

CPU_USED  INTEGER NOT NULL
   IBM internal use only.

ELAPSED  INTEGER NOT NULL
   IBM internal use only.

MIN_CARD_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_CARD_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_COST_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.
Table 317. DSN_DETCOST_TABLE description (continued)

MAX_COST_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_CARD_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_COST_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_CARD_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_COST_KEEP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_CARD_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_CARD_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_COST_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_COST_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_CARD_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MIN_VALUE_COST_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_CARD_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

MAX_VALUE_COST_CLIP  FLOAT(4) NOT NULL
   IBM internal use only.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

PSEQIOCOST  FLOAT(4) NOT NULL
   IBM internal use only.

PSEQCPUCOST  FLOAT(4) NOT NULL
   IBM internal use only.

PSEQCOST  FLOAT(4) NOT NULL
   IBM internal use only.

PADJIOCOST  FLOAT(4) NOT NULL
   IBM internal use only.

PADJCPUCOST  FLOAT(4) NOT NULL
   IBM internal use only.

PADJCOST  FLOAT(4) NOT NULL
   IBM internal use only.
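Most DSN_DETCOST_TABLE columns are for IBM internal use only, but the cardinality and total-cost columns can still be compared across mini-plans. The following query is a sketch only (userid and the query number 100 are assumptions):

   SELECT QBLOCKNO, PLANNO, ONECOMPROWS, COMPCARD, COMPCOST
     FROM userid.DSN_DETCOST_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;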
DSN_SORT_TABLE
The sort table, DSN_SORT_TABLE, contains information about the sort operations required by a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables must be created with CCSID UNICODE. This includes, but is not limited to, mixed data strings that are used for tokens, SQL statements, application names, program names, correlation names, and collection IDs.
Figure 178. The CREATE TABLE statement for userid.DSN_SORT_TABLE

CREATE TABLE userid.DSN_SORT_TABLE
     ( QUERYNO       INTEGER      NOT NULL,
       QBLOCKNO      SMALLINT     NOT NULL,
       PLANNO        SMALLINT     NOT NULL,
       APPLNAME      VARCHAR(24)  NOT NULL,
       PROGNAME      VARCHAR(128) NOT NULL,
       COLLID        VARCHAR(128) NOT NULL WITH DEFAULT,
       SORTC         CHAR(5)      NOT NULL WITH DEFAULT,
       SORTN         CHAR(5)      NOT NULL WITH DEFAULT,
       SORTNO        SMALLINT     NOT NULL,
       KEYSIZE       SMALLINT     NOT NULL,
       ORDERCLASS    INTEGER      NOT NULL,
       EXPLAIN_TIME  TIMESTAMP    NOT NULL,
       GROUP_MEMBER  VARCHAR(24)  NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_SORT_TABLE.
Table 318. DSN_SORT_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

PLANNO  SMALLINT NOT NULL
   The plan number, a number used to identify each mini-plan within a query block.

APPLNAME  VARCHAR(24) NOT NULL
   The application name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

COLLID  VARCHAR(128) NOT NULL WITH DEFAULT
   The collection ID for the package.
Table 318. DSN_SORT_TABLE description (continued)

SORTC  CHAR(5) NOT NULL WITH DEFAULT
   Indicates the reasons for the sort of the composite table, as a bitmap of G, J, O, U:
   G  GROUP BY
   J  Join
   O  ORDER BY
   U  Uniqueness

SORTN  CHAR(5) NOT NULL WITH DEFAULT
   Indicates the reasons for the sort of the new table, as a bitmap of G, J, O, U:
   G  GROUP BY
   J  Join
   O  ORDER BY
   U  Uniqueness

SORTNO  SMALLINT NOT NULL
   The sequence number of the sort.

KEYSIZE  SMALLINT NOT NULL
   The sum of the lengths of the sort keys.

ORDERCLASS  INTEGER NOT NULL
   IBM internal use only.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
DSN_SORTKEY_TABLE
The sort key table, DSN_SORTKEY_TABLE, contains information about sort keys for all of the sorts required by a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
CREATE TABLE userid.DSN_SORTKEY_TABLE
     ( QUERYNO       INTEGER      NOT NULL,
       QBLOCKNO      SMALLINT     NOT NULL,
       PLANNO        SMALLINT     NOT NULL,
       APPLNAME      VARCHAR(24)  NOT NULL,
       PROGNAME      VARCHAR(128) NOT NULL,
       COLLID        VARCHAR(128) NOT NULL WITH DEFAULT,
       SORTNO        SMALLINT     NOT NULL,
       ORDERNO       SMALLINT     NOT NULL,
       EXPTYPE       CHAR(3)      NOT NULL,
       TEXT          VARCHAR(128) NOT NULL,
       TABNO         SMALLINT     NOT NULL,
       COLNO         SMALLINT     NOT NULL,
       DATATYPE      CHAR(18)     NOT NULL,
       LENGTH        INTEGER      NOT NULL,
       CCSID         INTEGER      NOT NULL,
       ORDERCLASS    INTEGER      NOT NULL,
       EXPLAIN_TIME  TIMESTAMP    NOT NULL,
       GROUP_MEMBER  VARCHAR(24)  NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_SORTKEY_TABLE.
Table 319. DSN_SORTKEY_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

PLANNO  SMALLINT NOT NULL
   The plan number, a number used to identify each mini-plan within a query block.

APPLNAME  VARCHAR(24) NOT NULL
   The application name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

COLLID  VARCHAR(128) NOT NULL WITH DEFAULT
   The collection ID for the package.

SORTNO  SMALLINT NOT NULL
   The sequence number of the sort.

ORDERNO  SMALLINT NOT NULL
   The sequence number of the sort key.

EXPTYPE  CHAR(3) NOT NULL
   The type of the sort key. The possible values are:
   v COL
   v EXP
   v QRY
Table 319. DSN_SORTKEY_TABLE description (continued)

TEXT  VARCHAR(128) NOT NULL
   The sort key text; it can be a column name, an expression, a scalar subquery, or a record ID.

TABNO  SMALLINT NOT NULL
   The table number, a number which uniquely identifies the corresponding table reference within a query.

COLNO  SMALLINT NOT NULL
   The column number, a number which uniquely identifies the corresponding column within a query. Only applicable when the sort key is a column.

DATATYPE  CHAR(18) NOT NULL
   The data type of the sort key. The possible values are:
   v HEXADECIMAL
   v CHARACTER
   v PACKED FIELD
   v FIXED(31)
   v FIXED(15)
   v DATE
   v TIME
   v VARCHAR
   v PACKED FLD
   v FLOAT
   v TIMESTAMP
   v UNKNOWN DATA TYPE

LENGTH  INTEGER NOT NULL
   The length of the sort key.

CCSID  INTEGER NOT NULL
   IBM internal use only.

ORDERCLASS  INTEGER NOT NULL
   IBM internal use only.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
DSN_PGRANGE_TABLE
The page range table, DSN_PGRANGE_TABLE, contains information about qualified partitions for all page range scans in a query. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables must be created with CCSID UNICODE. This includes, but is not limited to, mixed data strings that are used for tokens, SQL statements, application names, program names, correlation names, and collection IDs.
CREATE TABLE userid.DSN_PGRANGE_TABLE
     ( QUERYNO       INTEGER      NOT NULL,
       QBLOCKNO      SMALLINT     NOT NULL,
       TABNO         SMALLINT     NOT NULL,
       RANGE         SMALLINT     NOT NULL,
       FIRSTPART     SMALLINT     NOT NULL,
       LASTPART      SMALLINT     NOT NULL,
       NUMPARTS      SMALLINT     NOT NULL,
       EXPLAIN_TIME  TIMESTAMP    NOT NULL,
       GROUP_MEMBER  VARCHAR(24)  NOT NULL
     ) IN database-name.table-space-name CCSID UNICODE;
Column descriptions
The following table describes the columns of DSN_PGRANGE_TABLE.
Table 320. DSN_PGRANGE_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

QBLOCKNO  SMALLINT NOT NULL
   The query block number, a number used to identify each query block within a query.

TABNO  SMALLINT NOT NULL
   The table number, a number which uniquely identifies the corresponding table reference within a query.

RANGE  SMALLINT NOT NULL
   The sequence number of the current page range.

FIRSTPART  SMALLINT NOT NULL
   The starting partition in the current page range.

LASTPART  SMALLINT NOT NULL
   The ending partition in the current page range.

NUMPARTS  SMALLINT NOT NULL
   The number of partitions in the current page range.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
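For example, to verify that page-range screening limited a scan to the partitions you expect, you can list the qualified partitions. This query is a sketch only (userid and the query number 100 are assumptions):

   SELECT QBLOCKNO, TABNO, RANGE, FIRSTPART, LASTPART, NUMPARTS
     FROM userid.DSN_PGRANGE_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, TABNO, RANGE;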
DSN_VIEWREF_TABLE
The view reference table, DSN_VIEWREF_TABLE, contains information about all of the views and materialized query tables that are used to process a query.
Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Column descriptions
The following table describes the columns of DSN_VIEWREF_TABLE.
Table 321. DSN_VIEWREF_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

APPLNAME  VARCHAR(24) NOT NULL
   The application plan name.

PROGNAME  VARCHAR(128) NOT NULL
   The program name (binding an application) or the package name (binding a package).

VERSION  VARCHAR(128)
   The version identifier for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable.
Table 321. DSN_VIEWREF_TABLE description (continued)

COLLID  VARCHAR(128)
   The collection ID for the package. Applies only to an embedded EXPLAIN statement that is executed from a package or to a statement that is explained when binding a package. Blank if not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.

CREATOR  VARCHAR(128)
   Authorization ID of the owner of the object.

NAME  VARCHAR(128)
   Name of the object.

TYPE  CHAR(1)
   The type of the object:
   V  View
   R  MQT that has been used to replace the base table for rewrite
   M  MQT

MQTUSE  SMALLINT
   IBM internal use only.

EXPLAIN_TIME  TIMESTAMP
   The EXPLAIN timestamp.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.
DSN_QUERY_TABLE
The query table, DSN_QUERY_TABLE, contains information about an SQL statement and displays the statement before and after query transformation in XML. Recommendation: Do not manually insert data into or delete data from EXPLAIN tables. The data is intended to be manipulated only by the DB2 EXPLAIN function and various optimization tools.
Column descriptions
The following table describes the columns of DSN_QUERY_TABLE.
Table 322. DSN_QUERY_TABLE description

QUERYNO  INTEGER NOT NULL
   The query number, a number used to help identify the query being explained. It is not a unique identifier; using a negative number causes problems. The possible sources are:
   1  The statement line number in the program
   2  The QUERYNO clause
   3  The EXPLAIN statement
   4  The EDM unique token in the statement cache

TYPE  CHAR(8) NOT NULL
   The type of the data in the NODE_DATA column.

QUERY_STAGE  CHAR(8) NOT NULL WITH DEFAULT
   The stage during query transformation when this row is populated.

SEQNO  INTEGER NOT NULL
   The sequence number for this row, used if NODE_DATA exceeds the size of its column.

NODE_DATA  CLOB(2M)
   The XML data containing the SQL statement and its query block, table, and column information.

EXPLAIN_TIME  TIMESTAMP NOT NULL
   The EXPLAIN timestamp.

QUERY_ROWID  ROWID NOT NULL GENERATED ALWAYS
   The ROWID of the statement.

GROUP_MEMBER  VARCHAR(24) NOT NULL
   The member name of the DB2 subsystem that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

HASHKEY  INTEGER NOT NULL
   The hash value of the contents of NODE_DATA.

HAS_PRED  CHAR(1) NOT NULL
   When NODE_DATA contains an SQL statement, this column indicates whether the statement contains a parameter marker literal, a non-parameter marker literal, or no predicates.
SYSIBM.DSN_VIRTUAL_INDEXES
The virtual indexes table, DSN_VIRTUAL_INDEXES, enables optimization tools to test the effect of creating and dropping indexes on the performance of particular queries.
PSPI
Recommendation: Do not manually insert data into or delete data from this table; it is intended to be used only by optimization tools.
Table 323. SYSIBM.DSN_VIRTUAL_INDEXES description

TBCREATOR  VARCHAR(128)
   The schema or authorization ID of the owner of the table on which the index is being created or dropped.

TBNAME  VARCHAR(128)
   The name of the table on which the index is being created or dropped.

IXCREATOR  VARCHAR(128)
   The schema or authorization ID of the owner of the index.

IXNAME  VARCHAR(128)
   The index name.

ENABLE  CHAR(1)
   Indicates whether this index should be considered in the scenario that is being tested. This column can have one of the following values:
   Y  Use this index.
   N  Do not use this index.
   If this column contains Y, but the index definition is not valid, the index is ignored.

MODE  CHAR(1)
   Indicates whether the index is being created or dropped. This column can have one of the following values:
   C  This index is to be created.
   D  This index is to be dropped.

UNIQUERULE  CHAR(1)
   Indicates whether the index is unique. This column can have one of the following values:
   D  The index is not unique. (Duplicates are allowed.)
   U  This index is unique.

COLCOUNT  SMALLINT
   The number of columns in the index key.
Table 323. SYSIBM.DSN_VIRTUAL_INDEXES description (continued)

CLUSTERING  CHAR(1)
   Indicates whether the index is clustered. This column can have one of the following values:
   Y  The index is clustered.
   N  The index is not clustered.

NLEAF  INTEGER
   The number of active leaf pages in the index. If unknown, the value is -1.

NLEVELS  SMALLINT
   The number of levels in the index tree. If unknown, the value is -1.

INDEXTYPE  CHAR(1)
   The index type. This column can have one of the following values:
   2  The index is a nonpartitioned secondary index.
   D  The index is a data-partitioned secondary index.

PGSIZE  SMALLINT
   The size, in kilobytes, of the leaf pages in the index. This column can have one of the following values: 4, 8, 16, or 32.

FIRSTKEYCARDF  FLOAT
   The number of distinct values of the first key column. If unknown, the value is -1.

FULLKEYCARDF  FLOAT
   The number of distinct values of the key. If unknown, the value is -1.

CLUSTERRATIOF  FLOAT
   The percentage of rows that are in clustering order. Multiply this column value by 100 to get the percent value. For example, a value of .9125 in this column indicates that 91.25% of the rows are in clustering order. If unknown, the value is -1.

PADDED  CHAR(1)
   Indicates whether keys within the index are padded for varying-length column data. This column can have one of the following values:
   Y  The keys are padded.
   N  The keys are not padded.

COLNO1  SMALLINT
   The column number of the first column in the index key.

ORDERING1  CHAR(1)
   Indicates the order of the first column in the index key. This column can have one of the following values:
   A  Ascending
   D  Descending

COLNOn  SMALLINT
   The column number of the nth column in the index key, where n is a number between 2 and 64, including 2 and 64. If the number of index keys is less than n, this column is null.

ORDERINGn  CHAR(1)
   Indicates the order of the nth column in the index key, where n is a number between 2 and 64, including 2 and 64. This column can have one of the following values:
   A  Ascending
   D  Descending
The SQL statement to create DSN_VIRTUAL_INDEXES is in member DSNTESC of the SDSNSAMP library.
PSPI
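For example, to have the optimization tools evaluate a hypothetical two-column index, you might insert a row such as the following. This INSERT is a sketch only: the table, index, and column numbers are invented for illustration, and -1 is used for the statistics columns whose values are unknown.

   INSERT INTO SYSIBM.DSN_VIRTUAL_INDEXES
      (TBCREATOR, TBNAME, IXCREATOR, IXNAME, ENABLE, MODE,
       UNIQUERULE, COLCOUNT, CLUSTERING, NLEAF, NLEVELS, INDEXTYPE,
       PGSIZE, FIRSTKEYCARDF, FULLKEYCARDF, CLUSTERRATIOF, PADDED,
       COLNO1, ORDERING1, COLNO2, ORDERING2)
    VALUES
      ('ADMF001', 'EMP', 'ADMF001', 'EMP_VIX1', 'Y', 'C',
       'D', 2, 'N', -1, -1, '2',
       4, -1, -1, -1, 'N',
       3, 'A', 5, 'A');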
The following sections provide detailed information about the real-time statistics tables:
v Setting up your system for real-time statistics
v Contents of the real-time statistics tables on page 1237
v Operating with real-time statistics on page 1249
For information about a DB2-supplied stored procedure that queries the real-time statistics tables, see The DB2 real-time statistics stored procedure on page 1263.
SYSIBM.INDEXSPACESTATS_IX
To create the real-time statistics objects, you need the authority to create tables and indexes on behalf of the SYSIBM authorization ID. DB2 inserts one row in the table for each partition or non-partitioned table space or index space. You therefore need to calculate the amount of disk space that you need for the real-time statistics tables based on the current number of table spaces and indexes in your subsystem. To determine the amount of storage that you need for the real-time statistics when they are in memory, use the following formula:
Max_concurrent_objects_updated * 152 bytes = Storage_in_bytes
Where Max_concurrent_objects_updated is the peak number of objects that might be updated concurrently and 152 bytes is the amount of in-memory space that DB2 uses for each object. Recommendation: Place the statistics indexes and tables in their own buffer pool. When the statistics pages are in memory, the speed at which in-memory statistics are written to the tables improves.
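For example, if you expect a peak of 10,000 objects to be updated concurrently, DB2 needs approximately 10,000 * 152 = 1,520,000 bytes (about 1.5 MB) of in-memory space for the statistics.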
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

PARTITION  SMALLINT NOT NULL
   The data set number within the table space. This column is used to map a data set number in a table space to its statistics. For partitioned table spaces, this value corresponds to the partition number for a single partition. For nonpartitioned table spaces, this value is 0.

DBID  SMALLINT NOT NULL
   The internal identifier of the database. This column is used to map a DBID to its statistics.

PSID  SMALLINT NOT NULL
   The internal identifier of the table space page set descriptor. This column is used to map a PSID to its statistics.

UPDATESTATSTIME  TIMESTAMP NOT NULL WITH DEFAULT
   The timestamp when the row was inserted or last updated. This column is updated with the current timestamp when a row in the TABLESPACESTATS table is inserted or updated. You can use this column in several ways:
   v To determine the actions that caused the latest change to the table. Do this by selecting any of the timestamp columns and comparing them to the UPDATESTATSTIME column.
   v To determine whether an analysis of data is needed. This determination might be based on a given time interval, or on a combination of the time interval and the amount of activity. For example, suppose that you want to analyze statistics for the last seven days. To determine whether there has been any activity in the past seven days, check whether the difference between the current date and the UPDATESTATSTIME value is less than or equal to seven:
   (JULIAN_DAY(CURRENT DATE)-JULIAN_DAY(UPDATESTATSTIME))<=7

TOTALROWS  FLOAT
   The number of rows or LOBs in the table space or partition. If the table space contains more than one table, this value is the sum of all rows in all tables. A null value means that the number of rows is unknown or that REORG or LOAD has never been run. Use the TOTALROWS value with the value of any column that contains some affected rows to determine the percentage of rows that are affected by a particular action.

NACTIVE  INTEGER
   The number of active pages in the table space or partition. A null value means that the number of active pages is unknown. This value is equivalent to the number of preformatted pages. For multi-piece table spaces, this value is the total number of preformatted pages in all data sets. Use the NACTIVE value with the value of any column that contains some affected pages to determine the percentage of pages that are affected by a particular action. For example, suppose that your site's maintenance policies require that COPY is to be run after 20% of the pages in a table space have changed. To determine if a COPY might be required, calculate the ratio of updated pages since the last COPY to the total number of active pages. If the percentage is greater than 20, you need to run COPY:
   ((COPYUPDATEDPAGES*100)/NACTIVE)>20
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

SPACE  INTEGER
   The amount of space, in KB, that is allocated to the table space or partition. For multi-piece linear page sets, this value is the amount of space in all data sets. A null value means the amount of space is unknown. Use this value to monitor growth and validate design assumptions.

EXTENTS  SMALLINT
   The number of extents in the table space or partition. For multi-piece table spaces, this value is the number of extents for the last data set. For a data set that is striped across multiple volumes, the value is the number of logical extents. A null value means that the number of extents is unknown. Use this value to determine:
   v When the primary or secondary allocation value for a table space or partition needs to be altered.
   v When you are approaching the maximum number of extents and risking extend failures.

LOADRLASTTIME  TIMESTAMP
   The timestamp of the last LOAD REPLACE on the table space or partition. A null value means that LOAD REPLACE has never been run on the table space or partition or that the timestamp of the last LOAD REPLACE is unknown. You can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last LOAD REPLACE is more recent than the last COPY, you might need to run COPY:
   (JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGLASTTIME  TIMESTAMP
   The timestamp of the last REORG on the table space or partition. A null value means REORG has never been run on the table space or partition or that the timestamp of the last REORG is unknown. You can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY:
   (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGINSERTS  INTEGER
   The number of records or LOBs that have been inserted since the last REORG or LOAD REPLACE on the table space or partition. A null value means that the number of inserted records or LOBs is unknown.

REORGDELETES  INTEGER
   The number of records or LOBs that have been deleted since the last REORG or LOAD REPLACE on the table space or partition. A null value means that the number of deleted records or LOBs is unknown.
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

REORGUPDATES  INTEGER
   The number of rows that have been updated since the last REORG or LOAD REPLACE on the table space or partition. This value does not include LOB updates because LOB updates are really deletions followed by insertions. A null value means that the number of updated rows is unknown. This value can be used with REORGDELETES and REORGINSERTS to determine if a REORG is necessary. For example, suppose that your site's maintenance policies require that REORG is run after 20 percent of the rows in a table space have changed. To determine if a REORG is required, calculate the sum of updated, inserted, and deleted rows since the last REORG. Then calculate the ratio of that sum to the total number of rows. If the percentage is greater than 20, you might need to run REORG:
   (((REORGINSERTS+REORGDELETES+REORGUPDATES)*100)/TOTALROWS)>20

REORGDISORGLOB  INTEGER
   The number of LOBs that were inserted since the last REORG or LOAD REPLACE that are not perfectly chunked. A LOB is perfectly chunked if the allocated pages are in the minimum number of chunks. A null value means that the number of imperfectly chunked LOBs is unknown. Use this value to determine whether you need to run REORG. For example, you might want to run REORG if the ratio of REORGDISORGLOB to the total number of LOBs is greater than 10%:
   ((REORGDISORGLOB*100)/TOTALROWS)>10

REORGUNCLUSTINS  INTEGER
   The number of records that were inserted since the last REORG or LOAD REPLACE that are not well-clustered with respect to the clustering index. A record is well-clustered if the record is inserted into a page that is within 16 pages of the ideal candidate page. The clustering index determines the ideal candidate page. A null value means that the number of badly-clustered pages is unknown. You can use this value to determine whether you need to run REORG. For example, you might want to run REORG if the following comparison is true:
   ((REORGUNCLUSTINS*100)/TOTALROWS)>10

REORGMASSDELETE  INTEGER
   The number of mass deletes from a segmented or LOB table space, or the number of dropped tables from a segmented table space, since the last REORG or LOAD REPLACE. A null value means that the number of mass deletes is unknown. If this value is non-zero, a REORG might be necessary.

REORGNEARINDREF  INTEGER
   The number of overflow records that were created since the last REORG or LOAD REPLACE and were relocated near the pointer record. For nonsegmented table spaces, a page is near the present page if the two page numbers differ by 16 or less. For segmented table spaces, a page is near the present page if the two page numbers differ by SEGSIZE*2 or less. A null value means that the number of overflow records near the pointer record is unknown.
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

REORGFARINDREF  INTEGER
   The number of overflow records that were created since the last REORG or LOAD REPLACE and were relocated far from the pointer record. For nonsegmented table spaces, a page is far from the present page if the two page numbers differ by more than 16. For segmented table spaces, a page is far from the present page if the two page numbers differ by at least (SEGSIZE*2)+1. A null value means that the number of overflow records far from the pointer record is unknown. For example, in a non-data sharing environment, you might run REORG if the following comparison is true:
   (((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS)>10
   In a data sharing environment, you might run REORG if the following comparison is true:
   (((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS)>5

STATSLASTTIME  TIMESTAMP
   The timestamp of the last RUNSTATS on the table space or partition. A null value means that RUNSTATS has never been run on the table space or partition, or that the timestamp of the last RUNSTATS is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when RUNSTATS is needed. If the date of the last REORG is more recent than the last RUNSTATS, you might need to run RUNSTATS:
   (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(STATSLASTTIME))

STATSINSERTS  INTEGER
   The number of records or LOBs that have been inserted since the last RUNSTATS on the table space or partition. A null value means that the number of inserted records or LOBs is unknown.

STATSDELETES  INTEGER
   The number of records or LOBs that have been deleted since the last RUNSTATS on the table space or partition. A null value means that the number of deleted records or LOBs is unknown.

STATSUPDATES  INTEGER
   The number of rows that have been updated since the last RUNSTATS on the table space or partition. This value does not include LOB updates because LOB updates are really deletions followed by insertions. A null value means that the number of updated rows is unknown. This value can be used with STATSDELETES and STATSINSERTS to determine if RUNSTATS is necessary. For example, suppose that your site's maintenance policies require that RUNSTATS is to be run after 20% of the rows in a table space have changed. To determine if RUNSTATS is required, calculate the sum of updated, inserted, and deleted rows since the last RUNSTATS. Then calculate the ratio of that sum to the total number of rows. If the percentage is greater than 20, you need to run RUNSTATS:
   (((STATSINSERTS+STATSDELETES+STATSUPDATES)*100)/TOTALROWS)>20
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

STATSMASSDELETE
INTEGER
The number of mass deletes from a segmented or LOB table space, or the number of dropped tables from a segmented table space, since the last RUNSTATS. A null value means that the number of mass deletes is unknown. If this value is non-zero, RUNSTATS might be necessary.

COPYLASTTIME
TIMESTAMP
The timestamp of the last full or incremental image copy on the table space or partition. A null value means that COPY has never been run on the table space or partition, or that the timestamp of the last full image copy is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY: (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

COPYUPDATEDPAGES
INTEGER
The number of distinct pages that have been updated since the last COPY. A null value means that the number of updated pages is unknown. You can compare this value to the total number of pages to determine when a COPY is needed. For example, you might want to take an incremental image copy when 1% of the pages have changed: ((COPYUPDATEDPAGES*100)/NACTIVE)>1 You might want to take a full image copy when 20% of the pages have changed: ((COPYUPDATEDPAGES*100)/NACTIVE)>20

COPYCHANGES
INTEGER
The number of insert, delete, and update operations since the last COPY. A null value means that the number of insert, delete, or update operations is unknown. This number indicates the approximate number of log records that DB2 needs to process to recover to the current state. For example, you might want to take an incremental image copy when DB2 processes more than 1% of the rows from the logs: ((COPYCHANGES*100)/TOTALROWS)>1 You might want to take a full image copy when DB2 processes more than 10% of the rows from the logs: ((COPYCHANGES*100)/TOTALROWS)>10
Table 325. Descriptions of columns in the TABLESPACESTATS table (continued)

COPYUPDATELRSN
CHAR(6) FOR BIT DATA
The LRSN or RBA of the first update after the last COPY. A null value means that the LRSN or RBA is unknown. Consider running COPY if this value is not in the active logs. To determine the oldest LRSN or RBA in the active logs, use the print log map utility (DSNJU004).

COPYUPDATETIME
TIMESTAMP
The timestamp of the first update after the last COPY. A null value means that the timestamp is unknown.
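The guidelines in Table 325 translate directly into queries against SYSIBM.TABLESPACESTATS. The following query is a sketch that combines two of the 10-percent REORG guidelines shown above; the TOTALROWS > 0 predicate is added here only to avoid division by zero:

   -- Flag table spaces or partitions that exceed the REORG guidelines for
   -- badly-clustered inserts or for overflow (indirect) references.
   SELECT DBNAME, NAME, PARTITION
     FROM SYSIBM.TABLESPACESTATS
    WHERE TOTALROWS > 0
      AND ((REORGUNCLUSTINS * 100) / TOTALROWS > 10
       OR ((REORGNEARINDREF + REORGFARINDREF) * 100) / TOTALROWS > 10);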
Table 326 describes the columns of the INDEXSPACESTATS table and explains how you can use them in deciding when to run REORG, RUNSTATS, or COPY.
Table 326. Descriptions of columns in the INDEXSPACESTATS table

DBNAME
CHAR(8) NOT NULL
The name of the database. This column is used to map a database to its statistics.

NAME
CHAR(8) NOT NULL
The name of the index space. This column is used to map an index space to its statistics.

PARTITION
SMALLINT NOT NULL
The data set number within the index space. This column is used to map a data set number in an index space to its statistics. For partitioned index spaces, this value corresponds to the partition number for a single partition. For nonpartitioned index spaces, this value is 0.

DBID
SMALLINT NOT NULL
The internal identifier of the database. This column is used to map a DBID to its statistics.

ISOBID
SMALLINT NOT NULL
The internal identifier of the index space page set descriptor. This column is used to map an ISOBID to its statistics.

PSID
SMALLINT NOT NULL
The internal identifier of the table space page set descriptor for the table space that is associated with the index that is represented by this row. This column is used to map a PSID to the statistics for the associated index.
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

UPDATESTATSTIME
TIMESTAMP NOT NULL WITH DEFAULT
The timestamp when the row was inserted or last updated. This column is updated with the current timestamp when a row in the INDEXSPACESTATS table is inserted or updated. You can use this column in several ways:
v To determine the actions that caused the latest change to the INDEXSPACESTATS table. Do this by selecting any of the timestamp columns and comparing them to the UPDATESTATSTIME column.
v To determine whether an analysis of data is needed. This determination might be based on a given time interval, or on a combination of the time interval and the amount of activity. For example, suppose that you want to analyze statistics for the last seven days. To determine whether any activity has occurred in the past seven days, check whether the difference between the current date and the UPDATESTATSTIME value is less than or equal to seven: (JULIAN_DAY(CURRENT DATE)-JULIAN_DAY(UPDATESTATSTIME))<=7

TOTALENTRIES
FLOAT
The number of entries, including duplicate entries, in the index space or partition. A null value means that the number of entries is unknown, or that REORG, LOAD, or REBUILD has never been run. Use this value with the value of any column that contains a number of affected index entries to determine the percentage of index entries that are affected by a particular action.

NLEVELS
SMALLINT
The number of levels in the index tree. A null value means that the number of levels is unknown.

NACTIVE
INTEGER
The number of active pages in the index space or partition. This value is equivalent to the number of preformatted pages. A null value means that the number of active pages is unknown. Use this value with the value of any column that contains a number of affected pages to determine the percentage of pages that are affected by a particular action. For example, suppose that your site's maintenance policies require that COPY is to be run after 20% of the pages in an index space have changed. To determine if a COPY is required, calculate the ratio of updated pages since the last COPY to the total number of active pages. If the percentage is greater than 20, you need to run COPY: ((COPYUPDATEDPAGES*100)/NACTIVE)>20

SPACE
INTEGER
The amount of space, in KB, that is allocated to the index space or partition. For multi-piece linear page sets, this value is the amount of space in all data sets. A null value means that the amount of space is unknown. Use this value to monitor growth and validate design assumptions.
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

EXTENTS
SMALLINT
The number of extents in the index space or partition. For multi-piece index spaces, this value is the number of extents for the last data set. For a data set that is striped across multiple volumes, the value is the number of logical extents. A null value means that the number of extents is unknown. Use this value to determine:
v When the primary allocation value for an index space or partition needs to be altered.
v When you are approaching the maximum number of extents and risking extend failures.

LOADRLASTTIME
TIMESTAMP
The timestamp of the last LOAD REPLACE on the index space or partition. A null value means that the timestamp of the last LOAD REPLACE is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last LOAD REPLACE is more recent than the last COPY, you might need to run COPY: (JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REBUILDLASTTIME
TIMESTAMP
The timestamp of the last REBUILD INDEX on the index space or partition. A null value means that the timestamp of the last REBUILD INDEX is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REBUILD INDEX is more recent than the last COPY, you might need to run COPY: (JULIAN_DAY(REBUILDLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGLASTTIME
TIMESTAMP
The timestamp of the last REORG INDEX on the index space or partition. A null value means that the timestamp of the last REORG INDEX is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REORG INDEX is more recent than the last COPY, you might need to run COPY: (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

REORGINSERTS
INTEGER
The number of index entries that have been inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A null value means that the number of inserted index entries is unknown.

REORGDELETES
INTEGER
The number of index entries that have been deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A null value means that the number of deleted index entries is unknown. This value can be used with REORGINSERTS to determine if a REORG is necessary. For example, suppose that your site's maintenance policies require that REORG is to be run after 20% of the index entries have changed. To determine if a REORG is required, calculate the sum of inserted and deleted rows since the last REORG. Then calculate the ratio of that sum to the total number of index entries. If the percentage is greater than 20, you need to run REORG: (((REORGINSERTS+REORGDELETES)*100)/TOTALENTRIES)>20

REORGAPPENDINSERT
INTEGER
The number of index entries that have been inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition that have a key value that is greater than the maximum key value in the index or partition. A null value means that the number of inserted index entries is unknown. This value can be used with REORGINSERTS to decide when to adjust the PCTFREE specification for the index. For example, if the ratio of REORGAPPENDINSERT to REORGINSERTS is greater than 10%, you might need to run ALTER INDEX to adjust PCTFREE or to run REORG more frequently: ((REORGAPPENDINSERT*100)/REORGINSERTS)>10

REORGPSEUDODELETES
INTEGER
The number of index entries that have been pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A pseudo-delete is a RID entry that has been marked as deleted. A null value means that the number of pseudo-deleted index entries is unknown. This value can be used to determine if a REORG is necessary. For example, if the ratio of pseudo-deletes to total index entries is greater than 10%, you might need to run REORG: ((REORGPSEUDODELETES*100)/TOTALENTRIES)>10

REORGMASSDELETE
INTEGER
The number of times that an index or index space partition was mass deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE. A null value means that the number of mass deletes is unknown. If this value is non-zero, a REORG might be necessary.
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

REORGLEAFNEAR
INTEGER
The net number of leaf pages that are located physically near the previous leaf page, for successive active leaf pages, since the last REORG, REBUILD INDEX, or LOAD REPLACE. The distance between leaf pages is optimal if the difference between their page numbers is 1, and is considered near if the distance is 2 to 16. When an index page is added during a page split, the distance between the predecessor and successor pages can decrement this count if that distance was near; the distance between the predecessor and the new page increments the count if they are near, and the distance between the new page and the successor increments the count if they are near. When a leaf page is deleted, the distance between the new predecessor and successor pages can increment this count if that distance is near; the distance between the predecessor and the deleted page decrements the count if it was near, and the distance between the successor and the deleted page decrements the count if it was near. A null value means that the value is unknown. A negative value is possible in some cases.

REORGLEAFFAR
INTEGER
The net number of leaf pages that are located physically far away from the previous leaf page, for successive active leaf pages, since the last REORG, REBUILD INDEX, or LOAD REPLACE. The distance between leaf pages is optimal if the difference between their page numbers is 1, and is considered far if the distance is greater than 16. When an index page is added during a page split, the distance between the predecessor and successor pages can decrement this count if that distance was far; the distance between the predecessor and the new page increments the count if they are far, and the distance between the new page and the successor increments the count if they are far. When a leaf page is deleted, the distance between the new predecessor and successor pages can increment this count if that distance is far; the distance between the predecessor and the deleted page decrements the count if it was far, and the distance between the successor and the deleted page decrements the count if it was far. A null value means that the value is unknown. This value can be used to decide when to run REORG. For example, calculate the ratio of relocated far leaf pages to the number of active pages. If this value is greater than 10 percent, you might need to run REORG: ((REORGLEAFFAR*100)/NACTIVE)>10
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

REORGNUMLEVELS
INTEGER
The number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. A null value means that the number of added or deleted levels is unknown. If this value has increased since the last REORG, REBUILD INDEX, or LOAD REPLACE, you need to check other values such as REORGPSEUDODELETES to determine whether to run REORG. If this value is less than zero, the index space contains empty pages. Running REORG can save disk space and decrease index sequential scan I/O time by eliminating those empty pages.

STATSLASTTIME
TIMESTAMP
The timestamp of the last RUNSTATS on the index space or partition. A null value means that RUNSTATS has never been run on the index space or partition, or that the timestamp of the last RUNSTATS is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when RUNSTATS is needed. If the date of the last REORG is more recent than the last RUNSTATS, you might need to run RUNSTATS: (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(STATSLASTTIME))

STATSINSERTS
INTEGER
The number of index entries that have been inserted since the last RUNSTATS on the index space or partition. A null value means that the number of inserted index entries is unknown.

STATSDELETES
INTEGER
The number of index entries that have been deleted since the last RUNSTATS on the index space or partition. A null value means that the number of deleted index entries is unknown. This value can be used with STATSINSERTS to determine if RUNSTATS is necessary. For example, suppose that your site's maintenance policies require that RUNSTATS is run after 20% of the rows in an index space have changed. To determine if RUNSTATS is required, calculate the sum of inserted and deleted index entries since the last RUNSTATS. Then calculate the ratio of that sum to the total number of index entries. If the percentage is greater than 20, you need to run RUNSTATS: (((STATSINSERTS+STATSDELETES)*100)/TOTALENTRIES)>20

STATSMASSDELETE
INTEGER
The number of times that the index or index space partition was mass deleted since the last RUNSTATS. A null value means that the number of mass deletes is unknown. If this value is non-zero, RUNSTATS might be necessary.
Table 326. Descriptions of columns in the INDEXSPACESTATS table (continued)

COPYLASTTIME
TIMESTAMP
The timestamp of the last full image copy on the index space or partition. A null value means that COPY has never been run on the index space or partition, or that the timestamp of the last full image copy is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY: (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

COPYUPDATEDPAGES
INTEGER
The number of distinct pages that have been updated since the last COPY. A null value means that the number of updated pages is unknown, or that the index was created with COPY NO. You can compare this value to the total number of pages to determine when a COPY is needed. For example, you might want to take a full image copy when 20% of the pages have changed: ((COPYUPDATEDPAGES*100)/NACTIVE)>20

COPYCHANGES
INTEGER
The number of insert or delete operations since the last COPY. A null value means that the number of insert or delete operations is unknown, or that the index was created with COPY NO. This number indicates the approximate number of log records that DB2 needs to process to recover to the current state. For example, you might want to take a full image copy when DB2 processes more than 10% of the index entries from the logs: ((COPYCHANGES*100)/TOTALENTRIES)>10

COPYUPDATELRSN
CHAR(6) FOR BIT DATA
The LRSN or RBA of the first update after the last COPY. A null value means that the LRSN or RBA is unknown, or that the index was created with COPY NO. Consider running COPY if this value is not in the active logs. To determine the oldest LRSN or RBA in the active logs, use the print log map utility (DSNJU004).

COPYUPDATETIME
TIMESTAMP
The timestamp of the first update after the last COPY. A null value means that the timestamp is unknown, or that the index was created with COPY NO.
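The INDEXSPACESTATS guidelines can be combined into a monitoring query in the same way. The following sketch applies the 10-percent thresholds from the REORGPSEUDODELETES and REORGLEAFFAR descriptions above; the nonzero predicates only guard against division by zero:

   -- Flag index spaces or partitions that exceed the REORG guidelines for
   -- pseudo-deleted entries or for far-off leaf pages.
   SELECT DBNAME, NAME, PARTITION
     FROM SYSIBM.INDEXSPACESTATS
    WHERE (TOTALENTRIES > 0
           AND (REORGPSEUDODELETES * 100) / TOTALENTRIES > 10)
       OR (NACTIVE > 0
           AND (REORGLEAFFAR * 100) / NACTIVE > 10);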
v How DB2 utilities affect the real-time statistics
v How non-DB2 utilities affect real-time statistics on page 1257
v Real-time statistics on objects in work file databases and the TEMP database on page 1258
v Real-time statistics on read-only or nonmodified objects on page 1258
v How dropping objects affects real-time statistics on page 1258
v How SQL operations affect real-time statistics counters on page 1258
v Real-time statistics in data sharing on page 1259
v Improving concurrency with real-time statistics on page 1260
v Recovering the real-time statistics tables on page 1260
v Statistics accuracy on page 1260
Table 328 shows how running LOAD REPLACE affects the INDEXSPACESTATS statistics for an index space or physical index partition.
Table 328. Changed INDEXSPACESTATS values during LOAD REPLACE

Column name          Settings for LOAD REPLACE after BUILD phase
TOTALENTRIES         Number of index entries added (note 1)
NLEVELS              Actual value
NACTIVE              Actual value
SPACE                Actual value
EXTENTS              Actual value
LOADRLASTTIME        Current timestamp
REORGINSERTS         0
REORGDELETES         0
REORGAPPENDINSERT    0
REORGPSEUDODELETES   0
REORGMASSDELETE      0
REORGLEAFNEAR        0
REORGLEAFFAR         0
REORGNUMLEVELS       0
STATSLASTTIME        Current timestamp (note 2)
STATSINSERTS         0 (note 2)
STATSDELETES         0 (note 2)
STATSMASSDELETE      0 (note 2)
COPYLASTTIME         Current timestamp (note 3)
COPYUPDATEDPAGES     0 (note 3)
COPYCHANGES          0 (note 3)
COPYUPDATELRSN       Null (note 3)
COPYUPDATETIME       Null (note 3)

Notes:
1. Under certain conditions, such as a utility restart, the LOAD utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null.
2. DB2 sets this value only if the LOAD invocation includes the STATISTICS option.
3. DB2 sets this value only if the LOAD invocation includes the COPYDDN option.
For a logical index partition:
v A LOAD operation without the REPLACE option behaves similarly to an SQL INSERT operation in that the number of records that are loaded is counted in the incremental counters, such as REORGINSERTS, REORGAPPENDINSERT, STATSINSERTS, and COPYCHANGES. A LOAD operation without the REPLACE option affects the organization of the data and can be a trigger to run REORG, RUNSTATS, or COPY.
v DB2 does not reset the nonpartitioned index when it does a LOAD REPLACE on a partition. Therefore, DB2 does not reset the statistics for the index. The REORG counters from the last REORG are still correct. DB2 updates LOADRLASTTIME when the entire nonpartitioned index is replaced.
v When DB2 does a LOAD RESUME YES on a partition, after the BUILD phase, DB2 increments TOTALENTRIES by the number of index entries that were inserted during the BUILD phase.
Table 329 shows how running REORG affects the TABLESPACESTATS statistics for a table space or partition.

Table 329. Changed TABLESPACESTATS values during REORG

Column name          Settings for REORG SHRLEVEL NONE     Settings for REORG SHRLEVEL REFERENCE
                     after RELOAD phase                   or CHANGE after SWITCH phase
TOTALROWS            Number of loaded rows (note 1)       For SHRLEVEL REFERENCE: Number of loaded
                                                          rows during RELOAD phase
                                                          For SHRLEVEL CHANGE: Number of loaded rows
                                                          during RELOAD phase plus number of inserted
                                                          rows during LOG phase minus number of
                                                          deleted rows during LOG phase
NACTIVE              Actual value                         Actual value
SPACE                Actual value                         Actual value
EXTENTS              Actual value                         Actual value
REORGLASTTIME        Current timestamp                    Current timestamp
REORGINSERTS         0                                    Actual value (note 2)
REORGDELETES         0                                    Actual value (note 2)
REORGUPDATES         0                                    Actual value (note 2)
REORGUNCLUSTINS      0                                    Actual value (note 2)
REORGDISORGLOB       0                                    Actual value (note 2)
REORGMASSDELETE      0                                    Actual value (note 2)
REORGNEARINDREF      0                                    Actual value (note 2)
REORGFARINDREF       0                                    Actual value (note 2)
STATSLASTTIME        Current timestamp (note 3)           Current timestamp (note 3)
STATSINSERTS         0 (note 3)                           Actual value (note 2)
STATSDELETES         0 (note 3)                           Actual value (note 2)
STATSUPDATES         0 (note 3)                           Actual value (note 2)
STATSMASSDELETE      0 (note 3)                           Actual value (note 2)
COPYLASTTIME         Current timestamp (note 4)           Current timestamp
COPYUPDATEDPAGES     0 (note 4)                           Actual value (note 2)
COPYCHANGES          0 (note 4)                           Actual value (note 2)
COPYUPDATELRSN       Null (note 4)                        Actual value (note 5)
COPYUPDATETIME       Null (note 4)                        Actual value (note 5)

Notes:
1. Under certain conditions, such as a utility restart, the REORG utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null.
2. This is the actual number of inserts, updates, or deletes that are due to applying the log to the shadow copy.
3. DB2 sets this value only if the REORG invocation includes the STATISTICS option.
4. DB2 sets this value only if the REORG invocation includes the COPYDDN option.
5. REORG SHRLEVEL REFERENCE or CHANGE takes an inline image copy, so DB2 sets this value based on that inline copy.
Table 330 shows how running REORG affects the INDEXSPACESTATS statistics for an index space or physical index partition.
Table 330. Changed INDEXSPACESTATS values during REORG

Column name          Settings for REORG SHRLEVEL NONE     Settings for REORG SHRLEVEL REFERENCE
                     after RELOAD phase                   or CHANGE after SWITCH phase
TOTALENTRIES         Number of index entries added        For SHRLEVEL REFERENCE: Number of added
                     (note 1)                             index entries during BUILD phase
                                                          For SHRLEVEL CHANGE: Number of added index
                                                          entries during BUILD phase plus number of
                                                          added index entries during LOG phase minus
                                                          number of deleted index entries during LOG
                                                          phase
NLEVELS              Actual value                         Actual value
NACTIVE              Actual value                         Actual value
SPACE                Actual value                         Actual value
EXTENTS              Actual value                         Actual value
REORGLASTTIME        Current timestamp                    Current timestamp
REORGINSERTS         0                                    Actual value (note 2)
REORGDELETES         0                                    Actual value (note 2)
REORGAPPENDINSERT    0                                    Actual value (note 2)
REORGPSEUDODELETES   0                                    Actual value (note 2)
REORGMASSDELETE      0                                    Actual value (note 2)
REORGLEAFNEAR        0                                    Actual value (note 2)
REORGLEAFFAR         0                                    Actual value (note 2)
REORGNUMLEVELS       0                                    Actual value (note 2)
STATSLASTTIME        Current timestamp (note 3)           Current timestamp (note 3)
STATSINSERTS         0 (note 3)                           Actual value (note 2)
STATSDELETES         0 (note 3)                           Actual value (note 2)
STATSMASSDELETE      0 (note 3)                           Actual value (note 2)
COPYLASTTIME         Current timestamp (note 4)           Unchanged (note 5)
COPYUPDATEDPAGES     0 (note 4)                           Unchanged (note 5)
COPYCHANGES          0 (note 4)                           Unchanged (note 5)
COPYUPDATELRSN       Null (note 4)                        Unchanged (note 5)
COPYUPDATETIME       Null (note 4)                        Unchanged (note 5)

Notes:
1. Under certain conditions, such as a utility restart, the REORG utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null.
2. This is the actual number of inserts, updates, or deletes that are due to applying the log to the shadow copy.
3. DB2 sets this value only if the REORG invocation includes the STATISTICS option.
4. DB2 sets this value only if the REORG invocation includes the COPYDDN option.
5. Inline COPY is not allowed for SHRLEVEL CHANGE or SHRLEVEL REFERENCE.
For a logical index partition, DB2 does not reset the nonpartitioned index when it does a REORG on a partition. Therefore, DB2 does not reset the statistics for the index. The REORG counters and REORGLASTTIME are relative to the last time that the entire nonpartitioned index was reorganized. In addition, the REORG counters might be low because the REORG of a single partition itself changes some index entries.
For a logical index partition, DB2 does not collect TOTALENTRIES statistics for the entire nonpartitioned index when it runs REBUILD INDEX. Therefore, DB2 does not reset the statistics for the index. The REORG counters from the last REORG are still correct. DB2 updates REBUILDLASTTIME when the entire nonpartitioned index is rebuilt.
Table 332. Changed TABLESPACESTATS values during RUNSTATS UPDATE ALL (continued)

Column name          During UTILINIT phase        After RUNSTATS phase
STATSINSERTS         Actual value (note 1)        Actual value (note 2)
STATSDELETES         Actual value (note 1)        Actual value (note 2)
STATSUPDATES         Actual value (note 1)        Actual value (note 2)
STATSMASSDELETE      Actual value (note 1)        Actual value (note 2)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Table 333 shows how running RUNSTATS UPDATE ALL on an index affects the INDEXSPACESTATS statistics.
Table 333. Changed INDEXSPACESTATS values during RUNSTATS UPDATE ALL

Column name          During UTILINIT phase        After RUNSTATS phase
STATSLASTTIME        Current timestamp            Timestamp of the start of RUNSTATS phase
STATSINSERTS         Actual value (note 1)        Actual value (note 2)
STATSDELETES         Actual value (note 1)        Actual value (note 2)
STATSMASSDELETE      Actual value (note 1)        Actual value (note 2)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Table 334. Changed TABLESPACESTATS values during COPY (continued)

Column name          During UTILINIT phase        After COPY phase
COPYLASTTIME         Current timestamp            Timestamp of the start of COPY phase
COPYUPDATEDPAGES     Actual value (note 1)        Actual value (note 2)
COPYCHANGES          Actual value (note 1)        Actual value (note 2)
COPYUPDATELRSN       Actual value (note 1)        Actual value (note 3)
COPYUPDATETIME       Actual value (note 1)        Actual value (note 3)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Table 335 shows how running COPY on an index affects the INDEXSPACESTATS statistics.
Table 335. Changed INDEXSPACESTATS values during COPY

Column name          During UTILINIT phase        After COPY phase
COPYLASTTIME         Current timestamp            Timestamp of the start of COPY phase
COPYUPDATEDPAGES     Actual value (note 1)        Actual value (note 2)
COPYCHANGES          Actual value (note 1)        Actual value (note 2)
COPYUPDATELRSN       Actual value (note 1)        Actual value (note 3)
COPYUPDATETIME       Actual value (note 1)        Actual value (note 3)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Real-time statistics on objects in work file databases and the TEMP database
Although you cannot run utilities on objects in the work file databases and the TEMP database, DB2 records the NACTIVE, SPACE, and EXTENTS statistics on table spaces in those databases.
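For example, the following sketch monitors space usage for work file table spaces. It assumes the default work file database name DSNDB07; the name differs in data sharing environments:

   -- List the recorded statistics for work file table spaces, largest first.
   SELECT DBNAME, NAME, PARTITION, NACTIVE, SPACE, EXTENTS
     FROM SYSIBM.TABLESPACESTATS
    WHERE DBNAME = 'DSNDB07'        -- assumed work file database name
    ORDER BY SPACE DESC;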
Notice that for INSERT and DELETE, the counter for the inverse operation is incremented. For example, if two INSERT statements are rolled back, the delete counter is incremented by 2.
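The following sketch illustrates that counting behavior with a hypothetical table MYTABLE:

   -- Two inserts increment the insert counters when they are executed.
   INSERT INTO MYTABLE (COL1) VALUES (1);
   INSERT INTO MYTABLE (COL1) VALUES (2);
   -- The rollback does not decrement the insert counters; instead, it
   -- increments the delete counters (the inverse operation) by 2.
   ROLLBACK;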
UPDATE of partitioning keys: If an update to a partitioning key causes rows to move to a new partition, the following real-time statistics are impacted:
Action                        Incremented counters
When UPDATE is executed       Update count of old partition = +1
                              Insert count of new partition = +1
                              Delete count of old partition = +1
When UPDATE is committed      None
When UPDATE is rolled back    Update count of old partition = +1 (compensation log record)
                              Delete count of new partition = +1 (remove inserted record)
If an update to a partitioning key does not cause rows to move to a new partition, the counts are accumulated as expected:
Action                        Incremented counters
When UPDATE is executed       Update count of current partition = +1
                              NEAR/FAR indirect reference count = +1 (if overflow occurred)
When UPDATE is rolled back    Update count of current partition = +1 (compensation log record)
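The following sketch shows an update of a partitioning key on a hypothetical table MYTABLE that is partitioned on SALES_DATE. If the new value falls in a different partition, the counters of both partitions change as described in the first table above:

   -- If '2009-06-30' belongs to a different partition than the current
   -- SALES_DATE value, DB2 increments the update and delete counts of the
   -- old partition and the insert count of the new partition.
   UPDATE MYTABLE
      SET SALES_DATE = '2009-06-30'
    WHERE ORDER_ID = 12345;
   COMMIT;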
Mass DELETE: Performing a mass delete operation on a table space does not cause DB2 to reset the counter columns in the real-time statistics tables. After a mass delete operation, the value in a counter column includes the count from a time prior to the mass delete operation, as well as the count after the mass delete operation.
invalidated. If the notify process fails, the utility that resets the page set does not fail. DB2 sets the appropriate timestamp (REORGLASTTIME, STATSLASTTIME, or COPYLASTTIME) to null in the row for the empty page set to indicate that the statistics for that page set are unknown.
Statistics accuracy
In general, the real-time statistics are accurate values. However, several factors can affect the accuracy of the statistics:
v Certain utility restart scenarios
v Certain utility operations that leave indexes in a database restrictive state, such as RECOVER-pending (RECP). Always consider the database restrictive state of objects before accepting a utility recommendation that is based on real-time statistics.
v A DB2 subsystem failure
v A notify failure in a data sharing environment
If you think that some statistics values might be inaccurate, you can correct the statistics by running REORG, RUNSTATS, or COPY on the objects for which DB2 generated the statistics.
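One way to find objects whose statistics might need to be corrected is to look for unknown (null) timestamps. The following query is a sketch; any object that it returns is a candidate for REORG, RUNSTATS, or COPY:

   -- Find table spaces whose last-utility timestamps are unknown.
   SELECT DBNAME, NAME, PARTITION
     FROM SYSIBM.TABLESPACESTATS
    WHERE REORGLASTTIME IS NULL     -- last REORG unknown
       OR STATSLASTTIME IS NULL     -- last RUNSTATS unknown
       OR COPYLASTTIME IS NULL;     -- last COPY unknown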
This stored procedure allows DB2 applications to invoke IMS transactions and commands easily, without maintaining their own connections to IMS. See IMS transactions stored procedure (DSNAIMS) on page 1298 for more information.
v The IMS transactions stored procedure with multi-segment input support (DSNAIMS2)
This stored procedure offers the same functionality as the DSNAIMS stored procedure, with the addition of multi-segment input support for IMS transactions. See IMS transactions stored procedure (DSNAIMS2) on page 1302 for more information.
v The EXPLAIN stored procedure (DSN8EXP)
This stored procedure allows a user to perform an EXPLAIN on an SQL statement without having the authorization to execute that SQL statement. See The DB2 EXPLAIN stored procedure on page 1306 for more information.
v The MQ XML stored procedures
All of the MQ XML stored procedures have been deprecated. These stored procedures perform the following functions:
Table 336. MQ XML stored procedures

For information about each of these stored procedures, see DB2 Application Programming and SQL Guide.

DXXMQINSERT
Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection.

DXXMQSHRED
Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHRED does not require an enabled XML collection.

DXXMQINSERTCLOB
Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTCLOB is intended for an XML document with a length of up to 1MB.

DXXMQSHREDCLOB
Returns a message that contains an XML document from an MQ message queue, decomposes the document, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDCLOB does not require an enabled XML collection. DXXMQSHREDCLOB is intended for an XML document with a length of up to 1MB.

DXXMQINSERTALL
Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTALL is intended for XML documents with a length of up to 3KB.
Table 336. MQ XML stored procedures (continued)

DXXMQSHREDALL
Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALL does not require an enabled XML collection. DXXMQSHREDALL is intended for XML documents with a length of up to 3KB.

DXXMQSHREDALLCLOB
Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified in a document access definition (DAD) file. DXXMQSHREDALLCLOB does not require an enabled XML collection. DXXMQSHREDALLCLOB is intended for XML documents with a length of up to 1MB.

DXXMQINSERTALLCLOB
Returns messages that contain XML documents from an MQ message queue, decomposes the documents, and stores the data in DB2 tables that are specified by an enabled XML collection. DXXMQINSERTALLCLOB is intended for XML documents with a length of up to 1MB.

DXXMQGEN
Constructs XML documents from data that is stored in DB2 tables that are specified in a document access definition (DAD) file, and sends the XML documents to an MQ message queue. DXXMQGEN is intended for XML documents with a length of up to 3KB.

DXXMQRETRIEVE
Constructs XML documents from data that is stored in DB2 tables that are specified in an enabled XML collection, and sends the XML documents to an MQ message queue. DXXMQRETRIEVE is intended for XML documents with a length of up to 3KB.

DXXMQGENCLOB
Constructs XML documents from data that is stored in DB2 tables that are specified in a document access definition (DAD) file, and sends the XML documents to an MQ message queue. DXXMQGENCLOB is intended for XML documents with a length of up to 32KB.

DXXMQRETRIEVECLOB
Constructs XML documents from data that is stored in DB2 tables that are specified in an enabled XML collection, and sends the XML documents to an MQ message queue. DXXMQRETRIEVECLOB is intended for XML documents with a length of up to 32KB.
The DSNACCOR stored procedure is a sample stored procedure that makes recommendations to help you maintain your DB2 databases. In particular, DSNACCOR performs these actions:
v Recommends when you should reorganize, image copy, or update statistics for table spaces or index spaces
v Indicates table spaces or index spaces whose data sets have exceeded a user-specified extents limit
v Indicates whether objects are in a restricted state
DSNACCOR uses data from the SYSIBM.TABLESPACESTATS and SYSIBM.INDEXSPACESTATS real-time statistics tables to make its recommendations. DSNACCOR provides its recommendations in a result set.
DSNACCOR uses the set of criteria that are shown in DSNACCOR formulas for recommending actions on page 1273 to evaluate table spaces and index spaces. By default, DSNACCOR evaluates all table spaces and index spaces in the subsystem that have entries in the real-time statistics tables. However, you can override this default through input parameters.
Important information about DSNACCOR recommendations:
v DSNACCOR makes recommendations based on general formulas that require input from the user about the maintenance policies for a subsystem. These recommendations might not be accurate for every installation.
v If the real-time statistics tables contain information for only a small percentage of your DB2 subsystem, the recommendations that DSNACCOR makes might not be accurate for the entire subsystem.
v Before you perform any action that DSNACCOR recommends, ensure that the object for which DSNACCOR makes the recommendation is available, and that the recommended action can be performed on that object. For example, before you can perform an image copy on an index, the index must have the COPY YES attribute.
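For example, before accepting a COPY recommendation for indexes, you can check the COPY attribute in the catalog. The following sketch lists the indexes in a hypothetical database MYDB that cannot be image copied:

   -- Indexes defined with COPY NO cannot be image copied.
   SELECT CREATOR, NAME
     FROM SYSIBM.SYSINDEXES
    WHERE DBNAME = 'MYDB'           -- hypothetical database name
      AND COPY = 'N';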
v Ownership of the package v PACKADM authority for the package collection v SYSADM authority The owner of the package or plan that contains the CALL statement must also have: v SELECT authority on the real-time statistics tables v The DISPLAY system privilege
The DSNACCOR syntax diagram reduces to the following CALL statement. The parameters appear in the order in which they are described below, and you can specify NULL for any input parameter to accept its default:

CALL DSNACCOR (QueryType, ObjectType, ICType, StatsSchema, CatlgSchema,
               LocalSchema, ChkLvl, Criteria, Restricted,
               CRUpdatedPagesPct, CRChangesPct, CRDaySncLastCopy,
               ICRUpdatedPagesPct, ICRChangesPct, CRIndexSize,
               RRTInsDelUpdPct, RRTUnclustInsPct, RRTDisorgLOBPct,
               RRTMassDelLimit, RRTIndRefLimit,
               RRIInsertDeletePct, RRIAppendInsertPct, RRIPseudoDeletePct,
               RRIMassDelLimit, RRILeafLimit, RRINumLevelsLimit,
               SRTInsDelUpdPct, SRTInsDelUpdAbs, SRTMassDelLimit,
               SRIInsDelPct, SRIInsDelAbs, SRIMassDelLimit,
               ExtentLimit, LastStatement, ReturnCode, ErrorMsg,
               IFCARetCode, IFCAResCode, ExcessBytes)
RUNSTATS
Makes a recommendation on whether to perform RUNSTATS.

REORG
Makes a recommendation on whether to perform REORG. Choosing this value causes DSNACCOR to process the EXTENTS value also.

EXTENTS
Indicates when data sets have exceeded a user-specified extents limit.

RESTRICT
Indicates which objects are in a restricted state.

QueryType is an input parameter of type VARCHAR(40). The default is ALL.

ObjectType
Specifies the types of objects for which DSNACCOR recommends actions:

ALL  Table spaces and index spaces.
TS   Table spaces only.
IX   Index spaces only.
ObjectType is an input parameter of type VARCHAR(3). The default is ALL.

ICType
Specifies the types of image copies for which DSNACCOR is to make recommendations:

F   Full image copy.
I   Incremental image copy. This value is valid for table spaces only.
B   Full image copy or incremental image copy.

ICType is an input parameter of type VARCHAR(1). The default is B.

StatsSchema
Specifies the qualifier for the real-time statistics table names. StatsSchema is an input parameter of type VARCHAR(128). The default is SYSIBM.

CatlgSchema
Specifies the qualifier for DB2 catalog table names. CatlgSchema is an input parameter of type VARCHAR(128). The default is SYSIBM.

LocalSchema
Specifies the qualifier for the names of tables that DSNACCOR creates. LocalSchema is an input parameter of type VARCHAR(128). The default is DSNACC.

ChkLvl
Specifies the types of checking that DSNACCOR performs, and indicates whether to include objects that fail those checks in the DSNACCOR recommendations result set. This value is the sum of any combination of the following values:

0   DSNACCOR performs none of the following actions.
1   For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted. If value 16 is not also chosen, exclude rows for the deleted objects from the recommendations result set. DSNACCOR excludes objects from the recommendations result set if those objects are not in the SYSTABLESPACE or SYSINDEXES catalog tables.
    When this setting is specified, DSNACCOR does not use EXTENTS>ExtentLimit to determine whether a LOB table space should be reorganized.
2   For index spaces that are listed in the recommendations result set, check the SYSTABLES, SYSTABLESPACE, and SYSINDEXES catalog tables to determine the name of the table space that is associated with each index space. Choosing this value causes DSNACCOR to also check for rows in the recommendations result set for objects that have been deleted but have entries in the real-time statistics tables (value 1). This means that if value 16 is not also chosen, rows for deleted objects are excluded from the recommendations result set.
4   Check whether rows that are in the DSNACCOR recommendations result set refer to objects that are in the exception table. For recommendations result set rows that have corresponding exception table rows, copy the contents of the QUERYTYPE column of the exception table to the INEXCEPTTABLE column of the recommendations result set.
8   Check whether objects that have rows in the recommendations result set are restricted. Indicate the restricted status in the OBJECTSTATUS column of the result set.
16  For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted (value 1). In result set rows for deleted objects, specify the word ORPHANED in the OBJECTSTATUS column.
32  Exclude rows from the DSNACCOR recommendations result set for index spaces for which the related table spaces have been recommended for REORG. Choosing this value causes DSNACCOR to perform the actions for values 1 and 2.
64  For index spaces that are listed in the DSNACCOR recommendations result set, check whether the related table spaces are listed in the exception table. For recommendations result set rows that have corresponding exception table rows, copy the contents of the QUERYTYPE column of the exception table to the INEXCEPTTABLE column of the recommendations result set.

ChkLvl is an input parameter of type INTEGER. The default is 7 (values 1+2+4).

Criteria
Narrows the set of objects for which DSNACCOR makes recommendations. This value is the search condition of an SQL WHERE clause. The search condition can use any column in the result set, and wildcards are allowed. Criteria is an input parameter of type VARCHAR(4096). The default is that DSNACCOR makes recommendations for all table spaces and index spaces in the subsystem.

Restricted
A parameter that is reserved for future use. Specify the null value for this parameter. Restricted is an input parameter of type VARCHAR(80).
CRUpdatedPagesPct
Specifies a criterion for recommending a full image copy on a table space or index space. If the following condition is true for a table space, DSNACCOR recommends an image copy: The total number of distinct updated pages, divided by the total number of preformatted pages (expressed as a percentage), is greater than CRUpdatedPagesPct. (See item 2 in Figure 185 on page 1274.) If both of the following conditions are true for an index space, DSNACCOR recommends an image copy:
v The total number of distinct updated pages, divided by the total number of preformatted pages (expressed as a percentage), is greater than CRUpdatedPagesPct.
v The number of active pages in the index space or partition is greater than CRIndexSize.
(See items 2 and 3 in Figure 186 on page 1274.) CRUpdatedPagesPct is an input parameter of type INTEGER. The default is 20.

CRChangesPct
Specifies a criterion for recommending a full image copy on a table space or index space. If the following condition is true for a table space, DSNACCOR recommends an image copy: The total number of insert, update, and delete operations since the last image copy, divided by the total number of rows or LOBs in a table space or partition (expressed as a percentage), is greater than CRChangesPct. (See item 3 in Figure 185 on page 1274.) If both of the following conditions are true for an index space, DSNACCOR recommends an image copy:
v The total number of insert and delete operations since the last image copy, divided by the total number of entries in the index space or partition (expressed as a percentage), is greater than CRChangesPct.
v The number of active pages in the index space or partition is greater than CRIndexSize.
(See items 2 and 4 in Figure 186 on page 1274.) CRChangesPct is an input parameter of type INTEGER. The default is 10.

CRDaySncLastCopy
Specifies a criterion for recommending a full image copy on a table space or index space. If the number of days since the last image copy is greater than this value, DSNACCOR recommends an image copy. (See item 1 in Figure 185 on page 1274 and item 1 in Figure 186 on page 1274.) CRDaySncLastCopy is an input parameter of type INTEGER. The default is 7.

ICRUpdatedPagesPct
Specifies a criterion for recommending an incremental image copy on a table space. If the following condition is true, DSNACCOR recommends an incremental image copy: The number of distinct pages that were updated since the last image copy, divided by the total number of active pages in the table space or partition (expressed as a percentage), is greater than ICRUpdatedPagesPct. (See item 1 in Figure 187 on page 1274.) ICRUpdatedPagesPct is an input parameter of type INTEGER. The default is 1.

ICRChangesPct
Specifies a criterion for recommending an incremental image copy on a table space. If the following condition is true, DSNACCOR recommends an incremental image copy:
The ratio of the number of insert, update, or delete operations since the last image copy, to the total number of rows or LOBs in a table space or partition (expressed as a percentage), is greater than ICRChangesPct. (See item 2 in Figure 187 on page 1274.) ICRChangesPct is an input parameter of type INTEGER. The default is 1.

CRIndexSize
Specifies, when combined with CRUpdatedPagesPct or CRChangesPct, a criterion for recommending a full image copy on an index space. (See items 2, 3, and 4 in Figure 186 on page 1274.) CRIndexSize is an input parameter of type INTEGER. The default is 50.

RRTInsDelUpdPct
Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The sum of insert, update, and delete operations since the last REORG, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage), is greater than RRTInsDelUpdPct. (See item 1 in Figure 188 on page 1275.) RRTInsDelUpdPct is an input parameter of type INTEGER. The default is 20.

RRTUnclustInsPct
Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The number of unclustered insert operations, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage), is greater than RRTUnclustInsPct. (See item 2 in Figure 188 on page 1275.) RRTUnclustInsPct is an input parameter of type INTEGER. The default is 10.

RRTDisorgLOBPct
Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The number of imperfectly chunked LOBs, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage), is greater than RRTDisorgLOBPct. (See item 3 in Figure 188 on page 1275.) RRTDisorgLOBPct is an input parameter of type INTEGER. The default is 10.

RRTMassDelLimit
Specifies a criterion for recommending that the REORG utility is to be run on a table space. If one of the following values is greater than RRTMassDelLimit, DSNACCOR recommends running REORG:
v The number of mass deletes from a segmented or LOB table space since the last REORG or LOAD REPLACE
v The number of dropped tables from a segmented table space since the last REORG or LOAD REPLACE
(See item 5 in Figure 188 on page 1275.) RRTMassDelLimit is an input parameter of type INTEGER. The default is 0.
RRTIndRefLimit
Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following value is greater than RRTIndRefLimit, DSNACCOR recommends running REORG: The total number of overflow records that were created since the last REORG or LOAD REPLACE, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage). (See item 4 in Figure 188 on page 1275.) RRTIndRefLimit is an input parameter of type INTEGER. The default is 10.

RRIInsertDeletePct
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIInsertDeletePct, DSNACCOR recommends running REORG: The sum of the number of index entries that were inserted and deleted since the last REORG, divided by the total number of index entries in the index space or partition (expressed as a percentage). (See item 1 in Figure 189 on page 1275.) RRIInsertDeletePct is an input parameter of type INTEGER. The default is 20.

RRIAppendInsertPct
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIAppendInsertPct, DSNACCOR recommends running REORG: The number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE with a key value greater than the maximum key value in the index space or partition, divided by the number of index entries in the index space or partition (expressed as a percentage). (See item 2 in Figure 189 on page 1275.) RRIAppendInsertPct is an input parameter of type INTEGER. The default is 10.

RRIPseudoDeletePct
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIPseudoDeletePct, DSNACCOR recommends running REORG: The number of index entries that were pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE, divided by the number of index entries in the index space or partition (expressed as a percentage). (See item 3 in Figure 189 on page 1275.) RRIPseudoDeletePct is an input parameter of type INTEGER. The default is 10.

RRIMassDelLimit
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD, or LOAD REPLACE is greater than this value, DSNACCOR recommends running REORG. (See item 4 in Figure 189 on page 1275.) RRIMassDelLimit is an input parameter of type INTEGER. The default is 0.

RRILeafLimit
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRILeafLimit, DSNACCOR recommends running REORG: The number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split
page was far from the location of the original page, divided by the total number of active pages in the index space or partition (expressed as a percentage). (See item 5 in Figure 189 on page 1275.) RRILeafLimit is an input parameter of type INTEGER. The default is 10.

RRINumLevelsLimit
Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRINumLevelsLimit, DSNACCOR recommends running REORG: The number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. (See item 6 in Figure 189 on page 1275.) RRINumLevelsLimit is an input parameter of type INTEGER. The default is 0.

SRTInsDelUpdPct
Specifies, when combined with SRTInsDelUpdAbs, a criterion for recommending that the RUNSTATS utility is to be run on a table space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of insert, update, or delete operations since the last RUNSTATS on a table space or partition, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage), is greater than SRTInsDelUpdPct.
v The sum of the number of inserted, updated, and deleted rows since the last RUNSTATS on the table space or partition is greater than SRTInsDelUpdAbs.
(See items 1 and 2 in Figure 190 on page 1275.) SRTInsDelUpdPct is an input parameter of type INTEGER. The default is 20.

SRTInsDelUpdAbs
Specifies, when combined with SRTInsDelUpdPct, a criterion for recommending that the RUNSTATS utility is to be run on a table space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of insert, update, and delete operations since the last RUNSTATS on a table space or partition, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage), is greater than SRTInsDelUpdPct.
v The sum of the number of inserted, updated, and deleted rows since the last RUNSTATS on the table space or partition is greater than SRTInsDelUpdAbs.
(See items 1 and 2 in Figure 190 on page 1275.) SRTInsDelUpdAbs is an input parameter of type INTEGER. The default is 0.

SRTMassDelLimit
Specifies a criterion for recommending that the RUNSTATS utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running RUNSTATS:
v The number of mass deletes from a table space or partition since the last REORG or LOAD REPLACE is greater than SRTMassDelLimit.
(See item 3 in Figure 190 on page 1275.) SRTMassDelLimit is an input parameter of type INTEGER. The default is 0.
SRIInsDelPct
Specifies, when combined with SRIInsDelAbs, a criterion for recommending that the RUNSTATS utility is to be run on an index space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of inserted and deleted index entries since the last RUNSTATS on an index space or partition, divided by the total number of index entries in the index space or partition (expressed as a percentage), is greater than SRIInsDelPct.
v The sum of the number of inserted and deleted index entries since the last RUNSTATS on an index space or partition is greater than SRIInsDelAbs.
(See items 1 and 2 in Figure 191 on page 1275.) SRIInsDelPct is an input parameter of type INTEGER. The default is 20.

SRIInsDelAbs
Specifies, when combined with SRIInsDelPct, a criterion for recommending that the RUNSTATS utility is to be run on an index space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
v The number of inserted and deleted index entries since the last RUNSTATS on an index space or partition, divided by the total number of index entries in the index space or partition (expressed as a percentage), is greater than SRIInsDelPct.
v The sum of the number of inserted and deleted index entries since the last RUNSTATS on an index space or partition is greater than SRIInsDelAbs.
(See items 1 and 2 in Figure 191 on page 1275.) SRIInsDelAbs is an input parameter of type INTEGER. The default is 0.

SRIMassDelLimit
Specifies a criterion for recommending that the RUNSTATS utility is to be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE is greater than this value, DSNACCOR recommends running RUNSTATS. (See item 3 in Figure 191 on page 1275.) SRIMassDelLimit is an input parameter of type INTEGER. The default is 0.
ExtentLimit
Specifies a criterion for recommending that the REORG utility is to be run on a table space or index space. Also specifies that DSNACCOR is to warn the user that the table space or index space has used too many extents. DSNACCOR recommends running REORG and altering data set allocations if the following condition is true:
v The number of physical extents in the index space, table space, or partition is greater than ExtentLimit.
(See Figure 192 on page 1276.) ExtentLimit is an input parameter of type INTEGER. The default is 50.

LastStatement
When DSNACCOR returns a severe error (return code 12), this field contains the SQL statement that was executing when the error occurred. LastStatement is an output parameter of type VARCHAR(8012).

ReturnCode
The return code from DSNACCOR execution. Possible values are:

0   DSNACCOR executed successfully. The ErrorMsg parameter contains the approximate percentage of the total number of objects in the subsystem that have information in the real-time statistics tables.
4   DSNACCOR completed, but one or more input parameters might be incompatible. The ErrorMsg parameter contains the input parameters that might be incompatible.
8   DSNACCOR terminated with errors. The ErrorMsg parameter contains a message that describes the error.
12  DSNACCOR terminated with severe errors. The ErrorMsg parameter contains a message that describes the error. The LastStatement parameter contains the SQL statement that was executing when the error occurred.
14  DSNACCOR terminated because it could not access one or more of the real-time statistics tables. The ErrorMsg parameter contains the names of the tables that DSNACCOR could not access.
15  DSNACCOR terminated because it encountered a problem with one of the declared temporary tables that it defines and uses.
16  DSNACCOR terminated because it could not define a declared temporary table. No table spaces were defined in the TEMP database.
NULL  DSNACCOR terminated but could not set a return code.

ReturnCode is an output parameter of type INTEGER.

ErrorMsg
Contains information about DSNACCOR execution. If DSNACCOR runs successfully (ReturnCode=0), this field contains the approximate percentage of objects in the subsystem that are in the real-time statistics tables. Otherwise, this field contains error messages. ErrorMsg is an output parameter of type VARCHAR(1331).

IFCARetCode
Contains the return code from an IFI COMMAND call. DSNACCOR issues commands through the IFI interface to determine the status of objects. IFCARetCode is an output parameter of type INTEGER.

IFCAResCode
Contains the reason code from an IFI COMMAND call. IFCAResCode is an output parameter of type INTEGER.

ExcessBytes
Contains the number of bytes of information that did not fit in the IFI return area after an IFI COMMAND call. ExcessBytes is an output parameter of type INTEGER.
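The following sketch shows one way to invoke DSNACCOR, assuming the parameter order that is shown in the CALL statement above. Null arguments take the documented defaults, and the question marks are parameter markers for the six output parameters:

   CALL DSNACCOR
     ('ALL', 'ALL', NULL,                 -- QueryType, ObjectType, ICType
      NULL, NULL, NULL,                   -- StatsSchema, CatlgSchema, LocalSchema
      NULL, NULL, NULL,                   -- ChkLvl, Criteria, Restricted
      NULL, NULL, NULL, NULL, NULL, NULL, -- image copy criteria
      NULL, NULL, NULL, NULL, NULL,       -- REORG criteria for table spaces
      NULL, NULL, NULL, NULL, NULL, NULL, -- REORG criteria for index spaces
      NULL, NULL, NULL,                   -- RUNSTATS criteria for table spaces
      NULL, NULL, NULL,                   -- RUNSTATS criteria for index spaces
      NULL,                               -- ExtentLimit
      ?, ?, ?, ?, ?, ?)                   -- LastStatement, ReturnCode, ErrorMsg,
                                          -- IFCARetCode, IFCAResCode, ExcessBytes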
((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL') AND
 ICType='F') AND
(COPYLASTTIME IS NULL OR
 REORGLASTTIME>COPYLASTTIME OR
 LOADRLASTTIME>COPYLASTTIME OR
 (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR      1
 (COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR  2
 (COPYCHANGES*100)/TOTALROWS>CRChangesPct)            3
Figure 185. DSNACCOR formula for recommending a full image copy on a table space
Figure 186 shows the formula that DSNACCOR uses to recommend a full image copy on an index space.
((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL') AND
 (ICType='F' OR ICType='B')) AND
(COPYLASTTIME IS NULL OR
 REORGLASTTIME>COPYLASTTIME OR
 LOADRLASTTIME>COPYLASTTIME OR
 REBUILDLASTTIME>COPYLASTTIME OR
 (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR        1
 (NACTIVE>CRIndexSize AND                               2
  ((COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR  3
   (COPYCHANGES*100)/TOTALENTRIES>CRChangesPct)))       4
Figure 186. DSNACCOR formula for recommending a full image copy on an index space
Figure 187 shows the formula that DSNACCOR uses to recommend an incremental image copy on a table space.
((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL') AND
 ICType='I' AND
 COPYLASTTIME IS NOT NULL) AND
(LOADRLASTTIME>COPYLASTTIME OR
 REORGLASTTIME>COPYLASTTIME OR
 (COPYUPDATEDPAGES*100)/NACTIVE>ICRUpdatedPagesPct OR  1
 (COPYCHANGES*100)/TOTALROWS>ICRChangesPct)            2
Figure 187. DSNACCOR formula for recommending an incremental image copy on a table space
Figure 188 on page 1275 shows the formula that DSNACCOR uses to recommend a REORG on a table space. If the table space is a LOB table space, and ChkLvl=1, the formula does not include EXTENTS>ExtentLimit.
((QueryType='REORG' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL')) AND
(REORGLASTTIME IS NULL OR
 ((REORGINSERTS+REORGDELETES+REORGUPDATES)*100)/TOTALROWS>RRTInsDelUpdPct OR  1
 (REORGUNCLUSTINS*100)/TOTALROWS>RRTUnclustInsPct OR                          2
 (REORGDISORGLOB*100)/TOTALROWS>RRTDisorgLOBPct OR                            3
 ((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS>RRTIndRefLimit OR           4
 REORGMASSDELETE>RRTMassDelLimit OR                                           5
 EXTENTS>ExtentLimit)                                                         6

Figure 188. DSNACCOR formula for recommending a REORG on a table space
Figure 189 shows the formula that DSNACCOR uses to recommend a REORG on an index space.
((QueryType='REORG' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL')) AND
(REORGLASTTIME IS NULL OR
 ((REORGINSERTS+REORGDELETES)*100)/TOTALENTRIES>RRIInsertDeletePct OR  1
 (REORGAPPENDINSERT*100)/TOTALENTRIES>RRIAppendInsertPct OR            2
 (REORGPSEUDODELETES*100)/TOTALENTRIES>RRIPseudoDeletePct OR           3
 REORGMASSDELETE>RRIMassDeleteLimit OR                                 4
 (REORGLEAFFAR*100)/NACTIVE>RRILeafLimit OR                            5
 REORGNUMLEVELS>RRINumLevelsLimit OR                                   6
 EXTENTS>ExtentLimit)                                                  7

Figure 189. DSNACCOR formula for recommending a REORG on an index space
Figure 190 shows the formula that DSNACCOR uses to recommend RUNSTATS on a table space.
((QueryType='RUNSTATS' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL')) AND
(STATSLASTTIME IS NULL OR
 (((STATSINSERTS+STATSDELETES+STATSUPDATES)*100)/TOTALROWS>SRTInsDelUpdPct AND  1
  (STATSINSERTS+STATSDELETES+STATSUPDATES)>SRTInsDelUpdAbs) OR                  2
 STATSMASSDELETE>SRTMassDeleteLimit)                                            3

Figure 190. DSNACCOR formula for recommending RUNSTATS on a table space
Figure 191 shows the formula that DSNACCOR uses to recommend RUNSTATS on an index space.
((QueryType='RUNSTATS' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL')) AND
(STATSLASTTIME IS NULL OR
 (((STATSINSERTS+STATSDELETES)*100)/TOTALENTRIES>SRIInsDelUpdPct AND  1
  (STATSINSERTS+STATSDELETES)>SRIInsDelUpdAbs) OR                     2
 STATSMASSDELETE>SRIMassDeleteLimit)                                  3

Figure 191. DSNACCOR formula for recommending RUNSTATS on an index space
Figure 192 on page 1276 shows the formula that DSNACCOR uses to warn that too many index space or table space extents have been used.
EXTENTS>ExtentLimit
Figure 192. DSNACCOR formula for warning that too many data set extents for a table space or index space are used
The meanings of the columns are:
DBNAME
The database name for an object in the exception table.
NAME
The table space name or index space name for an object in the exception table.
QUERYTYPE
The information that you want to place in the INEXCEPTTABLE column of the recommendations result set. If you put a null value in this column, DSNACCOR puts the value YES in the INEXCEPTTABLE column of the recommendations result set row for the object that matches the DBNAME and NAME values.
Recommendation: If you plan to put many rows in the exception table, create a nonunique index on DBNAME, NAME, and QUERYTYPE.
After you create the exception table, insert a row for each object for which you want to include information in the INEXCEPTTABLE column.
Example: Suppose that you want the INEXCEPTTABLE column to contain the string 'IRRELEVANT' for table space STAFF in database DSNDB04. You also want the INEXCEPTTABLE column to contain 'CURRENT' for table space DSN8S81D in database DSN8D81A. Execute these INSERT statements:
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSNDB04 ', 'STAFF ', 'IRRELEVANT'); INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSN8D81A', 'DSN8S81D', 'CURRENT');
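If you choose to create the recommended nonunique index, a statement like the following one can be used; the index name XEXCEPT_TBL is a hypothetical choice, not a name that DSNACCOR requires:

CREATE INDEX DSNACC.XEXCEPT_TBL
  ON DSNACC.EXCEPT_TBL
  (DBNAME, NAME, QUERYTYPE);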
To use the contents of INEXCEPTTABLE for filtering, include a condition that involves the INEXCEPTTABLE column in the search condition that you specify in your Criteria input parameter.
Example: Suppose that you want to include all rows for database DSNDB04 in the recommendations result set, except for those rows that contain the string IRRELEVANT in the INEXCEPTTABLE column. You might include the following search condition in your Criteria input parameter:
DBNAME='DSNDB04' AND INEXCEPTTABLE<>'IRRELEVANT'
WORKING-STORAGE SECTION.
...
***********************
* DSNACCOR PARAMETERS *
***********************
01 QUERYTYPE.
   49 QUERYTYPE-LN        PICTURE S9(4) COMP VALUE 40.
   49 QUERYTYPE-DTA       PICTURE X(40) VALUE 'ALL'.
01 OBJECTTYPE.
   49 OBJECTTYPE-LN       PICTURE S9(4) COMP VALUE 3.
   49 OBJECTTYPE-DTA      PICTURE X(3) VALUE 'ALL'.
01 ICTYPE.
   49 ICTYPE-LN           PICTURE S9(4) COMP VALUE 1.
   49 ICTYPE-DTA          PICTURE X(1) VALUE 'B'.
01 STATSSCHEMA.
   49 STATSSCHEMA-LN      PICTURE S9(4) COMP VALUE 128.
   49 STATSSCHEMA-DTA     PICTURE X(128) VALUE 'SYSIBM'.
01 CATLGSCHEMA.
   49 CATLGSCHEMA-LN      PICTURE S9(4) COMP VALUE 128.
   49 CATLGSCHEMA-DTA     PICTURE X(128) VALUE 'SYSIBM'.
01 LOCALSCHEMA.
   49 LOCALSCHEMA-LN      PICTURE S9(4) COMP VALUE 128.
   49 LOCALSCHEMA-DTA     PICTURE X(128) VALUE 'DSNACC'.
01 CHKLVL                 PICTURE S9(9) COMP VALUE +3.
01 CRITERIA.
   49 CRITERIA-LN         PICTURE S9(4) COMP VALUE 4096.
   49 CRITERIA-DTA        PICTURE X(4096) VALUE SPACES.
01 RESTRICTED.
   49 RESTRICTED-LN       PICTURE S9(4) COMP VALUE 80.
   49 RESTRICTED-DTA      PICTURE X(80) VALUE SPACES.
01 CRUPDATEDPAGESPCT      PICTURE S9(9) COMP VALUE +0.
01 CRCHANGESPCT           PICTURE S9(9) COMP VALUE +0.
01 CRDAYSNCLASTCOPY       PICTURE S9(9) COMP VALUE +0.
01 ICRUPDATEDPAGESPCT     PICTURE S9(9) COMP VALUE +0.
01 ICRCHANGESPCT          PICTURE S9(9) COMP VALUE +0.
01 CRINDEXSIZE            PICTURE S9(9) COMP VALUE +0.
01 RRTINSDELUPDPCT        PICTURE S9(9) COMP VALUE +0.
01 RRTUNCLUSTINSPCT       PICTURE S9(9) COMP VALUE +0.
01 RRTDISORGLOBPCT        PICTURE S9(9) COMP VALUE +0.
01 RRTMASSDELLIMIT        PICTURE S9(9) COMP VALUE +0.
01 RRTINDREFLIMIT         PICTURE S9(9) COMP VALUE +0.
01 RRIINSERTDELETEPCT     PICTURE S9(9) COMP VALUE +0.
01 RRIAPPENDINSERTPCT     PICTURE S9(9) COMP VALUE +0.
01 RRIPSEUDODELETEPCT     PICTURE S9(9) COMP VALUE +0.
01 RRIMASSDELLIMIT        PICTURE S9(9) COMP VALUE +0.
01 RRILEAFLIMIT           PICTURE S9(9) COMP VALUE +0.
01 RRINUMLEVELSLIMIT      PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDPCT        PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDABS        PICTURE S9(9) COMP VALUE +0.
01 SRTMASSDELLIMIT        PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDPCT        PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDABS        PICTURE S9(9) COMP VALUE +0.
01 SRIMASSDELLIMIT        PICTURE S9(9) COMP VALUE +0.
01 EXTENTLIMIT            PICTURE S9(9) COMP VALUE +0.
01 LASTSTATEMENT.
   49 LASTSTATEMENT-LN    PICTURE S9(4) COMP VALUE 8012.
   49 LASTSTATEMENT-DTA   PICTURE X(8012) VALUE SPACES.
01 RETURNCODE             PICTURE S9(9) COMP VALUE +0.
01 ERRORMSG.
   49 ERRORMSG-LN         PICTURE S9(4) COMP VALUE 1331.
   49 ERRORMSG-DTA        PICTURE X(1331) VALUE SPACES.
01 IFCARETCODE            PICTURE S9(9) COMP VALUE +0.
01 IFCARESCODE            PICTURE S9(9) COMP VALUE +0.
01 EXCESSBYTES            PICTURE S9(9) COMP VALUE +0.
*****************************************
* INDICATOR VARIABLES.                  *
* INITIALIZE ALL NON-ESSENTIAL INPUT    *
* VARIABLES TO -1, TO INDICATE THAT THE *
* INPUT VALUE IS NULL.                  *
*****************************************
01 QUERYTYPE-IND          PICTURE S9(4) COMP-4 VALUE +0.
01 OBJECTTYPE-IND         PICTURE S9(4) COMP-4 VALUE +0.
01 ICTYPE-IND             PICTURE S9(4) COMP-4 VALUE +0.
01 STATSSCHEMA-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 CATLGSCHEMA-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 LOCALSCHEMA-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 CHKLVL-IND             PICTURE S9(4) COMP-4 VALUE -1.
01 CRITERIA-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 RESTRICTED-IND         PICTURE S9(4) COMP-4 VALUE -1.
01 CRUPDATEDPAGESPCT-IND  PICTURE S9(4) COMP-4 VALUE -1.
01 CRCHANGESPCT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 CRDAYSNCLASTCOPY-IND   PICTURE S9(4) COMP-4 VALUE -1.
01 ICRUPDATEDPAGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 ICRCHANGESPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01 CRINDEXSIZE-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINSDELUPDPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRTUNCLUSTINSPCT-IND   PICTURE S9(4) COMP-4 VALUE -1.
01 RRTDISORGLOBPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRTMASSDELLIMIT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINDREFLIMIT-IND     PICTURE S9(4) COMP-4 VALUE -1.
01 RRIINSERTDELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIAPPENDINSERTPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIPSEUDODELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIMASSDELLIMIT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRILEAFLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 RRINUMLEVELSLIMIT-IND  PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDABS-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 SRTMASSDELLIMIT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDABS-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 SRIMASSDELLIMIT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 EXTENTLIMIT-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 LASTSTATEMENT-IND      PICTURE S9(4) COMP-4 VALUE +0.
01 RETURNCODE-IND         PICTURE S9(4) COMP-4 VALUE +0.
01 ERRORMSG-IND           PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARETCODE-IND        PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARESCODE-IND        PICTURE S9(4) COMP-4 VALUE +0.
01 EXCESSBYTES-IND        PICTURE S9(4) COMP-4 VALUE +0.
PROCEDURE DIVISION.
...
*********************************************************
* SET VALUES FOR DSNACCOR INPUT PARAMETERS:             *
* - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK *
*   FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT       *
*   TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE      *
*   RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19)       *
* - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO     *
*   MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES  *
*   DSN8D81A AND DSN8D81L.                              *
* - FOR THE FOLLOWING PARAMETERS, SET THESE VALUES,     *
*   WHICH ARE LOWER THAN THE DEFAULTS:                  *
*   CRUPDATEDPAGESPCT    4                              *
*   CRCHANGESPCT         2                              *
*   RRTINSDELUPDPCT      2                              *
*   RRTUNCLUSTINSPCT     5                              *
*   RRTDISORGLOBPCT      5                              *
*   RRIAPPENDINSERTPCT   5                              *
*   SRTINSDELUPDPCT      5                              *
*   SRIINSDELUPDPCT      5                              *
*   EXTENTLIMIT          3                              *
*********************************************************
MOVE 19 TO CHKLVL.
MOVE SPACES TO CRITERIA-DTA.
MOVE 'DBNAME = ''DSN8D81A'' OR DBNAME = ''DSN8D81L'''
  TO CRITERIA-DTA.
MOVE 46 TO CRITERIA-LN.
MOVE 4 TO CRUPDATEDPAGESPCT.
MOVE 2 TO CRCHANGESPCT.
MOVE 2 TO RRTINSDELUPDPCT.
MOVE 5 TO RRTUNCLUSTINSPCT.
MOVE 5 TO RRTDISORGLOBPCT.
MOVE 5 TO RRIAPPENDINSERTPCT.
MOVE 5 TO SRTINSDELUPDPCT.
MOVE 5 TO SRIINSDELUPDPCT.
MOVE 3 TO EXTENTLIMIT.
********************************
* INITIALIZE OUTPUT PARAMETERS *
********************************
MOVE SPACES TO LASTSTATEMENT-DTA.
MOVE 1 TO LASTSTATEMENT-LN.
MOVE 0 TO RETURNCODE.
MOVE SPACES TO ERRORMSG-DTA.
MOVE 1 TO ERRORMSG-LN.
MOVE 0 TO IFCARETCODE.
MOVE 0 TO IFCARESCODE.
MOVE 0 TO EXCESSBYTES.
*******************************************************
* SET THE INDICATOR VARIABLES TO 0 FOR NON-NULL INPUT *
* PARAMETERS (PARAMETERS FOR WHICH YOU DO NOT WANT    *
* DSNACCOR TO USE DEFAULT VALUES) AND FOR OUTPUT      *
* PARAMETERS.                                         *
*******************************************************
MOVE 0 TO CHKLVL-IND.
MOVE 0 TO CRITERIA-IND.
MOVE 0 TO CRUPDATEDPAGESPCT-IND.
MOVE 0 TO CRCHANGESPCT-IND.
MOVE 0 TO RRTINSDELUPDPCT-IND.
MOVE 0 TO RRTUNCLUSTINSPCT-IND.
MOVE 0 TO RRTDISORGLOBPCT-IND.
MOVE 0 TO RRIAPPENDINSERTPCT-IND.
MOVE 0 TO SRTINSDELUPDPCT-IND.
MOVE 0 TO SRIINSDELUPDPCT-IND.
MOVE 0 TO EXTENTLIMIT-IND.
MOVE 0 TO LASTSTATEMENT-IND.
MOVE 0 TO RETURNCODE-IND.
MOVE 0 TO ERRORMSG-IND.
MOVE 0 TO IFCARETCODE-IND.
MOVE 0 TO IFCARESCODE-IND.
MOVE 0 TO EXCESSBYTES-IND.
...
*****************
* CALL DSNACCOR *
*****************
EXEC SQL
 CALL SYSPROC.DSNACCOR
 (:QUERYTYPE           :QUERYTYPE-IND,
  :OBJECTTYPE          :OBJECTTYPE-IND,
  :ICTYPE              :ICTYPE-IND,
  :STATSSCHEMA         :STATSSCHEMA-IND,
  :CATLGSCHEMA         :CATLGSCHEMA-IND,
  :LOCALSCHEMA         :LOCALSCHEMA-IND,
  :CHKLVL              :CHKLVL-IND,
  :CRITERIA            :CRITERIA-IND,
  :RESTRICTED          :RESTRICTED-IND,
  :CRUPDATEDPAGESPCT   :CRUPDATEDPAGESPCT-IND,
  :CRCHANGESPCT        :CRCHANGESPCT-IND,
  :CRDAYSNCLASTCOPY    :CRDAYSNCLASTCOPY-IND,
  :ICRUPDATEDPAGESPCT  :ICRUPDATEDPAGESPCT-IND,
  :ICRCHANGESPCT       :ICRCHANGESPCT-IND,
  :CRINDEXSIZE         :CRINDEXSIZE-IND,
  :RRTINSDELUPDPCT     :RRTINSDELUPDPCT-IND,
  :RRTUNCLUSTINSPCT    :RRTUNCLUSTINSPCT-IND,
  :RRTDISORGLOBPCT     :RRTDISORGLOBPCT-IND,
  :RRTMASSDELLIMIT     :RRTMASSDELLIMIT-IND,
  :RRTINDREFLIMIT      :RRTINDREFLIMIT-IND,
  :RRIINSERTDELETEPCT  :RRIINSERTDELETEPCT-IND,
  :RRIAPPENDINSERTPCT  :RRIAPPENDINSERTPCT-IND,
  :RRIPSEUDODELETEPCT  :RRIPSEUDODELETEPCT-IND,
  :RRIMASSDELLIMIT     :RRIMASSDELLIMIT-IND,
  :RRILEAFLIMIT        :RRILEAFLIMIT-IND,
  :RRINUMLEVELSLIMIT   :RRINUMLEVELSLIMIT-IND,
  :SRTINSDELUPDPCT     :SRTINSDELUPDPCT-IND,
  :SRTINSDELUPDABS     :SRTINSDELUPDABS-IND,
  :SRTMASSDELLIMIT     :SRTMASSDELLIMIT-IND,
  :SRIINSDELUPDPCT     :SRIINSDELUPDPCT-IND,
  :SRIINSDELUPDABS     :SRIINSDELUPDABS-IND,
  :SRIMASSDELLIMIT     :SRIMASSDELLIMIT-IND,
  :EXTENTLIMIT         :EXTENTLIMIT-IND,
  :LASTSTATEMENT       :LASTSTATEMENT-IND,
  :RETURNCODE          :RETURNCODE-IND,
  :ERRORMSG            :ERRORMSG-IND,
  :IFCARETCODE         :IFCARETCODE-IND,
  :IFCARESCODE         :IFCARESCODE-IND,
  :EXCESSBYTES         :EXCESSBYTES-IND)
END-EXEC.
*************************************************************
* ASSUME THAT THE SQL CALL RETURNED +466, WHICH MEANS THAT  *
* RESULT SETS WERE RETURNED. RETRIEVE RESULT SETS.          *
*************************************************************
* LINK EACH RESULT SET TO A LOCATOR VARIABLE
EXEC SQL ASSOCIATE LOCATORS (:LOC1, :LOC2)
 WITH PROCEDURE SYSPROC.DSNACCOR
END-EXEC.
* LINK A CURSOR TO EACH RESULT SET
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC1
END-EXEC.
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :LOC2
END-EXEC.
* PERFORM FETCHES USING C1 TO RETRIEVE ALL ROWS FROM FIRST RESULT SET
* PERFORM FETCHES USING C2 TO RETRIEVE ALL ROWS FROM SECOND RESULT SET
DSNACCOR output
If DSNACCOR executes successfully, in addition to the output parameters described in DSNACCOR option descriptions on page 1265, DSNACCOR returns two result sets. The first result set contains the results from IFI COMMAND calls that DSNACCOR makes. Table 337 on page 1282 shows the format of the first result set.
Table 337. Result set row for first DSNACCOR result set

Column name   Data type   Contents
RS_SEQUENCE   INTEGER     Sequence number of the output line
RS_DATA       CHAR(80)    A line of command output
The second result set contains DSNACCOR's recommendations. This result set contains one or more rows for a table space or index space. A nonpartitioned table space or nonpartitioning index space can have at most one row in the result set. A partitioned table space or partitioning index space can have at most one row for each partition. A table space, index space, or partition has a row in the result set if both of the following conditions are true: v If the Criteria input parameter contains a search condition, the search condition is true for the table space, index space, or partition. v DSNACCOR recommends at least one action for the table space, index space, or partition. Table 338 shows the columns of a result set row.
Table 338. Result set row for second DSNACCOR result set

DBNAME (CHAR(8))
  Name of the database that contains the object.
NAME (CHAR(8))
  Table space or index space name.
PARTITION (INTEGER)
  Data set number or partition number.
OBJECTTYPE (CHAR(2))
  DB2 object type:
  v TS for a table space
  v IX for an index space
OBJECTSTATUS (CHAR(36))
  Status of the object:
  v ORPHANED, if the object is an index space with no corresponding table space, or if the object does not exist
  v If the object is in a restricted state, one of the following values:
    - TS=restricted-state, if OBJECTTYPE is TS
    - IX=restricted-state, if OBJECTTYPE is IX
    restricted-state is one of the status codes that appear in DISPLAY DATABASE output. See Chapter 2 of DB2 Command Reference for details.
  v A, if the object is in an advisory state.
  v L, if the object is a logical partition, but not in an advisory state.
  v AL, if the object is a logical partition and in an advisory state.
IMAGECOPY (CHAR(3))
  COPY recommendation:
  v If OBJECTTYPE is TS: FUL (full image copy), INC (incremental image copy), or NO
  v If OBJECTTYPE is IX: YES or NO
RUNSTATS (CHAR(3))
  RUNSTATS recommendation: YES or NO.
EXTENTS (CHAR(3))
  Indicates whether the data sets for the object have exceeded ExtentLimit: YES or NO.
REORG (CHAR(3))
  REORG recommendation: YES or NO.
Table 338. Result set row for second DSNACCOR result set (continued)

INEXCEPTTABLE (CHAR(40))
  A string that contains one of the following values:
  v Text that you specify in the QUERYTYPE column of the exception table.
  v YES, if you put a row in the exception table for the object that this result set row represents, but you specify NULL in the QUERYTYPE column.
  v NO, if the exception table exists but does not have a row for the object that this result set row represents.
  v Null, if the exception table does not exist, or if the ChkLvl input parameter does not include the value 4.
ASSOCIATEDTS (CHAR(8))
  If OBJECTTYPE is IX and the ChkLvl input parameter includes the value 2, this value is the name of the table space that is associated with the index space. Otherwise null.
COPYLASTTIME (TIMESTAMP)
  Timestamp of the last full image copy on the object. Null if COPY was never run, or if the last COPY execution was terminated.
LOADRLASTTIME (TIMESTAMP)
  Timestamp of the last LOAD REPLACE on the object. Null if LOAD REPLACE was never run, or if the last LOAD REPLACE execution was terminated.
REBUILDLASTTIME (TIMESTAMP)
  Timestamp of the last REBUILD INDEX on the object. Null if REBUILD INDEX was never run, or if the last REBUILD INDEX execution was terminated.
CRUPDPGSPCT (INTEGER)
  If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the ratio of distinct updated pages to preformatted pages, expressed as a percentage. Otherwise null.
CRCPYCHGPCT (INTEGER)
  If OBJECTTYPE is TS and IMAGECOPY is YES, the ratio of the total number of insert, update, and delete operations since the last image copy to the total number of rows or LOBs in the table space or partition, expressed as a percentage. If OBJECTTYPE is IX and IMAGECOPY is YES, the ratio of the total number of insert and delete operations since the last image copy to the total number of entries in the index space or partition, expressed as a percentage. Otherwise null.
CRDAYSNCLASTCOPY (INTEGER)
  If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the number of days since the last image copy. Otherwise null.
CRINDEXSIZE (INTEGER)
  If OBJECTTYPE is IX and IMAGECOPY is YES, the number of active pages in the index space or partition. Otherwise null.
REORGLASTTIME (TIMESTAMP)
  Timestamp of the last REORG on the object. Null if REORG was never run, or if the last REORG execution was terminated.
RRTINSDELUPDPCT (INTEGER)
  If OBJECTTYPE is TS and REORG is YES, the ratio of the sum of insert, update, and delete operations since the last REORG to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRTUNCINSPCT (INTEGER)
  If OBJECTTYPE is TS and REORG is YES, the ratio of the number of unclustered insert operations to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRTDISORGLOBPCT (INTEGER)
  If OBJECTTYPE is TS and REORG is YES, the ratio of the number of imperfectly chunked LOBs to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
Table 338. Result set row for second DSNACCOR result set (continued)

RRTMASSDELETE (INTEGER)
  If OBJECTTYPE is TS, REORG is YES, and the table space is a segmented table space or LOB table space, the number of mass deletes since the last REORG or LOAD REPLACE. If OBJECTTYPE is TS, REORG is YES, and the table space is nonsegmented, the number of dropped tables since the last REORG or LOAD REPLACE. Otherwise null.
RRTINDREF (INTEGER)
  If OBJECTTYPE is TS and REORG is YES, the ratio of the total number of overflow records that were created since the last REORG or LOAD REPLACE to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRIINSDELPCT (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the ratio of the total number of insert and delete operations since the last REORG to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIAPPINSPCT (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE that had a key value greater than the maximum key value in the index space or partition, to the number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIPSDDELPCT (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were pseudo-deleted (the RID entry was marked as deleted) since the last REORG, REBUILD INDEX, or LOAD REPLACE to the number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIMASSDELETE (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD, or LOAD REPLACE. Otherwise null.
RRILEAF (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split page was far from the location of the original page, to the total number of active pages in the index space or partition, expressed as a percentage. Otherwise null.
RRINUMLEVELS (INTEGER)
  If OBJECTTYPE is IX and REORG is YES, the number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise null.
STATSLASTTIME (TIMESTAMP)
  Timestamp of the last RUNSTATS on the object. Null if RUNSTATS was never run, or if the last RUNSTATS execution was terminated.
SRTINSDELUPDPCT (INTEGER)
  If OBJECTTYPE is TS and RUNSTATS is YES, the ratio of the total number of insert, update, and delete operations since the last RUNSTATS on a table space or partition, to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
SRTINSDELUPDABS (INTEGER)
  If OBJECTTYPE is TS and RUNSTATS is YES, the total number of insert, update, and delete operations since the last RUNSTATS on a table space or partition. Otherwise null.
Table 338. Result set row for second DSNACCOR result set (continued)

SRTMASSDELETE (INTEGER)
  If OBJECTTYPE is TS and RUNSTATS is YES, the number of mass deletes from the table space or partition since the last REORG or LOAD REPLACE. Otherwise null.
SRIINSDELPCT (INTEGER)
  If OBJECTTYPE is IX and RUNSTATS is YES, the ratio of the total number of insert and delete operations since the last RUNSTATS on the index space or partition, to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
SRIINSDELABS (INTEGER)
  If OBJECTTYPE is IX and RUNSTATS is YES, the number of insert and delete operations since the last RUNSTATS on the index space or partition. Otherwise null.
SRIMASSDELETE (INTEGER)
  If OBJECTTYPE is IX and RUNSTATS is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise, this value is null.
TOTALEXTENTS (SMALLINT)
  If EXTENTS is YES, the number of physical extents in the table space, index space, or partition. Otherwise, this value is null.
See Chapter 11, Controlling access to a DB2 subsystem, on page 231 for information about authorizing access to SAF resource profiles. See z/OS MVS Planning: Operations for more information about permitting access to the extended MCS console.
CALL SYSPROC.WLM_REFRESH ( wlm-environment, ssid, status-message, return-code )
990
  DSNTWR received an unexpected SQLCODE while determining the current SQLID.
993
  One of the following conditions exists:
  v The WLM-environment parameter value is null, blank, or contains invalid characters.
  v The ssid value contains invalid characters.
994
  The extended MCS console was not activated within the number of seconds indicated by message DSNT546I.
995
  DSNTWR is not running as an authorized program.
996
  DSNTWR could not activate an extended MCS console. See message DSNT533I for more information.
997
  DSNTWR made an unsuccessful request for a message from its extended MCS console. See message DSNT533I for more information.
998
  The extended MCS console for DSNTWR posted an alert. See message DSNT534I for more information.
999
  The operating system denied an authorized WLM_REFRESH request. See message DSNT545I for more information.
For a complete example of setting up access to an SAF profile and calling WLM_REFRESH, see job DSNTEJ6W, which is in data set DSN810.SDSNSAMP.
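Example: The following embedded SQL fragment is a minimal sketch of a WLM_REFRESH call. The environment name WLMENV1 and subsystem ID DB2A are hypothetical values, and :STATUSMSG and :RC are host variables for the status-message and return-code output parameters:

EXEC SQL
 CALL SYSPROC.WLM_REFRESH('WLMENV1', 'DB2A', :STATUSMSG, :RC);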
CALL DSNACICS ( parm-level, pgm-name, CICS-applid, CICS-level, connect-type, netname, mirror-trans, COMMAREA, COMMAREA-total-len, sync-opts, return-code, msg-area )
pgm-name
  Specifies the name of the CICS program that DSNACICS invokes. This is the name of the program that the CICS mirror transaction calls, not the CICS transaction name. This is an input parameter of type CHAR(8).
CICS-applid
  Specifies the applid of the CICS system to which DSNACICS connects. This is an input parameter of type CHAR(8).
CICS-level
  Specifies the level of the target CICS subsystem:
  1 The CICS subsystem is CICS for MVS/ESA Version 4 Release 1, CICS Transaction Server for OS/390 Version 1 Release 1, or CICS Transaction Server for OS/390 Version 1 Release 2.
  2 The CICS subsystem is CICS Transaction Server for OS/390 Version 1 Release 3 or later.
  This is an input parameter of type INTEGER.
connect-type
  Specifies whether the CICS connection is generic or specific. Possible values are GENERIC or SPECIFIC. This is an input parameter of type CHAR(8).
netname
  If the value of connect-type is SPECIFIC, specifies the name of the specific connection that is to be used. This value is ignored if the value of connect-type is GENERIC. This is an input parameter of type CHAR(8).
mirror-trans
  Specifies the name of the CICS mirror transaction to invoke. This mirror transaction calls the CICS server program that is specified in the pgm-name parameter. mirror-trans must be defined to the CICS server region, and the CICS resource definition for mirror-trans must specify DFHMIRS as the program that is associated with the transaction. If this parameter contains blanks, DSNACICS passes a mirror transaction parameter value of null to the CICS EXCI interface. This allows an installation to override the transaction name in various CICS user-replaceable modules. If a CICS user exit routine does not specify a value for the mirror transaction name, CICS invokes CICS-supplied default mirror transaction CSMI. This is an input parameter of type CHAR(4).
COMMAREA
  Specifies the communication area (COMMAREA) that is used to pass data between the DSNACICS caller and the CICS server program that DSNACICS calls. This is an input/output parameter of type VARCHAR(32704). In the length field of this parameter, specify the number of bytes that DSNACICS sends to the CICS server program.
commarea-total-len
  Specifies the total length of the COMMAREA that the server program needs. This is an input parameter of type INTEGER. This length must be greater than or equal to the value that you specify in the length field of the COMMAREA parameter and less than or equal to 32704. When the CICS server program completes, DSNACICS passes the server program's entire COMMAREA, which is commarea-total-len bytes in length, to the stored procedure caller.
sync-opts
  Specifies whether the calling program controls resource recovery, using two-phase commit protocols that are supported by RRS. Possible values are:
  1 The client program controls commit processing. The CICS server region does not perform a syncpoint when the server program returns control to CICS. Also, the server program cannot take any explicit syncpoints. Doing so causes the server program to abnormally terminate.
  2 The target CICS server region takes a syncpoint on successful completion of the server program. If this value is specified, the server program can take explicit syncpoints.
  When CICS has been set up to be an RRS resource manager, the client application can control commit processing using SQL COMMIT requests. DB2 UDB for z/OS ensures that CICS is notified to commit any resources that the CICS server program modifies during two-phase commit processing.
  When CICS has not been set up to be an RRS resource manager, CICS forces syncpoint processing of all CICS resources at completion of the CICS server program. This commit processing is not coordinated with the commit processing of the client program.
  This option is ignored when CICS-level is 1. This is an input parameter of type INTEGER.
return-code
  Return code from the stored procedure. Possible values are:
  0 The call completed successfully.
  12 The request to run the CICS server program failed. The msg-area parameter contains messages that describe the error.
  This is an output parameter of type INTEGER.
msg-area
  Contains messages if an error occurs during stored procedure execution. The first messages in this area are generated by the stored procedure. Messages that are generated by CICS or the DSNACICX user exit routine might follow the first messages. The messages appear as a series of concatenated, viewable text strings. This is an output parameter of type VARCHAR(500).
v The load module for DSNACICX must reside in an authorized program library that is in the STEPLIB concatenation of the stored procedure address space startup procedure. You can replace the default DSNACICX in the prefix.SDSNLOAD library, or you can put the DSNACICX load module in a library that is ahead of prefix.SDSNLOAD in the STEPLIB concatenation. It is recommended that you put DSNACICX in the prefix.SDSNEXIT library. Sample installation job DSNTIJEX contains JCL for assembling and link-editing the sample source code for DSNACICX into prefix.SDSNEXIT. You need to modify the JCL for the libraries and the compiler that you are using.
v The load module must be named DSNACICX.
v The exit routine must save and restore the caller's registers. Only the contents of register 15 can be modified.
v It must be written to be reentrant and link-edited as reentrant.
v It must be written and link-edited to execute as AMODE(31),RMODE(ANY).
v DSNACICX can contain SQL statements. However, if it does, you need to change the DSNACICS procedure definition to reflect the appropriate SQL access level for the types of SQL statements that you use in the user exit routine.
Table 340 on page 1292 shows the contents of the DSNACICX exit parameter list, XPL. Member DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping macro for XPL. Sample exit routine DSNASCIO in data set prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
Table 340. Contents of the XPL exit parameter list

Name             Hex offset  Data type             Description (corresponding DSNACICS parameter)
XPL_EYEC         0           Character, 4 bytes    Eye-catcher: 'XPL '
XPL_LEN          4           Character, 4 bytes    Length of the exit parameter list
XPL_LEVEL        8           4-byte integer        Level of the parameter list (parm-level)
XPL_PGMNAME      C           Character, 8 bytes    Name of the CICS server program (pgm-name)
XPL_CICSAPPLID   14          Character, 8 bytes    CICS VTAM applid (CICS-applid)
XPL_CICSLEVEL    1C          4-byte integer        Level of CICS code (CICS-level)
XPL_CONNECTTYPE  20          Character, 8 bytes    Specific or generic connection to CICS (connect-type)
XPL_NETNAME      28          Character, 8 bytes    Name of the specific connection netname to CICS (netname)
XPL_MIRRORTRAN   30          Character, 8 bytes    Name of the mirror transaction that invokes the CICS server program (mirror-trans)
XPL_COMMAREAPTR  38          Address, 4 bytes      Address of the COMMAREA (see note 1)
XPL_COMMINLEN    3C          4-byte integer        Length of the COMMAREA that is passed to the server program (see note 2)
XPL_COMMTOTLEN   40          4-byte integer        Total length of the COMMAREA that is returned to the caller (commarea-total-len)
XPL_SYNCOPTS     44          4-byte integer        Syncpoint control option (sync-opts)
XPL_RETCODE      48          4-byte integer        Return code from the exit routine (return-code)
XPL_MSGLEN       4C          4-byte integer        Length of the output message area (msg-area)
XPL_MSGAREA      50          Character, 256 bytes  Output message area (msg-area; see note 3)

Notes:
1. The area that this field points to is specified by DSNACICS parameter COMMAREA. This area does not include the length bytes.
2. This is the same value that the DSNACICS caller specifies in the length bytes of the COMMAREA parameter.
3. Although the total length of msg-area is 500 bytes, DSNACICX can use only 256 bytes of that area.
NETNAME            CHAR(8);
MIRROR_TRANS       CHAR(4);
COMMAREA_TOTAL_LEN BIN FIXED(31);
SYNC_OPTS          BIN FIXED(31);
RET_CODE           BIN FIXED(31);
MSG_AREA           CHAR(500) VARYING;

DECLARE 1 COMMAREA BASED(P1),
          3 COMMAREA_LEN    BIN FIXED(15),
          3 COMMAREA_INPUT  CHAR(30),
          3 COMMAREA_OUTPUT CHAR(100);
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
          3 IND_PARM_LEVEL         BIN FIXED(15),
          3 IND_PGM_NAME           BIN FIXED(15),
          3 IND_CICS_APPLID        BIN FIXED(15),
          3 IND_CICS_LEVEL         BIN FIXED(15),
          3 IND_CONNECT_TYPE       BIN FIXED(15),
          3 IND_NETNAME            BIN FIXED(15),
          3 IND_MIRROR_TRANS       BIN FIXED(15),
          3 IND_COMMAREA           BIN FIXED(15),
          3 IND_COMMAREA_TOTAL_LEN BIN FIXED(15),
          3 IND_SYNC_OPTS          BIN FIXED(15),
          3 IND_RETCODE            BIN FIXED(15),
          3 IND_MSG_AREA           BIN FIXED(15);
/**************************/
/* LOCAL COPY OF COMMAREA */
/**************************/
DECLARE P1 POINTER;
DECLARE COMMAREA_STG CHAR(130) VARYING;
/**************************************************************/
/* ASSIGN VALUES TO INPUT PARAMETERS PARM_LEVEL, PGM_NAME,    */
/* MIRROR_TRANS, COMMAREA, COMMAREA_TOTAL_LEN, AND SYNC_OPTS. */
/* SET THE OTHER INPUT PARAMETERS TO NULL. THE DSNACICX       */
/* USER EXIT MUST ASSIGN VALUES FOR THOSE PARAMETERS.         */
/**************************************************************/
PARM_LEVEL = 1;
IND_PARM_LEVEL = 0;
PGM_NAME = 'CICSPGM1';
IND_PGM_NAME = 0;
MIRROR_TRANS = 'MIRT';
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = 'THIS IS THE INPUT FOR CICSPGM1';
COMMAREA_OUTPUT = ' ';
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
COMMAREA_TOTAL_LEN = COMMAREA_LEN + LENGTH(COMMAREA_OUTPUT);
IND_COMMAREA_TOTAL_LEN = 0;
SYNC_OPTS = 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID = -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA = -1;
/*****************************************/
/* CALL DSNACICS TO INVOKE CICSPGM1.     */
/*****************************************/
EXEC SQL
 CALL SYSPROC.DSNACICS(:PARM_LEVEL :IND_PARM_LEVEL,
                       :PGM_NAME :IND_PGM_NAME,
                       :CICS_APPLID :IND_CICS_APPLID,
                       :CICS_LEVEL :IND_CICS_LEVEL,
                       :CONNECT_TYPE :IND_CONNECT_TYPE,
                       :NETNAME :IND_NETNAME,
                       :MIRROR_TRANS :IND_MIRROR_TRANS,
                       :COMMAREA_STG :IND_COMMAREA,
                       :COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
                       :SYNC_OPTS :IND_SYNC_OPTS,
                       :RET_CODE :IND_RETCODE,
                       :MSG_AREA :IND_MSG_AREA);
DSNACICS output
DSNACICS places the return code from DSNACICS execution in the return-code parameter. If the value of the return code is non-zero, DSNACICS puts its own error messages and any error messages that are generated by CICS and the DSNACICX user exit routine in the msg-area parameter. The COMMAREA parameter contains the COMMAREA for the CICS server program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type. Therefore, if the server program puts data other than character data in the COMMAREA, that data can become corrupted by code page translation as it is passed to the caller. To avoid code page translation, you can change the COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program might need to do code page translation on any character data in the COMMAREA to make it readable.
DSNACICS restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke CICS server programs, server programs that you invoke through DSNACICS can contain only the CICS API commands that the DPL function supports. The list of supported commands is documented in CICS Transaction Server for z/OS Application Programming Reference. # # # DSNACICS does not propagate the transaction identifier (XID) of the thread. The stored procedure runs under a new private context rather than under the native context of the task that called it.
DSNACICS debugging
If you receive errors when you call DSNACICS, ask your system administrator to add a DSNDUMP DD statement in the startup procedure for the address space in which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an SVC dump whenever DSNACICS issues an error message.
CALL DSNLEUSR ( Type, AuthID, LinkName, NewAuthID, Password, ReturnCode, MsgArea )
SYSIBM.USERNAMES. AuthID is an input parameter of type VARCHAR(128). If you specify a null value, DSNLEUSR does not insert a value for AuthID.
LinkName
  Specifies the value that is to be inserted into the LINKNAME column of SYSIBM.USERNAMES. LinkName is an input parameter of type CHAR(8). If you specify a null value, DSNLEUSR does not insert a value for LinkName.
NewAuthID
  Specifies the value that is to be inserted into the NEWAUTHID column of SYSIBM.USERNAMES. NewAuthID is an input parameter of type VARCHAR(119). Although the NEWAUTHID field of SYSIBM.USERNAMES is VARCHAR(128), your input value is restricted to 119 or fewer bytes. If you specify a null value, DSNLEUSR does not insert a value for NewAuthID.
Password
  Specifies the value that is to be inserted into the PASSWORD column of SYSIBM.USERNAMES. Password is an input parameter of type CHAR(8). Although the PASSWORD field of SYSIBM.USERNAMES is VARCHAR(24), your input value is restricted to 8 or fewer bytes. If you specify a null value, DSNLEUSR does not insert a value for Password.
ReturnCode
  The return code from DSNLEUSR execution. Possible values are:
  0 DSNLEUSR executed successfully.
  8 The request to encrypt the translated authorization ID or password failed. MsgArea contains the following fields:
    v An unformatted SQLCA that describes the error. See Appendix D of DB2 SQL Reference for the description of the SQLCA.
    v A string that contains a DSNL045I message with the ICSF return code, the ICSF reason code, and the ICSF function that failed. The string immediately follows the SQLCA field and does not begin with a length field.
  12 The insert operation for the SYSIBM.USERNAMES row failed. MsgArea contains an SQLCA that describes the error.
  16 DSNLEUSR terminated because the DB2 subsystem is not in DB2 Version 8 new-function mode. MsgArea contains an SQLCA that describes the error.
  ReturnCode is an output parameter of type INTEGER.
MsgArea
  Contains information about DSNLEUSR execution. The information that is returned is described in the ReturnCode description. MsgArea is an output parameter of type VARCHAR(500).
WORKING-STORAGE SECTION.
...
***********************
* DSNLEUSR PARAMETERS *
***********************
01 TYPE                PICTURE X(1).
01 AUTHID.
   49 AUTHID-LN        PICTURE S9(4).
   49 AUTHID-DTA       PICTURE X(128).
01 LINKNAME            PICTURE X(8).
01 NEWAUTHID.
   49 NEWAUTHID-LN     PICTURE S9(4).
   49 NEWAUTHID-DTA    PICTURE X(119).
01 PASSWORD            PICTURE X(8).
01 RETURNCODE          PICTURE S9(9) COMP VALUE +0.
01 MSGAREA.
   49 MSGAREA-LN       PICTURE S9(4) COMP VALUE 500.
   49 MSGAREA-DTA      PICTURE X(500) VALUE SPACES.
*****************************************
* INDICATOR VARIABLES.                  *
*****************************************
01 TYPE-IND            PICTURE S9(4) COMP-4.
01 AUTHID-IND          PICTURE S9(4) COMP-4.
01 LINKNAME-IND        PICTURE S9(4) COMP-4.
01 NEWAUTHID-IND       PICTURE S9(4) COMP-4.
01 PASSWORD-IND        PICTURE S9(4) COMP-4.
01 RETURNCODE-IND      PICTURE S9(4) COMP-4.
01 MSGAREA-IND         PICTURE S9(4) COMP-4.
PROCEDURE DIVISION.
...
*********************************************************
* SET VALUES FOR DSNLEUSR INPUT PARAMETERS.             *
* THE SET OF INPUT VALUES REPRESENTS A ROW THAT         *
* DSNLEUSR INSERTS INTO SYSIBM.USERNAMES WITH           *
* ENCRYPTED NEWAUTHID AND PASSWORD VALUES.              *
*********************************************************
MOVE 'O' TO TYPE.
MOVE 0 TO AUTHID-LN.
MOVE SPACES TO AUTHID-DTA.
MOVE 'SYEC1B ' TO LINKNAME.
MOVE 4 TO NEWAUTHID-LN.
MOVE 'MYID' TO NEWAUTHID-DTA.
MOVE 'MYPASS' TO PASSWORD.
*****************
* CALL DSNLEUSR *
*****************
EXEC SQL
 CALL SYSPROC.DSNLEUSR
 (:TYPE       :TYPE-IND,
  :AUTHID     :AUTHID-IND,
  :LINKNAME   :LINKNAME-IND,
  :NEWAUTHID  :NEWAUTHID-IND,
  :PASSWORD   :PASSWORD-IND,
  :RETURNCODE :RETURNCODE-IND,
  :MSGAREA    :MSGAREA-IND)
END-EXEC.
DSNLEUSR output
If DSNLEUSR executes successfully, it inserts a row into SYSIBM.USERNAMES with encrypted values for the NEWAUTHID and PASSWORD columns and returns 0 for the ReturnCode parameter value. If DSNLEUSR does not execute successfully, it returns a non-zero value for the ReturnCode value and additional diagnostic information for the MsgArea parameter value. See DSNLEUSR option descriptions on page 1295 for a description of the ReturnCode and MsgArea contents.
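Example: To confirm the result of a successful call, you can query the inserted row; this query is a minimal sketch that uses the LINKNAME value from the preceding COBOL example (the NEWAUTHID and PASSWORD column values are stored in encrypted form, so they are not readable):

SELECT TYPE, AUTHID, LINKNAME, NEWAUTHID
  FROM SYSIBM.USERNAMES
  WHERE LINKNAME = 'SYEC1B';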
CALL SYSPROC.DSNAIMS ( dsnaims-function, dsnaims-2pc, xcf-group-name, xcf-ims-name, racf-userid, racf-groupid, ims-lterm, ims-modname, ims-tran-name, ims-data-in, ims-data-out, otma-tpipe-name, otma-dru-name, user-data-in, user-data-out, status-message, return-code )
an IMS full function or a fast path. SENDRECV does not support multiple iterations of a conversational transaction.
SEND
  Sends IMS data. SEND invokes an IMS transaction or command, but does not receive IMS data. If result data exists, it can be retrieved with the RECEIVE function. A send-only transaction cannot be an IMS fast path transaction or a conversational transaction.
RECEIVE
  Receives IMS data. The data can be the result of a transaction or command initiated by the SEND function or an unsolicited output message from an IMS application. The RECEIVE function does not initiate an IMS transaction or command.
dsnaims-2pc
  Specifies whether to use a two-phase commit process to perform the transaction syncpoint service. Possible values are Y or N. For N, commits and rollbacks that are issued by the IMS transaction do not affect commit and rollback processing in the DB2 application that invokes DSNAIMS. Furthermore, IMS resources are not affected by commits and rollbacks that are issued by the calling DB2 application. If you specify Y, you must also specify SENDRECV. To use a two-phase commit process, you must set the IMS control region parameter (RRS) to Y. This parameter is optional. The default is N.
xcf-group-name
  Specifies the XCF group name that the IMS OTMA joins. You can obtain this name by viewing the GRNAME parameter in IMS PROCLIB member DFSPBxxx or by using the IMS command /DISPLAY OTMA.
xcf-ims-name
  Specifies the XCF member name that IMS uses for the XCF group. If IMS is not using the XRF or RSR feature, you can obtain the XCF member name from the OTMANM parameter in IMS PROCLIB member DFSPBxxx. If IMS is using the XRF or RSR feature, you can obtain the XCF member name from the USERVAR parameter in IMS PROCLIB member DFSPBxxx.
racf-userid
  Specifies the RACF user ID that is used for IMS to perform the transaction or command authorization checking. This parameter is required if DSNAIMS is running APF-authorized. If DSNAIMS is running unauthorized, this parameter is ignored and the EXTERNAL SECURITY setting for the DSNAIMS stored procedure definition determines the user ID that is used by IMS.
racf-groupid
  Specifies the RACF group ID that is used for IMS to perform the transaction or command authorization checking. racf-groupid is used for stored procedures that are APF-authorized. It is ignored for other stored procedures.
ims-lterm
  Specifies an IMS LTERM name that is used to override the LTERM name in the I/O program communication block of the IMS application program. This field is used as an input and an output field:
  v For SENDRECV, the value is sent to IMS on input and can be updated by IMS on output.
  v For SEND, the parameter is IN only.
  v For RECEIVE, the parameter is OUT only.
  An empty or NULL value tells IMS to ignore the parameter.
ims-modname
  Specifies the formatting map name that is used by the server to map output data streams, such as 3270 streams. Although this invocation does not have IMS MFS support, the input MODNAME can be used as the map name to define the output data stream. This name is an 8-byte message output descriptor name that is placed in the I/O program communication block. When the message is inserted, IMS places this name in the message prefix with the map name in the program communication block of the IMS application program. For SENDRECV, the value is sent to IMS on input, and can be updated on output. For SEND, the parameter is IN only. For RECEIVE, it is OUT only. IMS ignores the parameter when it is an empty or NULL value.
ims-tran-name
  Specifies the name of an IMS transaction or command that is sent to IMS. If the IMS command is longer than eight characters, specify the first eight characters (including the / of the command). Specify the remaining characters of the command in the ims-data-in parameter. If you use an empty or NULL value, you must specify the full transaction name or command in the ims-data-in parameter.
ims-data-in
  Specifies the data that is sent to IMS. This parameter is required in each of the following cases:
  v Input data is required for IMS
  v No transaction name or command is passed in ims-tran-name
  v The command is longer than eight characters
  This parameter is ignored for RECEIVE functions.
ims-data-out
  Data returned after successful completion of the transaction. This parameter is required for SENDRECV and RECEIVE functions. The parameter is ignored for SEND functions.
otma-tpipe-name
  Specifies an 8-byte user-defined communication session name that IMS uses for the input and output data for the transaction or the command in a SEND or a RECEIVE function. If the otma-tpipe-name parameter is used for a SEND function to generate an IMS output message, the same otma-tpipe-name must be used to retrieve output data for the subsequent RECEIVE function.
otma-dru-name
  Specifies the name of an IMS user-defined exit routine, OTMA destination resolution user exit routine, if it is used. This IMS exit routine can format part of the output prefix and can determine the output destination for an IMS ALT_PCB output. If an empty or null value is passed, IMS ignores this parameter.
user-data-in
  This optional parameter contains any data that is to be included in the IMS message prefix, so that the data can be accessed by IMS OTMA user exit routines (DFSYIOE0 and DFSYDRU0) and can be tracked by IMS log records. IMS applications that run in dependent regions do not access this data. The specified user data is not included in the output message prefix. You can use this parameter to store input and output correlator tokens or other information. This parameter is ignored for RECEIVE functions.
user-data-out
  On output, this field contains the user-data-in in the IMS output prefix. IMS user exit routines (DFSYIOE0 and DFSYDRU0) can also create user-data-out for SENDRECV and RECEIVE functions. The parameter is not updated for SEND functions.
status-message
  Indicates any error message that is returned from the transaction or command, OTMA, RRS, or DSNAIMS.
return-code
  Indicates the return code that is returned for the transaction or command, OTMA, RRS, or DSNAIMS.
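Example: The following embedded SQL fragment is a minimal sketch of a SENDRECV call, with the parameters in the order shown in the syntax diagram above. All of the literal values (group IMS7GRP, member IMS7TMEM, user ID IMSUSER, transaction IVTNO, and the input data) are hypothetical, and :IMSDATAOUT, :USERDATAOUT, :STATUSMSG, and :RC are host variables for the output parameters:

EXEC SQL
 CALL SYSPROC.DSNAIMS('SENDRECV', 'N', 'IMS7GRP', 'IMS7TMEM',
                      'IMSUSER', NULL, NULL, NULL, 'IVTNO',
                      'DISPLAY LAST1', :IMSDATAOUT, NULL, NULL, NULL,
                      :USERDATAOUT, :STATUSMSG, :RC);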
5. To ensure that you connect to the intended IMS target, consistently use the XCF group and member names that you associate with each stored procedure instance. Example:
CALL SYSPROC.DSNAIMS("SENDRECV", "N", "IMS7GRP", "IMS7TMEM", ...) CALL SYSPROC.DSNAIMSB("SENDRECV", "N", "IMS8GRP", "IMS8TMEM", ...)
CALL SYSPROC.DSNAIMS2 ( dsnaims-function, dsnaims-2pc, xcf-group-name, xcf-ims-name, racf-userid, racf-groupid, ims-lterm, ims-modname, ims-tran-name, ims-data-in, ims-data-out, otma-tpipe-name, otma-dru-name, user-data-in, user-data-out, status-message, otma-data-inseg, return-code )
using the XRF or RSR feature, you can obtain the XCF member name from the OTMANM parameter in IMS PROCLIB member DFSPBxxx. If IMS is using the XRF or RSR feature, you can obtain the XCF member name from the USERVAR parameter in IMS PROCLIB member DFSPBxxx.
racf-userid
  Specifies the RACF user ID that is used for IMS to perform the transaction or command authorization checking. This parameter is required if DSNAIMS2 is running APF-authorized. If DSNAIMS2 is running unauthorized, this parameter is ignored and the EXTERNAL SECURITY setting for the DSNAIMS2 stored procedure definition determines the user ID that is used by IMS.
racf-groupid
  Specifies the RACF group ID that is used for IMS to perform the transaction or command authorization checking. racf-groupid is used for stored procedures that are APF-authorized. It is ignored for other stored procedures.
ims-lterm
  Specifies an IMS LTERM name that is used to override the LTERM name in the I/O program communication block of the IMS application program. This field is used as an input and an output field:
  v For SENDRECV, the value is sent to IMS on input and can be updated by IMS on output.
  v For SEND, the parameter is IN only.
  v For RECEIVE, the parameter is OUT only.
  An empty or NULL value tells IMS to ignore the parameter.
ims-modname
  Specifies the formatting map name that is used by the server to map output data streams, such as 3270 streams. Although this invocation does not have IMS MFS support, the input MODNAME can be used as the map name to define the output data stream. This name is an 8-byte message output descriptor name that is placed in the I/O program communication block. When the message is inserted, IMS places this name in the message prefix with the map name in the program communication block of the IMS application program. For SENDRECV, the value is sent to IMS on input, and can be updated on output. For SEND, the parameter is IN only. For RECEIVE, it is OUT only. IMS ignores the parameter when it is an empty or NULL value.
ims-tran-name
  Specifies the name of an IMS transaction or command that is sent to IMS. If the IMS command is longer than eight characters, specify the first eight characters (including the / of the command). Specify the remaining characters of the command in the ims-data-in parameter. If you use an empty or NULL value, you must specify the full transaction name or command in the ims-data-in parameter.
ims-data-in
  Specifies the data that is sent to IMS. This parameter is required in each of the following cases:
  v Input data is required for IMS
  v No transaction name or command is passed in ims-tran-name
  v The command is longer than eight characters
  This parameter is ignored for RECEIVE functions.
ims-data-out
  Data returned after successful completion of the transaction. This parameter is required for SENDRECV and RECEIVE functions. The parameter is ignored for SEND functions.
otma-tpipe-name
  Specifies an 8-byte user-defined communication session name that IMS uses for the input and output data for the transaction or the command in a SEND or a RECEIVE function. If the otma-tpipe-name parameter is used for a SEND function to generate an IMS output message, the same otma-tpipe-name must be used to retrieve output data for the subsequent RECEIVE function.
otma-dru-name
  Specifies the name of an IMS user-defined exit routine, OTMA destination resolution user exit routine, if it is used. This IMS exit routine can format part of the output prefix and can determine the output destination for an IMS ALT_PCB output. If an empty or null value is passed, IMS ignores this parameter.
user-data-in
  This optional parameter contains any data that is to be included in the IMS message prefix, so that the data can be accessed by IMS OTMA user exit routines (DFSYIOE0 and DFSYDRU0) and can be tracked by IMS log records. IMS applications that run in dependent regions do not access this data. The specified user data is not included in the output message prefix. You can use this parameter to store input and output correlator tokens or other information. This parameter is ignored for RECEIVE functions.
user-data-out
  On output, this field contains the user-data-in in the IMS output prefix. IMS user exit routines (DFSYIOE0 and DFSYDRU0) can also create user-data-out for SENDRECV and RECEIVE functions. The parameter is not updated for SEND functions.
status-message
  Indicates any error message that is returned from the transaction or command, OTMA, RRS, or DSNAIMS2.
otma-data-inseg
  Specifies the number of segments followed by the lengths of the segments to be sent to IMS. All values should be separated by semicolons. This field is required to send multi-segment input to IMS. For single-segment transactions and commands, set the field to NULL, 0, or '0;'.
return-code
  Indicates the return code that is returned for the transaction or command, OTMA, RRS, or DSNAIMS2.
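Example: The following fragment is a minimal sketch of a multi-segment SEND call through DSNAIMS2, with the parameters in the order shown in the syntax diagram above. The otma-data-inseg value '2;30;40' describes two input segments of 30 and 40 bytes; all of the literal values are hypothetical, and :IMSDATAIN, :USERDATAOUT, :STATUSMSG, and :RC are host variables:

EXEC SQL
 CALL SYSPROC.DSNAIMS2('SEND', 'N', 'IMS8GRP', 'IMS8TMEM',
                       'IMSUSER', NULL, NULL, NULL, NULL,
                       :IMSDATAIN, NULL, 'TPIPE1', NULL, NULL,
                       :USERDATAOUT, :STATUSMSG, '2;30;40', :RC);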
Environment
DSNAEXP must run in a WLM-established stored procedure address space. Before you can invoke DSNAEXP, table sqlid.PLAN_TABLE must exist. sqlid is the value that you specify for the sqlid input parameter when you call DSNAEXP. Job DSNTESC in DSN8810.SDSNSAMP contains a sample CREATE TABLE statement for the PLAN_TABLE.
Authorization required
To execute the CALL DSN8.DSNAEXP statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNAEXP
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
In addition:
v The SQL authorization ID of the process in which DSNAEXP is called must have the authority to execute SET CURRENT SQLID=sqlid.
v The SQL authorization ID of the process must also have one of the following characteristics:
  - Be the owner of a plan table named PLAN_TABLE
  - Have an alias on a plan table named owner.PLAN_TABLE and have SELECT and INSERT privileges on the table
CALL DSNAEXP ( sqlid, queryno, sql-statement, parse, qualifier, sqlcode, sqlstate, error-message )
names in the input SQL statement. Valid values are 'Y' and 'N'. If the value of parse is 'Y', qualifier must contain a valid SQL qualifier name. If sql-statement contains an INSERT within a SELECT or a common table expression, you need to disable the parsing functionality and add the qualifiers manually. parse is an input parameter of type CHAR(1).
qualifier
  Specifies the qualifier that DSNAEXP adds to unqualified table or view names in the input SQL statement. If the value of parse is 'N', qualifier is ignored. If the statement on which EXPLAIN is run contains an INSERT within a SELECT or a common table expression, parse must be 'N', and table and view qualifiers must be explicitly specified. qualifier is an input parameter of type CHAR(8).
sqlcode
  Contains the SQLCODE from execution of the EXPLAIN statement. sqlcode is an output parameter of type INTEGER.
sqlstate
  Contains the SQLSTATE from execution of the EXPLAIN statement. sqlstate is an output parameter of type CHAR(5).
error-message
  Contains information about DSNAEXP execution. If the SQLCODE from execution of the EXPLAIN statement is not 0, error-message contains the error message for the SQLCODE. error-message is an output parameter of type VARCHAR(960).
/* Initialize the output parameters                                 */
hvsqlcode = 0;
for (i = 0; i < 5; i++)
  hvsqlstate[i] = '0';
hvsqlstate[5] = '\0';
hvmsg.hvmsg_len = 0;
for (i = 0; i < 960; i++)
  hvmsg.hvmsg_text[i] = ' ';
hvmsg.hvmsg_text[960] = '\0';

/* Call DSNAEXP to do EXPLAIN and put output in ADMF001.PLAN_TABLE  */
EXEC SQL CALL DSN8.DSNAEXP(:hvsqlid, :hvqueryno, :hvsql_stmt,
                           :hvparse, :hvqualifier, :hvsqlcode,
                           :hvsqlstate, :hvmsg);
DSNAEXP output
If DSNAEXP executes successfully, sqlid.PLAN_TABLE contains the EXPLAIN output. A user with SELECT authority on sqlid.PLAN_TABLE can obtain the results of the EXPLAIN that was executed by DSNAEXP by executing this query:
SELECT * FROM sqlid.PLAN_TABLE WHERE QUERYNO = queryno;

Here, queryno is the value that was specified for the queryno input parameter. Because QUERYNO is an INTEGER column, the value is not enclosed in quotation marks.
If DSNAEXP does not execute successfully, sqlcode, sqlstate, and error-message contain error information.
Environment
ADMIN_COMMAND_DB2 must run in a WLM-established stored procedure address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMCD
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
To execute the DB2 command, you must use a privilege set that includes the authorization to execute the DB2 command, as described in the DB2 Command Reference.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_COMMAND_DB2 ( DB2-command, command-length, parse-type, DB2-member, commands-executed, IFI-return-code, IFI-reason-code, excess-bytes, group-IFI-reason-code, group-excess-bytes, return-code, message )

For DB2-member, NULL can be specified instead of a member name.
Option descriptions
DB2-command
   Specifies any DB2 command, such as -DISPLAY THREAD(*), or multiple DB2 commands. With multiple DB2 commands, use \0 to delimit the commands. The DB2 command is executed using the authorization ID of the user who invoked the stored procedure.
   This is an input parameter of type VARCHAR(32704) and cannot be null.

command-length
   Specifies the length of the DB2 command or commands. When multiple DB2 commands are specified in DB2-command, command-length is the sum of the lengths of all of those commands, including the \0 command delimiters. A sketch of building such a command string appears after these option descriptions.
   This is an input parameter of type INTEGER and cannot be null.

parse-type
   Identifies the type of output message parsing requested. If you specify a parse type, ADMIN_COMMAND_DB2 parses the command output messages and provides the formatted result in a global temporary table. Possible values are:
   BP   Parse -DISPLAY BUFFERPOOL command output messages.
   DB   Parse -DISPLAY DATABASE command output messages and return database information.
   TS   Parse -DISPLAY DATABASE(...) SPACENAM(...) command output messages and return table space information.
   IX   Parse -DISPLAY DATABASE(...) SPACENAM(...) command output messages and return index space information.
   THD  Parse -DISPLAY THREAD command output messages.
   UT   Parse -DISPLAY UTILITY command output messages.
   GRP  Parse -DISPLAY GROUP command output messages.
   DDF  Parse -DISPLAY DDF command output messages.
   Any other value
        Do not parse any command output messages.
   This is an input parameter of type VARCHAR(3) and cannot be null.
DB2-member
   Specifies the name of a single data sharing group member on which an IFI request is to be executed.
   This is an input parameter of type VARCHAR(8).

commands-executed
   Provides the number of commands that were executed.
   This is an output parameter of type INTEGER.

IFI-return-code
   Provides the IFI return code.
   This is an output parameter of type INTEGER.

IFI-reason-code
   Provides the IFI reason code.
   This is an output parameter of type INTEGER.

excess-bytes
   Indicates the number of bytes that did not fit in the return area.
   This is an output parameter of type INTEGER.

group-IFI-reason-code
   Provides the reason code for the situation in which an IFI call requests data from members of a data sharing group, and not all the data is returned from group members.
   This is an output parameter of type INTEGER.

group-excess-bytes
   Indicates the total length of data that was returned from other data sharing group members and did not fit in the return area.
   This is an output parameter of type INTEGER.

return-code
   Provides the return code from the stored procedure. Possible values are:
   0    The stored procedure did not encounter an SQL error during processing. Check the IFI-return-code value to determine whether the DB2 command issued using the instrumentation facility interface (IFI) was successful or not.
   12   The stored procedure encountered an SQL error during processing. The message output parameter contains messages describing the SQL error.
   This is an output parameter of type INTEGER.

message
   Contains messages describing the SQL error encountered by the stored procedure. If no SQL error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages.
   This is an output parameter of type VARCHAR(1331).
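The following plain C sketch builds a buffer that carries two commands separated by the \0 delimiter and computes the matching command-length value; the command texts are illustrative:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *cmd1 = "-DISPLAY THREAD(*)";
    const char *cmd2 = "-DISPLAY DDF";
    char buffer[64];
    int  len1 = (int)strlen(cmd1);
    int  len2 = (int)strlen(cmd2);

    /* Copy the first command, a \0 delimiter, then the second command */
    memcpy(buffer, cmd1, (size_t)len1);
    buffer[len1] = '\0';
    memcpy(buffer + len1 + 1, cmd2, (size_t)len2);

    /* command-length counts both commands plus the \0 delimiter       */
    int command_length = len1 + 1 + len2;
    printf("command-length = %d\n", command_length);
    return 0;
}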
Example
The following C language sample shows how to invoke ADMIN_COMMAND_DB2:
/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_COMMAND_DB2 parameters                            */
  char      command[32705];     /* DB2 command                       */
  short int ind_command;        /* Indicator variable                */
  long int  lencommand;         /* DB2 command length                */
  short int ind_lencommand;     /* Indicator variable                */
  char      parsetype[4];       /* Parse type required               */
  short int ind_parsetype;      /* Indicator variable                */
  char      mbrname[9];         /* DB2 data sharing group            */
                                /* member name                       */
  short int ind_mbrname;        /* Indicator variable                */
  long int  excommands;         /* Number of commands exec.          */
  short int ind_excommands;     /* Indicator variable                */
  long int  retifca;            /* IFI return code                   */
  short int ind_retifca;        /* Indicator variable                */
  long int  resifca;            /* IFI reason code                   */
  short int ind_resifca;        /* Indicator variable                */
  long int  xsbytes;            /* Excessive bytes                   */
  short int ind_xsbytes;        /* Indicator variable                */
  long int  gresifca;           /* IFI group reason code             */
  short int ind_gresifca;       /* Indicator variable                */
  long int  gxsbytes;           /* Group excessive bytes             */
  short int ind_gxsbytes;       /* Indicator variable                */
  long int  retcd;              /* Return code                       */
  short int ind_retcd;          /* Indicator variable                */
  char      errmsg[1332];       /* Error message                     */
  short int ind_errmsg;         /* Indicator variable                */

  /* Result Set Locators                                             */
  volatile SQL TYPE IS RESULT_SET_LOCATOR * rs_loc1, rs_loc2;

  /* First result set row                                            */
  long int  rownum;             /* Sequence number of the table row  */
  char      text[81];           /* Command output                    */

  /* Second result set row                                           */
  long int  ddfrownum;          /* DDF table sequence                */
  char      ddfstat[7];         /* DDF status                        */
  char      ddfloc[19];         /* DDF location                      */
  char      ddflunm[18];        /* DDF luname                        */
  char      ddfgenlu[18];       /* DDF generic lu                    */
  char      ddfv4ipaddr[18];    /* DDF IPv4 address                  */
  char      ddfv6ipaddr[40];    /* DDF IPv6 address                  */
  short int ind_ddfv6ipaddr;    /* Indicator variable                */
  long int  ddftcpport;         /* DDF tcpport                       */
  long int  ddfresport;         /* DDF resport                       */
  char      ddfsqldom[46];      /* DDF sql domain                    */
  char      ddfrsyncdom[46];    /* DDF resync domain                 */
  short int ind_ddfrsyncdom;    /* Indicator variable                */
  long int  ddfsecport;         /* DDF secure port                   */
  short int ind_ddfsecport;     /* Indicator variable                */
  char      ddfipname[9];       /* DDF IPNAME                        */
  short int ind_ddfipname;      /* Indicator variable                */
  char      ddfaliasname1[19];  /* DDF alias 1 name                  */
  short int ind_ddfaliasname1;     /* Indicator variable             */
  long int  ddfaliasport1;         /* DDF alias 1 TCP/IP port        */
  short int ind_ddfaliasport1;     /* Indicator variable             */
  long int  ddfaliassecport1;      /* DDF alias 1 secure port        */
  short int ind_ddfaliassecport1;  /* Indicator variable             */
  char      ddfaliasname2[19];     /* DDF alias 2 name               */
  short int ind_ddfaliasname2;     /* Indicator variable             */
  long int  ddfaliasport2;         /* DDF alias 2 TCP/IP port        */
  short int ind_ddfaliasport2;     /* Indicator variable             */
  long int  ddfaliassecport2;      /* DDF alias 2 secure port        */
  short int ind_ddfaliassecport2;  /* Indicator variable             */
  char      ddfaliasname3[19];     /* DDF alias 3 name               */
  short int ind_ddfaliasname3;     /* Indicator variable             */
  long int  ddfaliasport3;         /* DDF alias 3 TCP/IP port        */
  short int ind_ddfaliasport3;     /* Indicator variable             */
  long int  ddfaliassecport3;      /* DDF alias 3 secure port        */
  short int ind_ddfaliassecport3;  /* Indicator variable             */
  char      ddfaliasname4[19];     /* DDF alias 4 name               */
  short int ind_ddfaliasname4;     /* Indicator variable             */
  long int  ddfaliasport4;         /* DDF alias 4 TCP/IP port        */
  short int ind_ddfaliasport4;     /* Indicator variable             */
  long int  ddfaliassecport4;      /* DDF alias 4 secure port        */
  short int ind_ddfaliassecport4;  /* Indicator variable             */
  char      ddfaliasname5[19];     /* DDF alias 5 name               */
  short int ind_ddfaliasname5;     /* Indicator variable             */
  long int  ddfaliasport5;         /* DDF alias 5 TCP/IP port        */
  short int ind_ddfaliasport5;     /* Indicator variable             */
  long int  ddfaliassecport5;      /* DDF alias 5 secure port        */
  short int ind_ddfaliassecport5;  /* Indicator variable             */
  char      ddfaliasname6[19];     /* DDF alias 6 name               */
  short int ind_ddfaliasname6;     /* Indicator variable             */
  long int  ddfaliasport6;         /* DDF alias 6 TCP/IP port        */
  short int ind_ddfaliasport6;     /* Indicator variable             */
  long int  ddfaliassecport6;      /* DDF alias 6 secure port        */
  short int ind_ddfaliassecport6;  /* Indicator variable             */
  char      ddfaliasname7[19];     /* DDF alias 7 name               */
  short int ind_ddfaliasname7;     /* Indicator variable             */
  long int  ddfaliasport7;         /* DDF alias 7 TCP/IP port        */
  short int ind_ddfaliasport7;     /* Indicator variable             */
  long int  ddfaliassecport7;      /* DDF alias 7 secure port        */
  short int ind_ddfaliassecport7;  /* Indicator variable             */
  char      ddfaliasname8[19];     /* DDF alias 8 name               */
  short int ind_ddfaliasname8;     /* Indicator variable             */
  long int  ddfaliasport8;         /* DDF alias 8 TCP/IP port        */
  short int ind_ddfaliasport8;     /* Indicator variable             */
  long int  ddfaliassecport8;      /* DDF alias 8 secure port        */
  short int ind_ddfaliassecport8;  /* Indicator variable             */
  char      ddfmbripv4addr[18];    /* DDF DSG member IPv4 addr       */
  short int ind_ddfmbripv4addr;    /* Indicator variable             */
  char      ddfmbripv6addr[40];    /* DDF DSG member IPv6 addr       */
  short int ind_ddfmbripv6addr;    /* Indicator variable             */
  EXEC SQL END DECLARE SECTION;
  /******************************************************************/
  /* Assign values to input parameters to execute the DB2           */
  /* command "-DISPLAY DDF"                                         */
  /* Set the indicator variables to 0 for non-null input parameters */
  /* Set the indicator variables to -1 for null input parameters    */
  /******************************************************************/
  strcpy(command, "-DISPLAY DDF");
  ind_command = 0;
  lencommand = strlen(command);
  ind_lencommand = 0;
  strcpy(parsetype, "DDF");
  ind_parsetype = 0;
  ind_mbrname = -1;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_COMMAND_DB2                */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_COMMAND_DB2
                  (:command    :ind_command,
                   :lencommand :ind_lencommand,
                   :parsetype  :ind_parsetype,
                   :mbrname    :ind_mbrname,
                   :excommands :ind_excommands,
                   :retifca    :ind_retifca,
                   :resifca    :ind_resifca,
                   :xsbytes    :ind_xsbytes,
                   :gresifca   :ind_gresifca,
                   :gxsbytes   :ind_gxsbytes,
                   :retcd      :ind_retcd,
                   :errmsg     :ind_errmsg);

  /******************************************************************/
  /* Retrieve result set(s) when the SQLCODE from the call is +466, */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                 /* Result sets were returned */
  {
    /* ESTABLISH A LINK BETWEEN EACH RESULT SET AND ITS LOCATOR     */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1, :rs_loc2)
             WITH PROCEDURE SYSPROC.ADMIN_COMMAND_DB2;

    /* ASSOCIATE A CURSOR WITH EACH RESULT SET                      */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;
    EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :rs_loc2;

    /* PERFORM FETCHES USING C1 TO RETRIEVE ALL ROWS FROM THE       */
    /* FIRST RESULT SET                                             */
    EXEC SQL FETCH C1 INTO :rownum, :text;
    while(SQLCODE == 0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :text;
    }

    /* PERFORM FETCHES USING C2 TO RETRIEVE THE -DISPLAY DDF        */
    /* PARSED OUTPUT FROM THE SECOND RESULT SET                     */
    EXEC SQL FETCH C2 INTO :ddfrownum, :ddfstat, :ddfloc, :ddflunm,
      :ddfgenlu, :ddfv4ipaddr, :ddfv6ipaddr:ind_ddfv6ipaddr,
      :ddftcpport, :ddfresport, :ddfsqldom,
      :ddfrsyncdom:ind_ddfrsyncdom,
      :ddfsecport:ind_ddfsecport,
      :ddfipname:ind_ddfipname,
      :ddfaliasname1:ind_ddfaliasname1,
      :ddfaliasport1:ind_ddfaliasport1,
      :ddfaliassecport1:ind_ddfaliassecport1,
      :ddfaliasname2:ind_ddfaliasname2,
      :ddfaliasport2:ind_ddfaliasport2,
      :ddfaliassecport2:ind_ddfaliassecport2,
      :ddfaliasname3:ind_ddfaliasname3,
      :ddfaliasport3:ind_ddfaliasport3,
      :ddfaliassecport3:ind_ddfaliassecport3,
      :ddfaliasname4:ind_ddfaliasname4,
      :ddfaliasport4:ind_ddfaliasport4,
      :ddfaliassecport4:ind_ddfaliassecport4,
      :ddfaliasname5:ind_ddfaliasname5,
      :ddfaliasport5:ind_ddfaliasport5,
      :ddfaliassecport5:ind_ddfaliassecport5,
      :ddfaliasname6:ind_ddfaliasname6,
      :ddfaliasport6:ind_ddfaliasport6,
      :ddfaliassecport6:ind_ddfaliassecport6,
      :ddfaliasname7:ind_ddfaliasname7,
      :ddfaliasport7:ind_ddfaliasport7,
      :ddfaliassecport7:ind_ddfaliassecport7,
      :ddfaliasname8:ind_ddfaliasname8,
      :ddfaliasport8:ind_ddfaliasport8,
      :ddfaliassecport8:ind_ddfaliassecport8,
      :ddfmbripv4addr:ind_ddfmbripv4addr,
      :ddfmbripv6addr:ind_ddfmbripv6addr;
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1310:
v commands-executed
v IFI-return-code
v IFI-reason-code
v excess-bytes
v group-IFI-reason-code
v group-excess-bytes
v return-code
v message
In addition to the preceding output, the stored procedure returns two result sets. The first result set is returned in the created global temporary table SYSIBM.DB2_CMD_OUTPUT and contains the DB2 command output messages that were not parsed. The following table shows the format of the first result set:
Table 341. Result set row for first ADMIN_COMMAND_DB2 result set
Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n
TEXT          CHAR(80)    DB2 command output message line
The format of the second result set varies, depending on the DB2 command issued and the parse-type value.
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = BP)
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = THD)
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = UT)
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DB or TS or IX)
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = GRP)
v Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DDF)
The following table shows the format of the result set returned in the created global temporary table SYSIBM.BUFFERPOOL_STATUS when parse-type = BP:
Table 342. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = BP)
Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n
BPNAME        CHAR(6)     Buffer pool name
VPSIZE        INTEGER     Buffer pool size
VPSEQT        INTEGER     Sequential steal threshold for the buffer pool
VPPSEQT       INTEGER     Parallel sequential threshold for the buffer pool
VPXPSEQT      INTEGER     Assisting parallel sequential threshold for the buffer pool
DWQT          INTEGER     Deferred write threshold for the buffer pool
PCT_VDWQT     INTEGER     Vertical deferred write threshold for the buffer pool (as a percentage of virtual buffer pool size)
ABS_VDWQT     INTEGER     Vertical deferred write threshold for the buffer pool (as an absolute number of buffers)
PGSTEAL                   Page-stealing algorithm that DB2 uses for the buffer pool
ID                        Buffer pool internal identifier
USE_COUNT                 Number of open table spaces or index spaces that reference this buffer pool
PGFIX         CHAR(3)     Specifies whether the buffer pool should be fixed in real storage when it is used
The following table shows the format of the result set returned in the created global temporary table SYSIBM.DB2_THREAD_STATUS when parse-type = THD:
Table 343. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = THD)
Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n
Table 343. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = THD) (continued)
Column name   Data type    Contents
TYPE          INTEGER      Thread type:
                           0  Unknown
                           1  Active
                           2  Inactive
                           3  Indoubt
                           4  Postponed
NAME          CHAR(8)      Connection name used to establish the thread
STATUS        CHAR(11)     Status of the conversation or socket
ACTIVE        CHAR(1)      Indicates whether a thread is active or not. An asterisk means that the thread is active within DB2.
                           Current number of DB2 requests on the thread
                           Recovery correlation ID associated with the thread
                           Authorization ID associated with the thread
                           Plan name associated with the thread
                           Address space identifier
                           Unique thread identifier
                           Name of the two-phase commit coordinator
                           Indicates whether or not the thread needs to be reset to purge info from the indoubt thread report
                           Unit of recovery identifier
                           Logical unit of work ID of the thread
                           Client workstation name
                           Client user ID
                           Client application name
                           Client accounting information
                           Location name of the remote system
                           Additional thread information
The following table shows the format of the result set returned in the created global temporary table SYSIBM.UTILITY_JOB_STATUS when parse-type = UT:
Table 344. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = UT)
Column name   Data type       Contents
ROWNUM        INTEGER         Sequence number of the table row, from 1 to n
CSECT         CHAR(8)         Name of the command program CSECT that issued the message
USER          CHAR(8)         User ID of the person running the utility
MEMBER        CHAR(8)         Utility job is running on this member
UTILID        CHAR(16)        Utility job identifier
STATEMENT     INTEGER         Utility statement number
UTILITY       CHAR(20)        Utility name
PHASE         CHAR(20)        Utility restart from the beginning of this phase
COUNT         INTEGER         Number of pages or records processed in a utility phase
STATUS        CHAR(18)        Utility status
DETAIL        VARCHAR(4050)   Additional utility information
NUM_OBJ       INTEGER         Total number of objects in the list of objects the utility is processing
LAST_OBJ      INTEGER         Last object that started
The following table shows the format of the result set returned in the created global temporary table SYSIBM.DB_STATUS when parse-type = DB or TS or IX:
Table 345. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DB or TS or IX)
Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n
DBNAME        CHAR(8)     Name of the database
SPACENAM      CHAR(8)     Name of the table space or index
TYPE          CHAR(2)     Status type:
                          DB  Database
                          TS  Table space
                          IX  Index
PART          SMALLINT    Individual partition or range of partitions
STATUS        CHAR(18)    Status of the database, table space, or index
The following table shows the format of the result set returned in the created global temporary table SYSIBM.DATA_SHARING_GROUP when parse-type = GRP:
Table 346. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = GRP)
Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n
DB2_MEMBER    CHAR(8)     Name of the DB2 group member
ID            INTEGER     ID of the DB2 group member
SUBSYS        CHAR(4)     Subsystem name of the DB2 group member
CMDPREF       CHAR(8)     Command prefix for the DB2 group member
STATUS        CHAR(8)     Status of the DB2 group member
DB2_LVL       CHAR(3)     DB2 version, release, and modification level
SYSTEM_NAME   CHAR(8)     Name of the z/OS system where the member is running, or was last running in cases when the member status is QUIESCED or FAILED
IRLM_SUBSYS   CHAR(4)     Name of the IRLM subsystem to which the DB2 member is connected
IRLMPROC      CHAR(8)     Procedure name of the connected IRLM
The following table shows the format of the result set returned in the created global temporary table SYSIBM.DDF_CONFIG when parse-type = DDF:
Table 347. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DDF)
Column name   Data type           Contents
ROWNUM        INTEGER NOT NULL    Sequence number of the table row, from 1 to n
STATUS        CHAR(6) NOT NULL    Operational status of DDF
LOCATION      CHAR(18) NOT NULL   Location name of DDF
LUNAME        CHAR(17) NOT NULL   Fully qualified LUNAME of DDF
GENERICLU     CHAR(17) NOT NULL   Fully qualified generic LUNAME of DDF
IPV4ADDR      CHAR(17) NOT NULL   IPV4 address of DDF
Table 347. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DDF) (continued)
Column name     Data type           Contents
IPV6ADDR        CHAR(39)            IPV6 address of DDF. Always null.
TCPPORT         INTEGER NOT NULL    SQL listener port used by DDF
RESPORT         INTEGER NOT NULL    Resync listener port used by DDF
SQL_DOMAIN      CHAR(45) NOT NULL   Domain name associated with the IP address in IPV4ADDR or IPV6ADDR
RSYNC_DOMAIN    CHAR(45)            Domain name associated with a specific member IP address
SECPORT         INTEGER             Secure SQL listener TCP/IP port number. Always null.
IPNAME          CHAR(8)             IPNAME used by DDF. Always null.
ALIASNAME1      CHAR(18)            An alias name value specified in the BSDS DDF record
ALIASPORT1      INTEGER             TCP/IP port associated with ALIASNAME1
ALIASSECPORT1   INTEGER             Secure TCP/IP port associated with ALIASNAME1. Always null.
ALIASNAME2      CHAR(18)            An alias name value specified in the BSDS DDF record
ALIASPORT2      INTEGER             TCP/IP port associated with ALIASNAME2
ALIASSECPORT2   INTEGER             Secure TCP/IP port associated with ALIASNAME2. Always null.
ALIASNAME3      CHAR(18)            An alias name value specified in the BSDS DDF record
ALIASPORT3      INTEGER             TCP/IP port associated with ALIASNAME3
ALIASSECPORT3   INTEGER             Secure TCP/IP port associated with ALIASNAME3. Always null.
ALIASNAME4      CHAR(18)            An alias name value specified in the BSDS DDF record
ALIASPORT4      INTEGER             TCP/IP port associated with ALIASNAME4
ALIASSECPORT4   INTEGER             Secure TCP/IP port associated with ALIASNAME4. Always null.
ALIASNAME5      CHAR(18)            An alias name value specified in the BSDS DDF record
Table 347. Result set row for second ADMIN_COMMAND_DB2 result set (parse-type = DDF) (continued)
Column name       Data type   Contents
ALIASPORT5        INTEGER     TCP/IP port associated with ALIASNAME5
ALIASSECPORT5     INTEGER     Secure TCP/IP port associated with ALIASNAME5. Always null.
ALIASNAME6        CHAR(18)    An alias name value specified in the BSDS DDF record
ALIASPORT6        INTEGER     TCP/IP port associated with ALIASNAME6
ALIASSECPORT6     INTEGER     Secure TCP/IP port associated with ALIASNAME6. Always null.
ALIASNAME7        CHAR(18)    An alias name value specified in the BSDS DDF record
ALIASPORT7        INTEGER     TCP/IP port associated with ALIASNAME7
ALIASSECPORT7     INTEGER     Secure TCP/IP port associated with ALIASNAME7. Always null.
ALIASNAME8        CHAR(18)    An alias name value specified in the BSDS DDF record
ALIASPORT8        INTEGER     TCP/IP port associated with ALIASNAME8
ALIASSECPORT8     INTEGER     Secure TCP/IP port associated with ALIASNAME8. Always null.
MEMBER_IPV4ADDR   CHAR(17)    IPV4 address associated with the specific member of a data sharing group
MEMBER_IPV6ADDR   CHAR(39)    IPV6 address associated with the specific member of a data sharing group. Always null.
Environment
ADMIN_COMMAND_DSN runs in a WLM-established stored procedures address space. TCB=1 is also required.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_COMMAND_DSN stored procedure
v Ownership of the stored procedure
v SYSADM authority
To execute the DSN subcommand, you must use a privilege set that includes the authorization to execute the DSN subcommand, as described in the DB2 Command Reference.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_COMMAND_DSN ( DSN-subcommand, message )
Option descriptions
DSN-subcommand
   Specifies the DSN subcommand to be executed. If the DSN subcommand passed to the stored procedure is not BIND, REBIND, or FREE, an error message is returned. The DSN subcommand is performed using the authorization ID of the user who invoked the stored procedure.
   This is an input parameter of type VARCHAR(32704) and cannot be null.

message
   Contains messages if an error occurs during stored procedure execution. A blank message does not mean that the DSN subcommand completed successfully. The calling application must read the result set to determine if the DSN subcommand was successful or not.
   This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_COMMAND_DSN:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_COMMAND_DSN parameters                            */
  char subcmd[32705];           /* BIND, REBIND or FREE DSN          */
  char errmsg[1332];            /* Error message                     */

  /* Result set locators                                             */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                  */
  long int rownum;              /* Sequence number of the table row  */
  char text[256];               /* DSN subcommand output row         */
  EXEC SQL END DECLARE SECTION;
  /******************************************************************/
  /* Set input parameter to execute a REBIND PLAN DSN subcommand    */
  /******************************************************************/
  strcpy(subcmd, "REBIND PLAN (DSNACCOB) FLAG(W)");

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_COMMAND_DSN                */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_COMMAND_DSN (:subcmd, :errmsg);

  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                 /* Result sets were returned */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_COMMAND_DSN;

    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :rownum, :text;
    while(SQLCODE == 0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :text;
    }
  }
  return(0);
}
Output
This stored procedure returns an error message, message, if an error occurs. The stored procedure returns one result set that contains the DSN subcommand output messages. The following table shows the format of the result set returned in the created global temporary table SYSIBM.DSN_SUBCMD_OUTPUT:
Table 348. Result set row for ADMIN_COMMAND_DSN result set
Column name   Data type      Contents
ROWNUM        INTEGER        Sequence number of the table row, from 1 to n
TEXT          VARCHAR(255)   DSN subcommand output message line
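Because a blank message alone does not indicate success, a caller typically inspects the fetched TEXT rows for the final subcommand messages. The following plain C sketch shows one way to screen such lines; the sample rows and the trailing-letter heuristic are illustrative assumptions, not behavior documented for the stored procedure:

#include <stdio.h>
#include <string.h>

/* Hypothetical sample of fetched TEXT rows; real rows come from     */
/* the SYSIBM.DSN_SUBCMD_OUTPUT result set                           */
static const char *rows[] = {
    "REBIND PLAN (DSNACCOB) FLAG(W)",          /* illustrative only  */
    "DSNT???I SAMPLE INFORMATIONAL MESSAGE",   /* illustrative only  */
};

/* Heuristic: DB2-style message identifiers end in a letter such as  */
/* 'I' (informational) or 'E' (error); flag a first token ending in  */
/* 'E' as a likely error. This is an assumption for illustration.    */
static int looks_like_error(const char *line)
{
    char id[16];
    size_t n;
    if (sscanf(line, "%15s", id) != 1)
        return 0;
    n = strlen(id);
    return n > 0 && id[n - 1] == 'E';
}

int main(void)
{
    size_t i;
    int errors = 0;
    for (i = 0; i < sizeof rows / sizeof rows[0]; i++)
        if (looks_like_error(rows[i]))
            errors++;
    printf("%d possible error message(s)\n", errors);
    return errors ? 1 : 0;
}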
Environment
ADMIN_COMMAND_UNIX runs in a WLM-established stored procedures address space. The load module for ADMIN_COMMAND_UNIX, DSNADMCU, must be program controlled if the BPX.DAEMON.HFSCTL FACILITY class profile has not been set up. For information on how to define DSNADMCU to program control, see installation job DSNTIJRA.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMCU
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The user specified in the user-ID input parameter of the SQL CALL statement must have the appropriate authority to execute the z/OS UNIX System Services command.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_COMMAND_UNIX ( user-ID, password, USS-command, output-layout, return-code, message )

For output-layout, OUTMODE=BLK (the default), OUTMODE=LINE, or NULL can be specified.
Option descriptions
user-ID Specifies the user ID under which the z/OS UNIX System Services command is issued. This is an input parameter of type VARCHAR(128) and cannot be null. password Specifies the password associated with the input parameter user-ID.
The value of password is passed to the stored procedure as part of the payload, and is not encrypted. It is not stored in the dynamic cache when parameter markers are used.
This is an input parameter of type VARCHAR(24) and cannot be null.

USS-command
   Specifies the z/OS UNIX System Services command to be executed.
   This is an input parameter of type VARCHAR(32704) and cannot be null.

output-layout
   Specifies how the output from the z/OS UNIX System Services command is returned. The output from the z/OS UNIX System Services command is a multi-line message. Possible values are:
   OUTMODE=LINE
      Each line is returned as a row in the result set.
   OUTMODE=BLK
      The lines are blocked into 32677-byte blocks and each block is returned as a row in the result set.
   If a null or empty string is provided, then the default option OUTMODE=BLK is used.
   This is an input parameter of type VARCHAR(1024).

return-code
   Provides the return code from the stored procedure. Possible values are:
   0    The call completed successfully.
   12   The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.

message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages.
   This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_COMMAND_UNIX:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_COMMAND_UNIX parameters                           */
  char      userid[129];        /* User ID                           */
  short int ind_userid;         /* Indicator variable                */
  char      password[25];       /* Password                          */
  short int ind_password;       /* Indicator variable                */
  char      command[32705];     /* USS command                       */
  short int ind_command;        /* Indicator variable                */
  char      layout[1025];       /* Command output layout             */
  short int ind_layout;         /* Indicator variable                */
  long int  retcd;              /* Return code                       */
  short int ind_retcd;          /* Indicator variable                */
  char      errmsg[1332];       /* Error message                     */
  short int ind_errmsg;         /* Indicator variable                */

  /* Result set locators                                             */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                  */
  long int  rownum;             /* Sequence number of the table row  */
  char      text[32678];        /* A block of command output         */
  EXEC SQL END DECLARE SECTION;
  /******************************************************************/
  /* Assign values to input parameters to execute a USS command     */
  /* Set the indicator variables to 0 for non-null input parameters */
  /* Set the indicator variables to -1 for null input parameters    */
  /******************************************************************/
  strcpy(userid, "USRT001");
  ind_userid = 0;
  strcpy(password, "N1CETEST");
  ind_password = 0;
  strcpy(command, "ls");
  ind_command = 0;
  ind_layout = -1;

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_COMMAND_UNIX               */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_COMMAND_UNIX
                  (:userid   :ind_userid,
                   :password :ind_password,
                   :command  :ind_command,
                   :layout   :ind_layout,
                   :retcd    :ind_retcd,
                   :errmsg   :ind_errmsg);

  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                 /* Result sets were returned */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_COMMAND_UNIX;

    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :rownum, :text;
    while(SQLCODE == 0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :text;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1324:
v return-code
v message
In addition to the preceding output, the stored procedure returns one result set that contains the z/OS UNIX System Services command output messages. The following table shows the format of the result set returned in the created global temporary table SYSIBM.USS_CMD_OUTPUT:
Table 349. Result set row for ADMIN_COMMAND_UNIX result set
Column name   Data type        Contents
ROWNUM        INTEGER          Sequence number of the table row, from 1 to n
TEXT          VARCHAR(32677)   A block of text or a line from the output messages of a z/OS UNIX System Services command
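With OUTMODE=BLK, a single fetched TEXT value can carry many command output lines in one block. The following plain C sketch splits such a block back into lines; the block contents are illustrative, and the newline separator is an assumption made for this sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Illustrative block standing in for one fetched TEXT value     */
    char block[] = "file1\nfile2\nfile3";
    char *line = strtok(block, "\n");

    while (line != NULL)
    {
        printf("%s\n", line);          /* one command output line    */
        line = strtok(NULL, "\n");
    }
    return 0;
}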
Environment
The load module for ADMIN_DS_BROWSE, DSNADMDB, must reside in an APF-authorized library. ADMIN_DS_BROWSE runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMDB
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The ADMIN_DS_BROWSE caller also needs authorization from an external security system, such as RACF, in order to browse or view a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_DS_BROWSE ( data-type, data-set-name, member-name, dump-option, return-code, message )
Option descriptions
data-type
   Specifies the type of data to be browsed. Possible values are:
   1   Text data
   2   Binary data
   This is an input parameter of type INTEGER and cannot be null.

data-set-name
   Specifies the name of the data set, or of the library that contains the member, to be browsed. Possible values are:
   PS data set name
      If reading from a PS data set, data-set-name contains the name of the PS data set.
   PDS or PDSE name
      If reading from a member that belongs to this PDS or PDSE, data-set-name contains the name of the PDS or PDSE.
   GDS name
      If reading from a generation data set, data-set-name contains the name of the generation data set, such as USERGDG.FILE.G0001V00.
   This is an input parameter of type CHAR(44) and cannot be null.

member-name
   Specifies the name of the PDS or PDSE member, if reading from a PDS or PDSE member. Otherwise, a blank character.
   This is an input parameter of type CHAR(8) and cannot be null.

dump-option
   Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when an SQL error occurs or when a call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call fails. Possible values are:
   Y   Generate a dump.
   N   Do not generate a dump.
   This is an input parameter of type CHAR(1) and cannot be null.

return-code
   Provides the return code from the stored procedure. Possible values are:
   0    The call completed successfully.
   12   The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.

message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 or by z/OS might follow the first messages.
   This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_DS_BROWSE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_BROWSE parameters                              */
  long int datatype;            /* Data type                         */
  char     dsname[45];          /* Data set name                     */
  char     mbrname[9];          /* Library member name               */
  char     dumpopt[2];          /* Dump option                       */
  long int retcd;               /* Return code                       */
  char     errmsg[1332];        /* Error message                     */

  /* Result set locators                                             */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                  */
  long int rownum;              /* Sequence number of the table row  */
  char     text_rec[81];        /* A data set record                 */
  EXEC SQL END DECLARE SECTION;
  /******************************************************************/
  /* Assign values to input parameters to browse a library member   */
  /******************************************************************/
  datatype = 1;
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(mbrname, "MEMBER0A");
  strcpy(dumpopt, "N");

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_BROWSE                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_BROWSE (:datatype, :dsname, :mbrname,
                                         :dumpopt, :retcd, :errmsg);

  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                 /* Result sets were returned */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_DS_BROWSE;

    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :rownum, :text_rec;
    while(SQLCODE == 0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :text_rec;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1328:
v return-code
v message
In addition to the preceding output, the stored procedure returns one result set that contains the text or binary records read. The following table shows the format of the result set returned in the created global temporary table SYSIBM.TEXT_REC_OUTPUT containing text records read:
Table 350. Result set row for ADMIN_DS_BROWSE result set (text records)
Column name   Data type     Contents
ROWNUM        INTEGER       Sequence number of the table row, from 1 to n
TEXT_REC      VARCHAR(80)   Record read (text format)
The following table shows the format of the result set returned in the created global temporary table SYSIBM.BIN_REC_OUTPUT containing binary records read:
Table 351. Result set row for ADMIN_DS_BROWSE result set (binary records)
Column name   Data type                  Contents
ROWNUM        INTEGER                    Sequence number of the table row, from 1 to n
BINARY_REC    VARCHAR(80) FOR BIT DATA   Record read (binary format)
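Because BINARY_REC rows are FOR BIT DATA, a caller typically renders them as hexadecimal rather than printing them directly. A minimal plain C sketch of that rendering step; the sample buffer stands in for a fetched row:

#include <stdio.h>

int main(void)
{
    /* Sample buffer standing in for one fetched BINARY_REC value    */
    unsigned char rec[] = { 0xC1, 0xC2, 0xC3, 0x00, 0x7F };
    int reclen = (int)(sizeof rec);

    /* Print each byte as two hexadecimal digits                     */
    for (int i = 0; i < reclen; i++)
        printf("%02X", rec[i]);
    printf("\n");
    return 0;
}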
Environment
The load module for ADMIN_DS_DELETE, DSNADMDD, must reside in an APF-authorized library. ADMIN_DS_DELETE runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_DS_DELETE stored procedure
v Ownership of the stored procedure
v SYSADM authority
The ADMIN_DS_DELETE caller also needs authorization from an external security system, such as RACF, in order to delete a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_DS_DELETE ( data-set-type, data-set-name, parent-data-set-name, dump-option, return-code, message )
Option descriptions
data-set-type
   Specifies the type of data set to delete. Possible values are:
   1   Partitioned data set (PDS)
   2   Partitioned data set extended (PDSE)
   3   Member of a PDS or PDSE
   4   Physical sequential data set (PS)
   6   Generation data set (GDS)
   This is an input parameter of type INTEGER and cannot be null.

data-set-name
   Specifies the name of the data set, library member, or GDS absolute generation number to be deleted. Possible values are:
   PS, PDS, or PDSE name
      If data-set-type is 1, 2, or 4, data-set-name contains the name of the PS, PDS, or PDSE to be deleted.
   PDS or PDSE member name
      If data-set-type is 3, data-set-name contains the name of the PDS or PDSE member to be deleted.
   absolute generation number
      If data-set-type is 6, data-set-name contains the absolute generation number of the GDS to be deleted, such as G0001V00.
   This is an input parameter of type CHAR(44) and cannot be null.

parent-data-set-name
   Specifies the name of the library that contains the member to be deleted, or of the GDG that contains the GDS to be deleted. Otherwise, blank. Possible values are:
   blank
      If data-set-type is 1, 2, or 4, parent-data-set-name is left blank.
   PDS or PDSE name
      If data-set-type is 3, parent-data-set-name contains the name of the PDS or PDSE whose member is to be deleted.
   GDG name
      If data-set-type is 6, parent-data-set-name contains the name of the GDG that the GDS to be deleted belongs to.
   This is an input parameter of type CHAR(44) and cannot be null.

dump-option
   Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when a call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call fails. Possible values are:
   Y   Generate a dump.
   N   Do not generate a dump.
   This is an input parameter of type CHAR(1) and cannot be null.

return-code
   Provides the return code from the stored procedure. Possible values are:
   0    The data set, PDS member, PDSE member, or GDS was deleted successfully.
   12   The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.

message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by z/OS might follow the first messages.
   This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_DS_DELETE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/******************** DB2 SQL Communication Area ********************/ EXEC SQL INCLUDE SQLCA;
int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_DELETE parameters                              */
  long int dstype;              /* Data set type                     */
  char     dsname[45];          /* Data set name, member name,       */
                                /* or generation # (G0001V00)        */
  char     parentds[45];        /* PDS, PDSE, GDG or blank           */
  char     dumpopt[2];          /* Dump option                       */
  long int retcd;               /* Return code                       */
  char     errmsg[1332];        /* Error message                     */
  EXEC SQL END DECLARE SECTION;
  /******************************************************************/
  /* Assign values to input parameters to delete a data set         */
  /******************************************************************/
  dstype = 4;
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(parentds, " ");
  strcpy(dumpopt, "N");

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_DELETE                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_DELETE (:dstype, :dsname, :parentds,
                                         :dumpopt, :retcd, :errmsg);
  return(retcd);
}
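Deleting a single member rather than a whole data set only changes the input values. A minimal variant in the same style as the preceding sample; the data set and member names are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )
{
  EXEC SQL BEGIN DECLARE SECTION;
  long int dstype;              /* Data set type                     */
  char     dsname[45];          /* Member name                       */
  char     parentds[45];        /* Library that contains the member  */
  char     dumpopt[2];          /* Dump option                       */
  long int retcd;               /* Return code                       */
  char     errmsg[1332];        /* Error message                     */
  EXEC SQL END DECLARE SECTION;

  dstype = 3;                            /* Member of a PDS or PDSE  */
  strcpy(dsname, "MEMBER0A");
  strcpy(parentds, "USER.DATASET.PDS");
  strcpy(dumpopt, "N");

  EXEC SQL CALL SYSPROC.ADMIN_DS_DELETE (:dstype, :dsname, :parentds,
                                         :dumpopt, :retcd, :errmsg);
  return(retcd);
}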
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1331:
v return-code
v message
Environment
The load module for ADMIN_DS_LIST, DSNADMDL, must reside in an APF-authorized library. ADMIN_DS_LIST runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses: v The EXECUTE privilege on the package for DSNADMDL
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The ADMIN_DS_LIST caller also needs authorization from an external security system, such as RACF, in order to perform the requested operation on a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_DS_LIST ( data-set-name, list-members, list-generations, max-results, dump-option, return-code, message )
Option descriptions
data-set-name
   Specifies the data set name. You can use masking characters, for example: USER.*. If no masking characters are used, only one data set will be listed.
   This is an input parameter of type CHAR(44) and cannot be null.

list-members
   Specifies whether to list PDS or PDSE members. Possible values are:
   Y   List members. Only set to Y when data-set-name is a fully qualified PDS or PDSE.
   N   Do not list members.
   This is an input parameter of type CHAR(1) and cannot be null.

list-generations
   Specifies whether to list generation data sets. Possible values are:
   Y   List generation data sets. Only set to Y when data-set-name is a fully qualified GDG.
   N   Do not list generation data sets.
   This is an input parameter of type CHAR(1) and cannot be null.

max-results
   Specifies the maximum number of result set rows. This option is applicable only when both list-members and list-generations are N.
   This is an input parameter of type INTEGER and cannot be null.

dump-option
   Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when any of the following errors occur:
   v SQL error.
   v A call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call failed.
   v Load Catalog Search Interface module error.
   Possible values are:
   Y   Generate a dump.
   N   Do not generate a dump.
   This is an input parameter of type CHAR(1) and cannot be null.

return-code
   Provides the return code from the stored procedure. Possible values are:
   0    The call completed successfully.
   12   The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.

message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 or by z/OS might follow the first messages.
   This is an output parameter of type VARCHAR(1331).
This is an input parameter of type CHAR(1) and cannot be null. return-code Provides the return code from the stored procedure. Possible values are: 0 12 The call completed successfully. The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER. message Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 or by z/OS might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_DS_LIST:
#pragma csect(CODE,"SAMDLPGM")
#pragma csect(STATIC,"PGMDLSAM")
#pragma runopts(plist(os))

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list    */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_LIST parameters                                */
  char     dsname[45];          /* Data set name or filter           */
  char     listmbr[2];          /* List library members              */
  char     listgds[2];          /* List GDS                          */
  long int maxresult;           /* Maximum result set rows           */
  char     dumpopt[2];          /* Dump option                       */
  long int retcd;               /* Return code                       */
  char     errmsg[1332];        /* Error message                     */

  /* Result set locators                                             */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                  */
  char     dsnamer[45];         /* Data set name                     */
  long int createyr;            /* Create year                       */
  long int createday;           /* Create day                        */
  long int type;                /* Data set type                     */
  char     volume[7];           /* Data set volume                   */
  long int primaryext;          /* Size of first extent              */
  long int secondext;           /* Size of secondary extent          */
  char     measure[10];         /* Extent unit of measurement        */
  long int extinuse;            /* Current allocated extents         */
  char     dasduse[9];          /* DASD usage                        */
  char     harba[7];            /* High allocated RBA                */
  char     hurba[7];            /* High used RBA                     */
  EXEC SQL END DECLARE SECTION;

  char * ptr;
  int i = 0;

  /******************************************************************/
  /* Assign values to input parameters to list all members of       */
  /* a library                                                      */
  /******************************************************************/
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(listmbr, "Y");
  strcpy(listgds, "N");
  maxresult = 1;
  strcpy(dumpopt, "N");

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_LIST                    */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_LIST (:dsname, :listmbr, :listgds,
                                       :maxresult, :dumpopt,
                                       :retcd, :errmsg);

  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                 /* Result sets were returned */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_DS_LIST;

    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :dsnamer, :createyr, :createday, :type,
                           :volume, :primaryext, :secondext,
                           :measure, :extinuse, :dasduse,
                           :harba, :hurba;
    while(SQLCODE == 0)
    {
      EXEC SQL FETCH C1 INTO :dsnamer, :createyr, :createday, :type,
                             :volume, :primaryext, :secondext,
                             :measure, :extinuse, :dasduse,
                             :harba, :hurba;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1334:
v return-code
v message
In addition to the preceding output, the stored procedure returns one result set that contains the list of data sets, GDGs, PDS or PDSE members, or generation data sets that were requested. The following table shows the format of the result set returned in the created global temporary table SYSIBM.DSLIST:
Table 352. Result set row for ADMIN_DS_LIST result set
Column name   Data type     Contents
DSNAME        VARCHAR(44)   v Data set name, if list-members is N and list-generations is N.
                            v Member name, if list-members is Y.
                            v Absolute generation number (of the form G0000V00) from a generation data set name, if list-generations is Y.
CREATE_YEAR   INTEGER       The year that the data set was created. Not applicable for member and VSAM cluster.
CREATE_DAY    INTEGER       The day of the year that the data set was created, as an integer in the range 1 to 366 (where 1 represents January 1). Not applicable for member and VSAM cluster.
Table 352. Result set row for ADMIN_DS_LIST result set (continued)
Column name        Data type   Contents
TYPE               INTEGER     Type of data set. Possible values are:
                               0   Unknown type of data set
                               1   PDS data set
                               2   PDSE data set
                               3   Member of PDS or PDSE
                               4   Physical sequential data set
                               5   Generation data group
                               6   Generation data set
                               8   VSAM cluster
                               9   VSAM data component
                               10  VSAM index component
VOLUME             CHAR(6)     Volume where data set resides. Not applicable for member and VSAM cluster.
PRIMARY_EXTENT     INTEGER     Size of first extent. Not applicable for member and VSAM cluster.
SECONDARY_EXTENT   INTEGER     Size of secondary extent. Not applicable for member and VSAM cluster.
MEASUREMENT_UNIT   CHAR(9)     Unit of measurement for first extent and secondary extent. Possible values are BLOCKS, BYTES, CYLINDERS, KB, MB, and TRACKS. Not applicable for member and VSAM cluster.
EXTENTS_IN_USE     INTEGER     Current allocated extents. Not applicable for member and VSAM cluster.
DASD_USAGE                     Disk usage. For VSAM data and VSAM index only.
HARBA                          High allocated RBA. For VSAM data and VSAM index only.
HURBA                          High used RBA. For VSAM data and VSAM index only.
When a data set spans more than one volume, one row is returned for each volume that contains a piece of the data set. The VOLUME, EXTENTS_IN_USE, DASD_USAGE, HARBA, and HURBA columns reflect information for the specified volume.
Environment
The load module for ADMIN_DS_RENAME, DSNADMDR, must reside in an APF-authorized library. ADMIN_DS_RENAME runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_DS_RENAME stored procedure
v Ownership of the stored procedure
v SYSADM authority
The ADMIN_DS_RENAME caller also needs authorization from an external security system, such as RACF, in order to rename a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_DS_RENAME ( data-set-type, data-set-name, parent-data-set-name, new-data-set-name, dump-option, return-code, message )
Option descriptions
data-set-type
   Specifies the type of data set to rename. Possible values are:
   1   Partitioned data set (PDS)
   2   Partitioned data set extended (PDSE)
   3   Member of a PDS or PDSE
   4   Physical sequential data set (PS)
   This is an input parameter of type INTEGER and cannot be null.
data-set-name Specifies the data set or member to be renamed. Possible values are: PS, PDS, or PDSE name If data-set-type is 1, 2, or 4, the data-set-name contains the name of the PS, PDS, or PDSE to be renamed. PDS or PDSE member name If data-set-type is 3, the data-set-name contains the name of the PDS or PDSE member to be renamed. This is an input parameter of type CHAR(44) and cannot be null. parent-data-set-name Specifies the name of the PDS or PDSE, if renaming a PDS or PDSE member. Otherwise, a blank character. Possible values are: blank If data-set-type is 1, 2, or 4, the parent-data-set-name is left blank. PDS or PDSE name If data-set-type is 3, the parent-data-set-name contains the name of the PDS or PDSE whose member is to be renamed. This is an input parameter of type CHAR(44) and cannot be null. new-data-set-name Specifies the new data set or member name. Possible values are: new data set name If data-set-type is 1, 2, or 4, the new-data-set-name contains the new data set name. new member name If data-set-type is 3, the new-data-set-name contains the new member name. This is an input parameter of type CHAR(44) and cannot be null. dump-option Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when any of the following errors occurred: v A call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call failed. v Load IDCAMS program error. Possible values are: Y N Generate a dump. Do not generate a dump.
This is an input parameter of type CHAR(1) and cannot be null.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The data set, PDS member, or PDSE member was renamed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
Example
The following C language sample shows how to invoke ADMIN_DS_RENAME:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_RENAME parameters                             */
  long int dstype;                      /* Data set type            */
  char dsname[45];                      /* Data set or member name  */
  char parentds[45];                    /* Parent data set (PDS or  */
                                        /* PDSE) name or blank      */
  char newdsname[45];                   /* New data set or member   */
                                        /* name                     */
  char dumpopt[2];                      /* Dump option              */
  long int retcd;                       /* Return code              */
  char errmsg[1332];                    /* Error message            */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to rename a library member   */
  /******************************************************************/
  dstype = 3;
  strcpy(dsname, "MEMBER01");
  strcpy(parentds, "USER.DATASET.PDS");
  strcpy(newdsname, "MEMBER0A");
  strcpy(dumpopt, "N");
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_RENAME                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_RENAME
                       (:dstype,
                        :dsname,
                        :parentds,
                        :newdsname,
                        :dumpopt,
                        :retcd,
                        :errmsg);
  return(retcd);
}
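A variation on the same program, using the declarations above: to rename a physical sequential data set instead of a library member, set data-set-type to 4 and leave the parent name blank. This is a minimal sketch; the data set names are hypothetical, not from the manual.

  /* Hedged variation: rename a physical sequential data set        */
  /* (data-set-type 4). USER.OLD.PSDS and USER.NEW.PSDS are         */
  /* illustrative names only.                                       */
  dstype = 4;                           /* PS data set              */
  strcpy(dsname, "USER.OLD.PSDS");      /* Data set to be renamed   */
  strcpy(parentds, " ");                /* Blank: no parent library */
  strcpy(newdsname, "USER.NEW.PSDS");   /* New data set name        */
  strcpy(dumpopt, "N");
  /* The CALL statement is identical to the one shown above.        */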
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message
ADMIN_DS_SEARCH

Environment
The load module for ADMIN_DS_SEARCH, DSNADMDE, must reside in an APF-authorized library. ADMIN_DS_SEARCH runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_DS_SEARCH stored procedure
v Ownership of the stored procedure
v SYSADM authority

The ADMIN_DS_SEARCH caller also needs authorization from an external security system, such as RACF, in order to perform the requested operation on a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_DS_SEARCH ( data-set-name, member-name, dump-option, data-set-exists, return-code, message )
Option descriptions
data-set-name
   Specifies the name of a PS data set, PDS, PDSE, GDG, or GDS.
This is an input parameter of type CHAR(44) and cannot be null.
member-name
   Specifies the name of a PDS or PDSE member. Set this parameter to a blank character if you only want to check the existence of the PDS or PDSE. This is an input parameter of type CHAR(8) and cannot be null.
dump-option
   Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when any of the following errors occurs:
   v A call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call failed.
   v An error occurred when loading the IDCAMS program.
   Possible values are:
   Y  Generate a dump.
   N  Do not generate a dump.
This is an input parameter of type CHAR(1) and cannot be null.
data-set-exists
   Indicates whether a data set or library member exists or not. Possible values are:
   -1  Call did not complete successfully. Unable to determine if the data set or member exists.
   0   Data set or member was found.
   1   Data set not found.
   2   PDS or PDSE member not found.
This is an output parameter of type INTEGER.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains IDCAMS messages if return-code is 0. Otherwise, contains messages describing the error encountered by the stored procedure. The first messages are generated by the stored procedure, and messages that are generated by z/OS might follow these first messages. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_DS_SEARCH:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_SEARCH parameters                             */
  char dsname[45];                      /* Data set name or GDG     */
  char mbrname[9];                      /* Library member name      */
  char dumpopt[2];                      /* Dump option              */
  long int exist;                       /* Data set or library      */
                                        /* member existence         */
                                        /* indicator                */
  long int retcd;                       /* Return code              */
  char errmsg[1332];                    /* Error message            */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to determine whether a       */
  /* library member exists or not                                   */
  /******************************************************************/
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(mbrname, "MEMBER0A");
  strcpy(dumpopt, "N");
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_SEARCH                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_SEARCH
                       (:dsname,
                        :mbrname,
                        :dumpopt,
                        :exist,
                        :retcd,
                        :errmsg);
  return(retcd);
}
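The same program can check for the data set itself rather than for a member: per the option descriptions, member-name is then set to a blank character. A minimal sketch under the declarations above; the data set name is illustrative.

  /* Hedged variation: check only whether the data set itself       */
  /* exists by passing a blank member name.                         */
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(mbrname, " ");                 /* Blank: no member check   */
  strcpy(dumpopt, "N");
  EXEC SQL CALL SYSPROC.ADMIN_DS_SEARCH
                       (:dsname, :mbrname, :dumpopt,
                        :exist, :retcd, :errmsg);
  /* exist: 0 = data set found, 1 = data set not found              */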
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v data-set-exists
v return-code
v message
ADMIN_DS_WRITE

Environment
The load module for ADMIN_DS_WRITE, DSNADMDW, must reside in an APF-authorized library. ADMIN_DS_WRITE runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMDW
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The ADMIN_DS_WRITE caller also needs authorization from an external security system, such as RACF, in order to write to a z/OS data set resource.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_DS_WRITE ( data-type, data-set-name, member-name, processing-option, dump-option, return-code, message )
Option descriptions
This stored procedure takes the following input options:
data-type
   Specifies the type of data to be saved. Possible values are:
   1  Text data
   2  Binary data
This is an input parameter of type INTEGER and cannot be null.
data-set-name
   Specifies the name of the data set, GDG that contains the GDS, or library that contains the member, to be written to. Possible values are:
   PS data set name
      Name of the PS data set, if writing to a PS data set.
   GDG name
      Name of the GDG, if writing to a GDS within this GDG.
   PDS or PDSE name
      Name of the PDS or PDSE, if writing to a member that belongs to this library.
   This is an input parameter of type CHAR(44) and cannot be null.
member-name
   Specifies the relative generation number of the GDS, if writing to a GDS, or the name of the PDS or PDSE member, if writing to a PDS or PDSE member. Otherwise, a blank character. Possible values are:
   GDS relative generation number
      Relative generation number of a GDS, if writing to a GDS. For example: -1, 0, +1.
   PDS or PDSE member name
      Name of the PDS or PDSE member, if writing to a library member.
   blank
      In all other cases, blank.
   This is an input parameter of type CHAR(8) and cannot be null.
processing-option
   Specifies the type of operation. Possible values are:
   R   Replace
   A   Append
   NM  New member
   ND  New PS, PDS, PDSE, or GDS data set
This is an input parameter of type CHAR(2) and cannot be null.
dump-option
   Specifies whether to use the DB2 standard dump facility to dump the information necessary for problem diagnosis when an SQL error has occurred or when a call to the IBM routine IEFDB476 to get messages about an unsuccessful SVC 99 call failed. Possible values are:
   Y  Generate a dump.
   N  Do not generate a dump.
This is an input parameter of type CHAR(1) and cannot be null.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 or by z/OS might follow the first messages. This is an output parameter of type VARCHAR(1331).
Additional input
In addition to the input parameters, the stored procedure reads records to be written to a file from a created global temporary table. If the data to be written is text data, then the stored procedure reads records from SYSIBM.TEXT_REC_INPUT. If the data is binary data, then the stored procedure reads records from the created global temporary table SYSIBM.BIN_REC_INPUT.
The following table shows the format of the created global temporary table SYSIBM.TEXT_REC_INPUT containing text records to be saved:

Table 353. Additional input for text data for the ADMIN_DS_WRITE stored procedure

Column name   Data type   Contents
ROWNUM        INTEGER     Sequence number of the table row, from 1 to n.
TEXT_REC      CHAR(80)    Text record to be saved.
The following table shows the format of the created global temporary table SYSIBM.BIN_REC_INPUT containing binary records to be saved:

Table 354. Additional input for binary data for the ADMIN_DS_WRITE stored procedure

Column name   Data type                  Contents
ROWNUM        INTEGER                    Sequence number of the table row, from 1 to n.
BINARY_REC    VARCHAR(80) FOR BIT DATA   Binary record to be saved.
Example
The following C language sample shows how to invoke ADMIN_DS_WRITE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_DS_WRITE parameters                              */
  long int datatype;                    /* Data type                */
  char dsname[45];                      /* Data set name or GDG     */
  char mbrname[9];                      /* Library member name,     */
                                        /* generation # (-1, 0, +1),*/
                                        /* or blank                 */
  char procopt[3];                      /* Processing option        */
  char dumpopt[2];                      /* Dump option              */
  long int retcd;                       /* Return code              */
  char errmsg[1332];                    /* Error message            */
  /* Temporary table SYSIBM.TEXT_REC_INPUT columns                  */
  long int rownum;                      /* Sequence number of the   */
                                        /* table row                */
  char textrec[81];                     /* Text record              */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Create the records to be saved                                 */
  /******************************************************************/
  char dsrecord[12][50] =
    { "//IEBCOPY JOB ,CLASS=K,MSGCLASS=H,MSGLEVEL=(1,1)",
      "//STEP010 EXEC PGM=IEBCOPY",
      "//SYSPRINT DD SYSOUT=*",
      "//SYSUT3 DD SPACE=(TRK,(1,1)),UNIT=SYSDA",
      "//SYSUT4 DD SPACE=(TRK,(1,1)),UNIT=SYSDA",
      "//*",
      "//DDI1 DD DSN=USER.DEV.LOADLIB1,DISP=SHR",
      "//DDO1 DD DSN=USER.DEV.LOADLIB2,DISP=SHR",
      "//SYSIN DD *",
      " COPY OUTDD=DDO1,INDD=DDI1",
      "/*",
      "//*" };

  int i = 0;                            /* Loop counter             */

  /******************************************************************/
  /* Assign the values to input parameters to create a new          */
  /* partitioned data set and member                                */
  /******************************************************************/
  datatype = 1;
  strcpy(dsname, "USER.DATASET.PDS");
  strcpy(mbrname, "MEMBER01");
  strcpy(procopt, "ND");
  strcpy(dumpopt, "N");
  /******************************************************************/
  /* Clear temporary table SYSIBM.TEXT_REC_INPUT                    */
  /******************************************************************/
  EXEC SQL DELETE FROM SYSIBM.TEXT_REC_INPUT;
  /******************************************************************/
  /* Insert the records to be saved in the new library member       */
  /* into the temporary table SYSIBM.TEXT_REC_INPUT                 */
  /******************************************************************/
  for (i = 0; i < 12; i++)
  {
    rownum = i+1;
    strcpy(textrec, dsrecord[i]);
    EXEC SQL INSERT INTO SYSIBM.TEXT_REC_INPUT
                    ( ROWNUM, TEXT_REC)
             VALUES (:rownum, :textrec);
  };
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_DS_WRITE                   */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_DS_WRITE
                       (:datatype,
                        :dsname,
                        :mbrname,
                        :procopt,
                        :dumpopt,
                        :retcd,
                        :errmsg );
  return(retcd);
}
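Because member-name doubles as a relative generation number, the same program can also write a new generation data set. The following hedged sketch shows only the changed input values; it assumes the GDG base USER.DATASET.GDG (a hypothetical name) is already defined, and that the records are inserted into SYSIBM.TEXT_REC_INPUT exactly as above.

  /* Hedged variation: write the text records to a new generation   */
  /* data set of an existing (hypothetical) GDG base.                */
  datatype = 1;                         /* Text data                */
  strcpy(dsname, "USER.DATASET.GDG");   /* GDG base name            */
  strcpy(mbrname, "+1");                /* Relative generation no.  */
  strcpy(procopt, "ND");                /* ND = new GDS data set    */
  strcpy(dumpopt, "N");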
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message
ADMIN_INFO_HOST

Environment
ADMIN_INFO_HOST runs in a WLM-established stored procedures address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMIH
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The ADMIN_INFO_HOST stored procedure internally calls the ADMIN_COMMAND_DB2 stored procedure to execute the following DB2 commands:
v -DISPLAY DDF
v -DISPLAY GROUP

The owner of the package or plan that contains the CALL ADMIN_INFO_HOST statement must also have the authorization required to execute the stored procedure ADMIN_COMMAND_DB2 and the specified DB2 commands. To determine the privilege or authority required to issue a DB2 command, see DB2 Command Reference.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_INFO_HOST ( processing-option, DB2-member|NULL, return-code, message )
Option descriptions
processing-option
   Specifies the processing option. Possible values are:
   1  Return the host name of the connected DB2 subsystem or the host name of a specified DB2 data sharing group member. For a data sharing group member, you must specify DB2-member.
   2  Return the host name of every DB2 member of the same data sharing group.
This is an input parameter of type INTEGER and cannot be null.
DB2-member
   Specifies the DB2 data sharing group member name.
   This parameter must be null if processing-option is 2. This is an input parameter of type CHAR(8).
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   4   Unable to list the host name of the connected DB2 subsystem or of every DB2 member of the same data sharing group due to one of the following reasons:
       v The IPADDR field returned when the -DISPLAY DDF command is executed on the connected DB2 subsystem or DB2 member contains the value -NONE.
       v One of the DB2 members is down.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_INFO_HOST:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_INFO_HOST parameters                             */
  long int procopt;                     /* Processing option        */
  short int ind_procopt;                /* Indicator variable       */
  char db2mbr[9];                       /* Data sharing group       */
                                        /* member name              */
  short int ind_db2mbr;                 /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */

  /* Result set locators                                            */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                 */
  long int rownum;                      /* Sequence number of the   */
                                        /* table row                */
  char db2member[9];                    /* DB2 data sharing group   */
                                        /* member name              */
  char hostname[256];                   /* Host name                */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to find the host name of     */
  /* the connected DB2 subsystem                                    */
  /* Set the indicator variables to 0 for non-null input parameters */
  /* Set the indicator variables to -1 for null input parameters    */
  /******************************************************************/
  procopt = 1;
  ind_procopt = 0;
  ind_db2mbr = -1;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_INFO_HOST                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_INFO_HOST
                       (:procopt :ind_procopt,
                        :db2mbr  :ind_db2mbr,
                        :retcd   :ind_retcd,
                        :errmsg  :ind_errmsg);
  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                  /* Result sets were returned*/
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_INFO_HOST;
    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;
    /* Use C1 to fetch the only row from the result set             */
    EXEC SQL FETCH C1 INTO :rownum, :db2member, :hostname;
  }
  return(retcd);
}
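With processing-option 2 the result set can hold one row per member of the data sharing group, so the single FETCH above becomes a loop. A minimal sketch under the same declarations; a second cursor name is used to avoid clashing with C1.

  /* Hedged variation: list the host name of every DB2 member       */
  /* (processing-option 2, DB2-member passed as NULL).              */
  procopt = 2;
  ind_procopt = 0;
  ind_db2mbr = -1;                      /* NULL DB2-member          */
  EXEC SQL CALL SYSPROC.ADMIN_INFO_HOST
                       (:procopt :ind_procopt,
                        :db2mbr  :ind_db2mbr,
                        :retcd   :ind_retcd,
                        :errmsg  :ind_errmsg);
  if (SQLCODE == +466)                  /* Result sets were returned*/
  {
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_INFO_HOST;
    EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :rs_loc1;
    /* One row per DB2 member of the data sharing group             */
    EXEC SQL FETCH C2 INTO :rownum, :db2member, :hostname;
    while (SQLCODE == 0)
    {
      EXEC SQL FETCH C2 INTO :rownum, :db2member, :hostname;
    }
  }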
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message

In addition to the preceding output, the stored procedure returns one result set that contains the host names. The following table shows the format of the result set returned in the created global temporary table SYSIBM.SYSTEM_HOSTNAME:
Table 355. Result set row for ADMIN_INFO_HOST result set

Column name   Data type      Contents
ROWNUM        INTEGER        Sequence number of the table row, from 1 to n.
DB2_MEMBER    CHAR(8)        DB2 data sharing group member name.
HOSTNAME      VARCHAR(255)   Host name of the connected DB2 subsystem if the
                             processing-option input parameter is 1 and the
                             DB2-member input parameter is null. Otherwise,
                             the host name of the DB2 member specified in the
                             DB2_MEMBER column.
ADMIN_INFO_SSID

Environment
ADMIN_INFO_SSID must run in a WLM-established stored procedure address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_INFO_SSID stored procedure
v Ownership of the stored procedure
v SYSADM authority
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_INFO_SSID ( subsystem-ID, return-code, message )
Option descriptions
subsystem-ID
   Identifies the subsystem ID of the connected DB2 subsystem. This is an output parameter of type VARCHAR(4).
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
   This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_INFO_SSID:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_INFO_SSID parameters                             */
  char ssid[5];                         /* DB2 subsystem identifier */
  long int retcd;                       /* Return code              */
  char errmsg[1332];                    /* Error message            */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_INFO_SSID                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_INFO_SSID
                       (:ssid,
                        :retcd,
                        :errmsg);
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v subsystem-ID
v return-code
v message
ADMIN_INFO_SYSPARM

This stored procedure functions the same as the SYSPROC.DSNWZP stored procedure, except that the ADMIN_INFO_SYSPARM stored procedure returns IRLM parameters, accepts a DB2 member as input, and returns the parameter settings in a result set.
Environment
ADMIN_INFO_SYSPARM runs in a WLM-established stored procedures address space, where NUMTCB=1 is required.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMIZ
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The user who calls this stored procedure must have MONITOR1 privilege.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_INFO_SYSPARM ( DB2-member|NULL, return-code, message )
Option descriptions
DB2-member
   Specifies the name of the DB2 data sharing group member that you want to get the system parameters, DSNHDECP values, and IRLM parameters from. Specify NULL for this parameter if you are retrieving the system parameters, DSNHDECP values, and IRLM parameters from the connected DB2 subsystem. This is an input parameter of type VARCHAR(8).
return-code
   Provides the return code from the stored procedure. The following values are possible:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages that describe the IFI error or SQL error that is encountered by the stored procedure.
message
   Contains messages that describe the IFI error or SQL error that was encountered by the stored procedure. If an error did not occur, a message is not returned. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_INFO_SYSPARM:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_INFO_SYSPARM parameters                          */
  char db2_member[9];                   /* Data sharing group member*/
  short int ind_db2_member;             /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */
  /* Result set locators                                            */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;
  /* Result set row                                                 */
  long int rownum;                      /* Sequence number of the   */
                                        /* table row (1,...,n)      */
  char macro[9];                        /* Macro that contains the  */
                                        /* system parameter, or     */
                                        /* DSNHDECP parameter, or   */
                                        /* the name of the IRLM     */
                                        /* procedure that z/OS      */
                                        /* invokes if IRLM is       */
                                        /* automatically started    */
                                        /* by DB2                   */
  char parameter[41];                   /* Name of the system       */
                                        /* parameter, DSNHDECP      */
                                        /* parameter, or IRLM       */
                                        /* parameter                */
  char install_panel[9];                /* Name of the installation */
                                        /* panel where the parameter*/
                                        /* value can be changed when*/
                                        /* installing or migrating  */
                                        /* DB2                      */
  short int ind_install_panel;          /* Indicator variable       */
  char install_field[41];               /* Name of the parameter on */
                                        /* the installation panel   */
  short int ind_install_field;          /* Indicator variable       */
  char install_location[13];            /* Location of the parameter*/
                                        /* on the installation panel*/
  short int ind_install_location;       /* Indicator variable       */
  char value[2049];                     /* Value of the parameter   */
  char additional_info[201];            /* Reserved for future use  */
  short int ind_additional_info;        /* Indicator variable       */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Set the db2_member indicator variable to -1 to get the DB2     */
  /* subsystem parameters, DSNHDECP values, and IRLM parameters of  */
  /* the connected DB2 subsystem.                                   */
  /******************************************************************/
  ind_db2_member = -1;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_INFO_SYSPARM               */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_INFO_SYSPARM
                       (:db2_member :ind_db2_member,
                        :retcd      :ind_retcd,
                        :errmsg     :ind_errmsg);
  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                  /* Result sets were returned*/
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_INFO_SYSPARM;
    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;
    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :rownum, :macro, :parameter,
             :install_panel    :ind_install_panel,
             :install_field    :ind_install_field,
             :install_location :ind_install_location,
             :value, :additional_info :ind_additional_info;
    while(SQLCODE==0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :macro, :parameter,
               :install_panel    :ind_install_panel,
               :install_field    :ind_install_field,
               :install_location :ind_install_location,
               :value, :additional_info :ind_additional_info;
    }
  }
  return(retcd);
}
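If only a single setting is of interest, the fetch loop can filter on the parameter name before printing the value. The following is a hypothetical refinement of the loop above; CTHREAD is used purely as an example of a system parameter name.

    /* Hedged refinement of the fetch loop: print one parameter     */
    /* only. CTHREAD is an example name, not a requirement.         */
    while (SQLCODE == 0)
    {
      if (strcmp(parameter, "CTHREAD") == 0)
        printf("%s = %s\n", parameter, value);
      EXEC SQL FETCH C1 INTO :rownum, :macro, :parameter,
               :install_panel    :ind_install_panel,
               :install_field    :ind_install_field,
               :install_location :ind_install_location,
               :value, :additional_info :ind_additional_info;
    }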
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message

In addition to the preceding output, the stored procedure returns one result set that contains the parameter settings. The following table shows the format of the result set that is returned in the created global temporary table SYSIBM.DB2_SYSPARM:
Table 356. Result set row for ADMIN_INFO_SYSPARM result set

Column name   Data type          Contents
ROWNUM        INTEGER NOT NULL   Sequence number of the table row, from 1 to n.
Table 356. Result set row for ADMIN_INFO_SYSPARM result set (continued)

MACRO (VARCHAR(8) NOT NULL)
   Macro that contains the system parameter, the DSNHDECP parameter, or the name of the IRLM procedure that z/OS invokes if IRLM is started automatically by DB2.
PARAMETER
   Name of the system parameter, DSNHDECP parameter, or IRLM parameter.
INSTALL_PANEL (VARCHAR(8))
   Name of the installation panel where the parameter value can be changed when installing or migrating DB2.
INSTALL_FIELD
   Name of the parameter on the installation panel.
INSTALL_LOCATION
   Location of the parameter on the installation panel.
VALUE
   The value of the parameter.
ADDITIONAL_INFO
   Reserved for future use.
ADMIN_JOB_CANCEL

Environment
The load module for ADMIN_JOB_CANCEL, DSNADMJP, must reside in an APF-authorized library. ADMIN_JOB_CANCEL runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized. The load module for ADMIN_JOB_CANCEL, DSNADMJP, must be program controlled if the BPX.DAEMON.HFSCTL FACILITY class profile has not been set up. For information on how to define DSNADMJP to program control, see installation job DSNTIJRA.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_JOB_CANCEL stored procedure
v Ownership of the stored procedure
v SYSADM authority

The user specified in the user-ID input parameter of the SQL CALL statement also needs authorization from an external security system, such as RACF, in order to perform the requested operation.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_JOB_CANCEL ( user-ID, password, processing-option, job-ID, return-code, message )
Option descriptions
user-ID
   Specifies the user ID under which the job is canceled or purged. This is an input parameter of type VARCHAR(128) and cannot be null.
password
   Specifies the password associated with the input parameter user-ID. The value of password is passed to the stored procedure as part of the payload, and is not encrypted. It is not stored in the dynamic cache when parameter markers are used. This is an input parameter of type VARCHAR(24) and cannot be null.
processing-option
   Identifies the type of command to invoke. Possible values are:
   1  Cancel a job.
   2  Purge a job.
This is an input parameter of type INTEGER and cannot be null.
job-ID
   Specifies the job ID of the job to be canceled or purged. Acceptable formats are:
   v Jnnnnnnn
   v JOBnnnnn
   where n is a digit between 0 and 9. For example: JOB01035. Both Jnnnnnnn and JOBnnnnn must be exactly 8 characters in length. This is an input parameter of type CHAR(8) and cannot be null.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by z/OS might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_JOB_CANCEL:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_JOB_CANCEL parameters                            */
  char userid[129];                     /* User ID                  */
  short int ind_userid;                 /* Indicator variable       */
  char password[25];                    /* Password                 */
  short int ind_password;               /* Indicator variable       */
  long int procopt;                     /* Processing option        */
  short int ind_procopt;                /* Indicator variable       */
  char jobid[9];                        /* Job ID                   */
  short int ind_jobid;                  /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to purge a job               */
  /* Set the indicator variables to 0 for non-null input parameters */
  /******************************************************************/
  strcpy(userid, "USRT001");
  ind_userid = 0;
  strcpy(password, "N1CETEST");
  ind_password = 0;
  procopt = 2;
  ind_procopt = 0;
  strcpy(jobid, "JOB00105");
  ind_jobid = 0;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_JOB_CANCEL                 */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_JOB_CANCEL
                       (:userid   :ind_userid,
                        :password :ind_password,
                        :procopt  :ind_procopt,
                        :jobid    :ind_jobid,
                        :retcd    :ind_retcd,
                        :errmsg   :ind_errmsg);
  return(retcd);
}
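Canceling instead of purging the same job only changes the processing option; the rest of the call is unchanged. A minimal sketch under the declarations above:

  /* Hedged variation: cancel (rather than purge) the job by        */
  /* setting the processing option to 1.                            */
  procopt = 1;                          /* 1 = cancel the job       */
  EXEC SQL CALL SYSPROC.ADMIN_JOB_CANCEL
                       (:userid   :ind_userid,
                        :password :ind_password,
                        :procopt  :ind_procopt,
                        :jobid    :ind_jobid,
                        :retcd    :ind_retcd,
                        :errmsg   :ind_errmsg);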
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message
ADMIN_JOB_FETCH

Environment
The load module for ADMIN_JOB_FETCH, DSNADMJF, must reside in an APF-authorized library. ADMIN_JOB_FETCH runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized. The load module for ADMIN_JOB_FETCH, DSNADMJF, must be program controlled if the BPX.DAEMON.HFSCTL FACILITY class profile has not been set up. For information on how to define DSNADMJF to program control, see installation job DSNTIJRA.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMJF
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_JOB_FETCH ( user-ID, password, job-ID, return-code, message )
Option descriptions
user-ID
   Specifies the user ID under which SYSOUT is retrieved. This is an input parameter of type VARCHAR(128) and cannot be null.
password
   Specifies the password associated with the input parameter user-ID. The value of password is passed to the stored procedure as part of the payload, and is not encrypted. It is not stored in the dynamic cache when parameter markers are used. This is an input parameter of type VARCHAR(24) and cannot be null.
job-ID
   Specifies the JES2 or JES3 job ID whose SYSOUT data sets are to be retrieved.
This is an input parameter of type CHAR(8) and cannot be null.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_JOB_FETCH:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_JOB_FETCH parameters                             */
  char userid[129];                     /* User ID                  */
  short int ind_userid;                 /* Indicator variable       */
  char password[25];                    /* Password                 */
  short int ind_password;               /* Indicator variable       */
  char jobid[9];                        /* Job ID                   */
  short int ind_jobid;                  /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */

  /* Result set locators                                            */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                 */
  long int rownum;                      /* Sequence number of the   */
                                        /* table row                */
  char text[4097];                      /* A row in SYSOUT data set */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to fetch the SYSOUT of a job */
  /* Set the indicator variables to 0 for non-null input parameters */
  /******************************************************************/
  strcpy(userid, "USRT001");
  ind_userid = 0;
  strcpy(password, "N1CETEST");
  ind_password = 0;
  strcpy(jobid, "JOB00100");
  ind_jobid = 0;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_JOB_FETCH                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_JOB_FETCH
                       (:userid   :ind_userid,
                        :password :ind_password,
                        :jobid    :ind_jobid,
                        :retcd    :ind_retcd,
                        :errmsg   :ind_errmsg);
  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)                  /* Result sets were returned*/
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_JOB_FETCH;
    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;
    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :rownum, :text;
    while(SQLCODE==0)
    {
      EXEC SQL FETCH C1 INTO :rownum, :text;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v return-code
v message

In addition to the preceding output, the stored procedure returns one result set that contains the data from the JES-managed SYSOUT data sets that belong to the job ID specified in the input parameter job-ID. The following table shows the format of the result set returned in the created global temporary table SYSIBM.JES_SYSOUT:

Table 357. Result set row for ADMIN_JOB_FETCH result set

Column name   Data type       Contents
ROWNUM        INTEGER         Sequence number of the table row
TEXT          VARCHAR(4096)   A record in the SYSOUT data set
ADMIN_JOB_QUERY

Environment
The load module for ADMIN_JOB_QUERY, DSNADMJQ, must reside in an APF-authorized library. ADMIN_JOB_QUERY runs in a WLM-established stored procedures address space, and all libraries in this WLM procedure STEPLIB DD concatenation must be APF-authorized. The load module for ADMIN_JOB_QUERY, DSNADMJQ, must be program controlled if the BPX.DAEMON.HFSCTL FACILITY class profile has not been set up. For information on how to define DSNADMJQ to program control, see installation job DSNTIJRA.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on the ADMIN_JOB_QUERY stored procedure
v Ownership of the stored procedure
v SYSADM authority
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_JOB_QUERY ( user-ID, password, job-ID, status, max-RC, completion-type, system-abend-code, user-abend-code, return-code, message )
Option descriptions
user-ID
   Specifies the user ID under which the job is queried. This is an input parameter of type VARCHAR(128) and cannot be null.
password
   Specifies the password associated with the input parameter user-ID. The value of password is passed to the stored procedure as part of the payload, and is not encrypted. It is not stored in the dynamic cache when parameter markers are used. This is an input parameter of type VARCHAR(24) and cannot be null.
job-ID
   Specifies the job ID of the job being queried. Acceptable formats are:
   v Jnnnnnnn
   v JOBnnnnn
   where n is a digit between 0 and 9. For example: JOB01035. Both Jnnnnnnn and JOBnnnnn must be exactly 8 characters in length. This is an input parameter of type CHAR(8) and cannot be null.
status
   Identifies the current status of the job. Possible values are:
   1  Job received, but not yet run (INPUT).
   2  Job running (ACTIVE).
   3  Job finished and has output to be printed or retrieved (OUTPUT).
   4  Job not found.
   5  Job in an unknown phase.
   This is an output parameter of type INTEGER.
max-RC
   Provides the job completion code. This parameter is always null if querying in a JES3 z/OS Version 1.7 or earlier system. For JES3, this feature is only supported for z/OS Version 1.8 or higher. This is an output parameter of type INTEGER.
completion-type
   Identifies the job's completion type. Possible values are:
   0  No completion information is available.
   1  Job ended normally.
   2  Job ended by completion code.
   3  Job had a JCL error.
   4  Job was canceled.
   5  Job terminated abnormally.
   6  Converter terminated abnormally while processing the job.
   7  Job failed security checks.
   8  Job failed in end-of-memory.
   This parameter is always null if querying in a JES3 z/OS Version 1.7 or earlier system. For JES3, this feature is only supported for z/OS Version 1.8 or higher.
   The completion-type information is the last six bits in the field STTRMXRC of the IAZSSST mapping macro. This information is returned via SSI 80. For additional information, see the discussion of the SSST macro in z/OS MVS Data Areas.
   This is an output parameter of type INTEGER.
system-abend-code
   Returns the system abend code if an abnormal termination occurs. This parameter is always null if querying in a JES3 z/OS Version 1.7 or earlier system. For JES3, this feature is only supported for z/OS Version 1.8 or higher. This is an output parameter of type INTEGER.
user-abend-code
   Returns the user abend code if an abnormal termination occurs.
   This parameter is always null if querying in a JES3 z/OS Version 1.7 or earlier system. For JES3, this feature is only supported for z/OS Version 1.8 or higher. This is an output parameter of type INTEGER.
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   4   The job was not found, or the job status is unknown.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. This is an output parameter of type VARCHAR(1331).
Example
The following C language sample shows how to invoke ADMIN_JOB_QUERY:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_JOB_QUERY parameters                             */
  char userid[129];                     /* User ID                  */
  short int ind_userid;                 /* Indicator variable       */
  char password[25];                    /* Password                 */
  short int ind_password;               /* Indicator variable       */
  char jobid[9];                        /* Job ID                   */
  short int ind_jobid;                  /* Indicator variable       */
  long int stat;                        /* Job status               */
  short int ind_stat;                   /* Indicator variable       */
  long int maxrc;                       /* Job maxcc                */
  short int ind_maxrc;                  /* Indicator variable       */
  long int comptype;                    /* Job completion type      */
  short int ind_comptype;               /* Indicator variable       */
  long int sabndcd;                     /* System abend code        */
  short int ind_sabndcd;                /* Indicator variable       */
  long int uabndcd;                     /* User abend code          */
  short int ind_uabndcd;                /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Assign values to input parameters to query the status and      */
  /* completion code of a job                                       */
  /* Set the indicator variables to 0 for non-null input parameters */
  /******************************************************************/
  strcpy(userid, "USRT001");
  ind_userid = 0;
  strcpy(password, "N1CETEST");
  ind_password = 0;
  strcpy(jobid, "JOB00111");
  ind_jobid = 0;
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_JOB_QUERY                  */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_JOB_QUERY
                       (:userid   :ind_userid,
                        :password :ind_password,
                        :jobid    :ind_jobid,
                        :stat     :ind_stat,
                        :maxrc    :ind_maxrc,
                        :comptype :ind_comptype,
                        :sabndcd  :ind_sabndcd,
                        :uabndcd  :ind_uabndcd,
                        :retcd    :ind_retcd,
                        :errmsg   :ind_errmsg);
  return(retcd);
}
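A caller usually needs to interpret the returned status and completion code. The following hypothetical fragment could replace the plain return(retcd) above; it relies only on the output values documented in Option descriptions.

  /* Hedged follow-up: interpret the query result. Status 3 means   */
  /* the job finished (OUTPUT); maxrc then holds the completion     */
  /* code when its indicator variable is not -1 (not null).         */
  if (retcd == 0 && stat == 3)
  {
    if (ind_maxrc != -1)
      printf("Job %s ended, completion code %ld\n", jobid, maxrc);
  }
  else if (retcd == 0 && stat == 4)
    printf("Job %s not found\n", jobid);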
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v status
v max-RC
v completion-type
v system-abend-code
v user-abend-code
v return-code
v message
ADMIN_JOB_SUBMIT

Environment
ADMIN_JOB_SUBMIT runs in a WLM-established stored procedures address space. The load module for ADMIN_JOB_SUBMIT, DSNADMJS, must be program controlled if the BPX.DAEMON.HFSCTL FACILITY class profile has not been set up. For information on how to define DSNADMJS to program control, see installation job DSNTIJRA.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMJS
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_JOB_SUBMIT ( user-ID, password, job-ID, return-code, message )
Option descriptions
user-ID
   Specifies the user ID under which the job is submitted. This is an input parameter of type VARCHAR(128) and cannot be null.
password
   Specifies the password associated with the input parameter user-ID. The value of password is passed to the stored procedure as part of the payload, and is not encrypted. It is not stored in the dynamic cache when parameter markers are used. This is an input parameter of type VARCHAR(24) and cannot be null.
job-ID
   Identifies the JES2 or JES3 job ID of the submitted job. This is an output parameter of type CHAR(8).
return-code
   Provides the return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The call did not complete successfully. The message output parameter contains messages describing the error.
This is an output parameter of type INTEGER.
message
   Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Additional input
In addition to the input parameters, the stored procedure submits the job's JCL from the created global temporary table SYSIBM.JOB_JCL for execution. The following table shows the format of the created global temporary table SYSIBM.JOB_JCL:
Table 358. Additional input for the ADMIN_JOB_SUBMIT stored procedure

Column name   Data type     Contents
ROWNUM        INTEGER       Sequence number of the table row, from 1 to n
STMT          VARCHAR(80)   A JCL statement
Example
The following C language sample shows how to invoke ADMIN_JOB_SUBMIT:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )      /* Argument count and list  */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_JOB_SUBMIT parameters                            */
  char userid[129];                     /* User ID                  */
  short int ind_userid;                 /* Indicator variable       */
  char password[25];                    /* Password                 */
  short int ind_password;               /* Indicator variable       */
  char jobid[9];                        /* Job ID                   */
  short int ind_jobid;                  /* Indicator variable       */
  long int retcd;                       /* Return code              */
  short int ind_retcd;                  /* Indicator variable       */
  char errmsg[1332];                    /* Error message            */
  short int ind_errmsg;                 /* Indicator variable       */
  /* Temporary table SYSIBM.JOB_JCL columns                         */
  long int rownum;                      /* Sequence number of the   */
                                        /* table row                */
  char stmt[81];                        /* JCL statement            */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Create the JCL job to be submitted for execution               */
  /******************************************************************/
  char jclstmt[12][50] =
    { "//IEBCOPY JOB ,CLASS=K,MSGCLASS=H,MSGLEVEL=(1,1)",
      "//STEP010 EXEC PGM=IEBCOPY",
      "//SYSPRINT DD SYSOUT=*",
      "//SYSUT3 DD SPACE=(TRK,(1,1)),UNIT=SYSDA",
      "//SYSUT4 DD SPACE=(TRK,(1,1)),UNIT=SYSDA",
      "//*",
      "//DDI1 DD DSN=USER.DEV.LOADLIB1,DISP=SHR",
      "//DDO1 DD DSN=USER.DEV.LOADLIB2,DISP=SHR",
      "//SYSIN DD *",
      " COPY OUTDD=DDO1,INDD=DDI1",
      "/*",
      "//*" };

  int i = 0;                            /* Loop counter             */

  /******************************************************************/
  /* Assign values to input parameters                              */
  /* Set the indicator variables to 0 for non-null input parameters */
  /******************************************************************/
  strcpy(userid, "USRT001");
  ind_userid = 0;
  strcpy(password, "N1CETEST");
  ind_password = 0;
  /******************************************************************/
  /* Clear temporary table SYSIBM.JOB_JCL                           */
  /******************************************************************/
  EXEC SQL DELETE FROM SYSIBM.JOB_JCL;
  /******************************************************************/
  /* Insert the JCL job into the temporary table SYSIBM.JOB_JCL     */
  /******************************************************************/
  for (i = 0; i < 12; i++)
  {
    rownum = i+1;
    strcpy(stmt, jclstmt[i]);
    EXEC SQL INSERT INTO SYSIBM.JOB_JCL
                    ( ROWNUM, STMT)
             VALUES (:rownum, :stmt);
  };
  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_JOB_SUBMIT                 */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_JOB_SUBMIT
                       (:userid   :ind_userid,
                        :password :ind_password,
                        :jobid    :ind_jobid,
                        :retcd    :ind_retcd,
                        :errmsg   :ind_errmsg);
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions:
v job-ID
v return-code
v message
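The job stored procedures are natural building blocks for a submit-poll-fetch sequence: ADMIN_JOB_SUBMIT returns the job ID, ADMIN_JOB_QUERY reports when the job reaches OUTPUT status, and ADMIN_JOB_FETCH retrieves its SYSOUT. The following hedged outline combines the three calls; it assumes the host variable declarations from the preceding examples are merged into one program, and a production version would pause between polls.

  /* Hypothetical workflow: submit JCL, poll until the job leaves   */
  /* INPUT (1) and ACTIVE (2) status, then fetch its SYSOUT.        */
  EXEC SQL CALL SYSPROC.ADMIN_JOB_SUBMIT
                       (:userid :ind_userid, :password :ind_password,
                        :jobid  :ind_jobid,  :retcd    :ind_retcd,
                        :errmsg :ind_errmsg);
  do
  { /* A real program would wait between polls                      */
    EXEC SQL CALL SYSPROC.ADMIN_JOB_QUERY
                         (:userid  :ind_userid,  :password :ind_password,
                          :jobid   :ind_jobid,   :stat     :ind_stat,
                          :maxrc   :ind_maxrc,   :comptype :ind_comptype,
                          :sabndcd :ind_sabndcd, :uabndcd  :ind_uabndcd,
                          :retcd   :ind_retcd,   :errmsg   :ind_errmsg);
  } while (retcd == 0 && (stat == 1 || stat == 2));
  if (retcd == 0 && stat == 3)          /* Job finished with output */
  {
    EXEC SQL CALL SYSPROC.ADMIN_JOB_FETCH
                         (:userid :ind_userid, :password :ind_password,
                          :jobid  :ind_jobid,  :retcd    :ind_retcd,
                          :errmsg :ind_errmsg);
  }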
ADMIN_UTL_SCHEDULE

Environment
ADMIN_UTL_SCHEDULE runs in a WLM-established stored procedures address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMUM
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority

The ADMIN_UTL_SCHEDULE stored procedure internally calls the following stored procedures:
v ADMIN_COMMAND_DB2, to execute the DB2 DISPLAY UTILITY command
v ADMIN_INFO_SSID, to obtain the subsystem ID of the connected DB2 subsystem
v ADMIN_UTL_SORT, to sort objects into parallel execution units
v DSNUTILU, to run the requested utilities

The owner of the package or plan that contains the CALL ADMIN_UTL_SCHEDULE statement must also have the authorization required to execute these stored procedures and run the requested utilities. To determine the privilege or authority required to call DSNUTILU, see DB2 Utility Guide and Reference.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:

CALL SYSPROC.ADMIN_UTL_SCHEDULE ( max-parallel, optimize-workload|NULL, stop-condition|NULL, utility-ID-stem, shutdown-duration|NULL, number-of-objects, utilities-run, highest-return-code, parallel-tasks, return-code, message )
Option descriptions
max-parallel
   Specifies the maximum number of parallel threads that may be started. The actual number may be lower than the requested number based on the optimizing sort result. Possible values are 1 to 99. This is an input parameter of type SMALLINT and cannot be null.
optimize-workload
   Specifies whether the parallel utility executions should be sorted to achieve the shortest overall execution time. Possible values are:
   NO or null
      The workload is not to be sorted.
   YES
      The workload is to be sorted.
   This is an input parameter of type VARCHAR(8). The default value is NO.
stop-condition
   Specifies the utility execution condition after which ADMIN_UTL_SCHEDULE will not continue starting new utility executions in parallel, but will wait until all currently running utilities have completed and will then return to the caller. Possible values are:
   AUTHORIZ or null
      No new utility executions will be started after one of the currently running utilities has encountered a return code from DSNUTILU of 12 or higher.
   WARNING
      No new utility executions will be started after one of the currently running utilities has encountered a return code from DSNUTILU of 4 or higher.
   ERROR
      No new utility executions will be started after one of the currently running utilities has encountered a return code from DSNUTILU of 8 or higher.
   This is an input parameter of type VARCHAR(8). The default value is AUTHORIZ.
utility-ID-stem
   Specifies the first part of the utility ID of a utility execution in a parallel thread. The complete utility ID is dynamically created in the form utility-ID-stem followed by TT followed by NNNNNN, where:
   TT
      The zero-padded number of the subtask executing the utility
NNNNNN A consecutive number of utilities executed in a subtask. For example, utilityidstem02000005 is the fifth utility execution that has been processed by the second subtask. This is an input parameter of type VARCHAR(8) and cannot be null. shutdown-duration Specifies the number of seconds that ADMIN_UTL_SCHEDULE will wait for a utility execution to complete before a shutdown is initiated. When a shutdown is initiated, current utility executions can run to completion, and no new utility will be started. Possible values are: null A shutdown will not be performed
1 to 999999999999999 A shutdown will be performed after this many seconds This is an input parameter of type FLOAT(8). The default value is null. number-of-objects As an input parameter, this specifies the number of utility executions and their sorting objects that were passed in the SYSIBM.UTILITY_OBJECTS table. Possible values are 1 to 999999. As an output parameter, this specifies the number of objects that were passed in SYSIBM.UTILITY_OBJECTS table that are found in the DB2 catalog. This is an input and output parameter of type INTEGER and cannot be null. utilities-run Indicates the number of actual utility executions. This is an output parameter of type INTEGER. highest-return-code Indicates the highest return code from DSNUTILU for all utility executions. This is an output parameter of type INTEGER.
parallel-tasks
    Indicates the actual number of parallel tasks that were started to execute the utility in parallel. This is an output parameter of type SMALLINT.

return-code
    Provides the return code from the stored procedure. Possible values are:
    0
        All parallel utility executions ran successfully.
    4
        The statistics for one or more sorting objects have not been gathered in the catalog.
    12
        An ADMIN_UTL_SCHEDULE error occurred or all the objects passed in the SYSIBM.UTILITY_OBJECTS table are not found in the DB2 catalog. The message parameter contains details.
    This is an output parameter of type INTEGER.

message
    Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Additional input
In addition to the input parameters, the stored procedure reads from the created global temporary tables SYSIBM.UTILITY_OBJECTS and SYSIBM.UTILITY_STMT. The stored procedure reads objects for utility execution from SYSIBM.UTILITY_OBJECTS. The following table shows the format of the created global temporary table SYSIBM.UTILITY_OBJECTS:
Table 359. Format of the input objects

OBJECTID (INTEGER)
    A unique positive identifier for the object the utility execution is associated with. When you insert multiple rows, increment OBJECTID by 1, starting at 0 for every insert.

STMTID (INTEGER)
    A statement row in SYSIBM.UTILITY_STMT.

TYPE (VARCHAR(10))
    Object type:
    v TABLESPACE
    v INDEXSPACE
    v TABLE
    v INDEX
    v STOGROUP
QUALIFIER (VARCHAR(128))
    Qualifier (database or creator) of the object in NAME; empty or null for STOGROUP. If the qualifier is not provided and the type of the object is TABLESPACE or INDEXSPACE, then the default database is DSNDB04. If the object is of the type TABLE or INDEX, the schema is the current SQL authorization ID.

NAME (VARCHAR(128))
    Unqualified name of the object. NAME cannot be null. If the object no longer exists, it will be ignored and the corresponding utility will not be executed.

PART (SMALLINT)
    Partition number of the object for which the utility will be invoked. Null or 0 if the object is not partitioned.

RESTART (VARCHAR(8))
    Restart parameter of DSNUTILU.
UTILITY_NAME (VARCHAR(20))
    Utility name. UTILITY_NAME cannot be null. Recommendation: Sort objects for the same utility. Possible values are:
    v CHECK DATA
    v CHECK INDEX
    v CHECK LOB
    v COPY
    v COPYTOCOPY
    v DIAGNOSE
    v LOAD
    v MERGECOPY
    v MODIFY RECOVERY
    v MODIFY STATISTICS
    v QUIESCE
    v REBUILD INDEX
    v RECOVER
    v REORG INDEX
    v REORG LOB
    v REORG TABLESPACE
    v REPAIR
    v REPORT RECOVERY
    v REPORT TABLESPACESET
    v RUNSTATS INDEX
    v RUNSTATS TABLESPACE
    v STOSPACE
    v UNLOAD
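For example, the following INSERT statement adds a single RUNSTATS candidate to the temporary table. The object names QUAL01.TBSP01 are the same hypothetical names that are used in the C example later in this section:

   INSERT INTO SYSIBM.UTILITY_OBJECTS
          (OBJECTID, STMTID, TYPE, QUALIFIER, NAME, PART,
           RESTART, UTILITY_NAME)
   VALUES (0, 1, 'TABLESPACE', 'QUAL01', 'TBSP01', 0, 'NO',
           'RUNSTATS TABLESPACE');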
The stored procedure reads the corresponding utility statements from SYSIBM.UTILITY_STMT. The following table shows the format of the created global temporary table SYSIBM.UTILITY_STMT:
Table 360. Format of the utility statements

STMTID (INTEGER)
    A unique positive identifier for a single utility execution statement.
STMTSEQ (INTEGER)
    If a utility statement exceeds 4000 characters, it can be split up and inserted into SYSIBM.UTILITY_STMT with the sequence starting at 0, and then being incremented with every insert. During the actual execution, the statement pieces are concatenated without any separation characters or blanks in between.

UTSTMT (VARCHAR(4000))
    A utility statement or part of a utility statement. A placeholder &OBJECT. can be used to be replaced by the object name passed in SYSIBM.UTILITY_OBJECTS. A placeholder &THDINDEX. can be used to be replaced by the current thread index (01-99) of the utility being executed. You can use this when running REORG with SHRLEVEL CHANGE in parallel, so that you can specify a different mapping table for each thread of the utility execution.
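For example, the following INSERT statement supplies a REORG statement that uses both placeholders, so that each parallel thread operates on its own object and names its own mapping table. The mapping-table naming pattern MAPTB&THDINDEX. is a hypothetical choice for this sketch:

   INSERT INTO SYSIBM.UTILITY_STMT (STMTID, STMTSEQ, UTSTMT)
   VALUES (1, 0, 'REORG TABLESPACE &OBJECT. SHRLEVEL CHANGE
           MAPPINGTABLE MAPTB&THDINDEX.');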
Example
The following C language sample shows how to invoke ADMIN_UTL_SCHEDULE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list   */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_UTL_SCHEDULE parameters                          */
  short int maxparallel;               /* Max parallel              */
  short int ind_maxparallel;           /* Indicator variable        */
  char optimizeworkload[9];            /* Optimize workload         */
  short int ind_optimizeworkload;      /* Indicator variable        */
  char stoponcond[9];                  /* Stop on condition         */
  short int ind_stoponcond;            /* Indicator variable        */
  char utilityidstem[9];               /* Utility ID stem           */
  short int ind_utilityidstem;         /* Indicator variable        */
  float shutdownduration;              /* Shutdown duration         */
  short int ind_shutdownduration;      /* Indicator variable        */
  long int numberofobjects;            /* Number of objects         */
  short int ind_numberofobjects;       /* Indicator variable        */
  long int utilitiesexec;              /* Utilities executed        */
  short int ind_utilitiesexec;         /* Indicator variable        */
  long int highestretcd;               /* DSNUTILU highest ret code */
  short int ind_highestretcd;          /* Indicator variable        */
  long int paralleltasks;              /* Parallel tasks            */
  short int ind_paralleltasks;         /* Indicator variable        */
  long int retcd;                      /* Return code               */
  short int ind_retcd;                 /* Indicator variable        */
  char errmsg[1332];                   /* Error message             */
  short int ind_errmsg;                /* Indicator variable        */

  /* Temporary table SYSIBM.UTILITY_OBJECTS columns                 */
  long int objectid;                   /* Object id                 */
  long int stmtid;                     /* Statement ID              */
  char type[11];                       /* Object type (e.g. "INDEX")*/
  char qualifier[129];                 /* Object qualifier          */
  short int ind_qualifier;             /* Object qualifier ind. var.*/
  char name[129];                      /* Object name (qual. or unq.)*/
  short int part;                      /* Optional partition        */
  short int ind_part;                  /* Partition indicator var   */
  char restart[9];                     /* DSNUTILU restart parm     */
  char utname[21];                     /* Utility name              */

  /* Temporary table SYSIBM.UTILITY_STMT columns                    */
  long int stmtid2;                    /* Statement ID              */
  long int stmtseq;                    /* Utility stmt sequence     */
  char utstmt[4001];                   /* Utility statement         */

  /* Result set locators                                            */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc2;

  /* First result set row                                           */
  long int objectid1;                  /* Object id                 */
  long int textseq;                    /* Object utility output seq */
  char text[255];                      /* Object utility output     */

  /* Second result set row                                          */
  long int objectid2;                  /* Object id                 */
  long int utilretcd;                  /* DSNUTILU return code      */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Set up the objects to be sorted                                */
  /******************************************************************/
  long int objid_array[4]  = {1, 2, 3, 4};
  long int stmtid_array[4] = {1, 1, 1, 1};
  char type_array[4][11]   = {"TABLESPACE", "TABLESPACE",
                              "TABLESPACE", "TABLESPACE"};
  char qual_array[4][129]  = {"QUAL01", "QUAL01", "QUAL01", "QUAL01"};
  char name_array[4][129]  = {"TBSP01", "TBSP02", "TBSP03", "TBSP04"};
  short int part_array[4]  = {0, 0, 0, 0};
  char restart_array[4][9] = {"NO", "NO", "NO", "NO"};
  char utname_array[4][21] = {"RUNSTATS TABLESPACE", "RUNSTATS TABLESPACE",
                              "RUNSTATS TABLESPACE", "RUNSTATS TABLESPACE"};
  int i = 0;                           /* Loop counter              */
  stmtid2 = 1;
  stmtseq = 1;
  strcpy(utstmt,
         "RUNSTATS TABLESPACE &OBJECT. TABLE(ALL) SAMPLE 25 INDEX(ALL)");

  /******************************************************************/
  /* Assign values to input parameters                              */
  /* Set the indicator variables to 0 for non-null input parameters */
  /* Set the indicator variables to -1 for null input parameters    */
  /******************************************************************/
  maxparallel = 2;
  ind_maxparallel = 0;
  strcpy(optimizeworkload, "YES");
  ind_optimizeworkload = 0;
  strcpy(stoponcond, "AUTHORIZ");
  ind_stoponcond = 0;
  strcpy(utilityidstem, "DSNADMUM");
  ind_utilityidstem = 0;
  numberofobjects = 4;
  ind_numberofobjects = 0;
  ind_shutdownduration = -1;

  /******************************************************************/
  /* Clear temporary table SYSIBM.UTILITY_OBJECTS                   */
  /******************************************************************/
  EXEC SQL DELETE FROM SYSIBM.UTILITY_OBJECTS;

  /******************************************************************/
  /* Insert the objects into the temporary table                    */
  /* SYSIBM.UTILITY_OBJECTS                                         */
  /******************************************************************/
  for (i = 0; i < 4; i++)
  {
    objectid = objid_array[i];
    stmtid = stmtid_array[i];
    strcpy(type, type_array[i]);
    strcpy(qualifier, qual_array[i]);
    strcpy(name, name_array[i]);
    part = part_array[i];
    strcpy(restart, restart_array[i]);
    strcpy(utname, utname_array[i]);
    EXEC SQL INSERT INTO SYSIBM.UTILITY_OBJECTS
             (OBJECTID, STMTID, TYPE, QUALIFIER, NAME, PART,
              RESTART, UTILITY_NAME)
      VALUES (:objectid, :stmtid, :type, :qualifier, :name, :part,
              :restart, :utname);
  };

  /******************************************************************/
  /* Clear temporary table SYSIBM.UTILITY_STMT                      */
  /******************************************************************/
  EXEC SQL DELETE FROM SYSIBM.UTILITY_STMT;

  /******************************************************************/
  /* Insert the utility statement into the temporary table          */
  /* SYSIBM.UTILITY_STMT                                            */
  /******************************************************************/
  EXEC SQL INSERT INTO SYSIBM.UTILITY_STMT
           (STMTID, STMTSEQ, UTSTMT)
    VALUES (:stmtid2, :stmtseq, :utstmt);

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_UTL_SCHEDULE               */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_UTL_SCHEDULE
                (:maxparallel       :ind_maxparallel,
                 :optimizeworkload  :ind_optimizeworkload,
                 :stoponcond        :ind_stoponcond,
                 :utilityidstem     :ind_utilityidstem,
                 :shutdownduration  :ind_shutdownduration,
                 :numberofobjects   :ind_numberofobjects,
                 :utilitiesexec     :ind_utilitiesexec,
                 :highestretcd      :ind_highestretcd,
                 :paralleltasks     :ind_paralleltasks,
                 :retcd             :ind_retcd,
                 :errmsg            :ind_errmsg);
  /******************************************************************/
  /* Retrieve result sets when the SQLCODE from the call is +466,   */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)              /* Result sets were returned    */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1, :rs_loc2)
             WITH PROCEDURE SYSPROC.ADMIN_UTL_SCHEDULE;

    /* Associate a cursor with the first result set                 */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Associate a cursor with the second result set                */
    EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :rs_loc2;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* first result set                                             */
    EXEC SQL FETCH C1 INTO :objectid1, :textseq, :text;
    while(SQLCODE==0)
    {
      EXEC SQL FETCH C1 INTO :objectid1, :textseq, :text;
    }

    /* Perform fetches using C2 to retrieve all rows from the       */
    /* second result set                                            */
    EXEC SQL FETCH C2 INTO :objectid2, :utilretcd;
    while(SQLCODE==0)
    {
      EXEC SQL FETCH C2 INTO :objectid2, :utilretcd;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1370:
v number-of-objects
v utilities-run
v highest-return-code
v parallel-tasks
v return-code
v message

In addition to the preceding output, the stored procedure returns two result sets.
The first result set is returned in the created global temporary table SYSIBM.UTILITY_SYSPRINT and contains the output from the individual utility executions. The following table shows the format of the created global temporary table SYSIBM.UTILITY_SYSPRINT:
Table 361. Result set row for first ADMIN_UTL_SCHEDULE result set

OBJECTID (INTEGER)
    A unique positive identifier for the object the utility execution is associated with.

TEXTSEQ (INTEGER)
    Sequence number of utility execution output statements for the object whose unique identifier is specified in the OBJECTID column.

TEXT (VARCHAR(254))
    A utility execution output statement.
The second result set is returned in the created global temporary table SYSIBM.UTILITY_RETCODE and contains the return code for each of the individual DSNUTILU executions. The following table shows the format of the output created global temporary table SYSIBM.UTILITY_RETCODE:
Table 362. Result set row for second ADMIN_UTL_SCHEDULE result set

OBJECTID (INTEGER)
    A unique positive identifier for the object the utility execution is associated with.

RETCODE (INTEGER)
    Return code from DSNUTILU for this utility execution.
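If you prefer to inspect these results with SQL from the same connection instead of fetching the returned cursors, queries such as the following sketch could be used (this assumes the session's temporary table contents are still available to the caller):

   SELECT OBJECTID, TEXTSEQ, TEXT
     FROM SYSIBM.UTILITY_SYSPRINT
    ORDER BY OBJECTID, TEXTSEQ;

   SELECT OBJECTID, RETCODE
     FROM SYSIBM.UTILITY_RETCODE
    WHERE RETCODE >= 8;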
ADMIN_UTL_SORT

Environment
ADMIN_UTL_SORT runs in a WLM-established stored procedures address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNADMUS
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority

The owner of the package or plan that contains the CALL statement must also have SELECT authority on the following catalog tables:
v v v v
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL SYSPROC.ADMIN_UTL_SORT
     ( max-parallel,
       max-per-job | NULL,
       optimize-workload | NULL,
       batch-execution | NULL,
       number-of-objects,
       parallel-units,
       max-objects,
       max-sequences,
       return-code,
       message )
Option descriptions
max-parallel
    Specifies the maximum number of parallel units. The actual number may be lower than the requested number based on the optimizing sort result. Possible values are 1 to 99.
    This is an input parameter of type SMALLINT and cannot be null.

max-per-job
    Specifies the maximum number of steps per job for batch execution. Possible values are:
    1 to 255
        Steps per job for batch execution
    null
        Online execution
    This is an input parameter of type SMALLINT. This parameter cannot be null if batch-execution is YES.

optimize-workload
    Specifies whether the parallel units should be sorted to achieve the shortest overall execution time. Possible values are:
    NO or null
        The workload is not to be sorted.
    YES
        The workload is to be sorted.
    This is an input parameter of type VARCHAR(8). The default value is NO.

batch-execution
    Indicates whether the objects should be sorted for online or batch (JCL) execution. Possible values are:
    NO or null
        The workload is for online execution.
    YES
        The workload is for batch execution.
    This is an input parameter of type VARCHAR(8). The default value is NO.

number-of-objects
    As an input parameter, this specifies the number of objects that were passed in SYSIBM.UTILITY_SORT_OBJ. Possible values are 1 to 999999.
    As an output parameter, this specifies the number of objects that were passed in the SYSIBM.UTILITY_SORT_OBJ table that are found in the DB2 catalog.
    This is an input and output parameter of type INTEGER and cannot be null.

parallel-units
    Indicates the number of recommended parallel units. This is an output parameter of type SMALLINT.

max-objects
    Indicates the maximum number of objects in any parallel unit. This is an output parameter of type INTEGER.

max-sequences
    Indicates the number of jobs in any parallel unit. This is an output parameter of type INTEGER.

return-code
    Provides the return code from the stored procedure. Possible values are:
    0
        Sort ran successfully.
    4
        The statistics for one or more sorting objects have not been gathered in the catalog or the object no longer exists.
    12
        An ADMIN_UTL_SORT error occurred. The message parameter will contain details.
    This is an output parameter of type INTEGER.

message
    Contains messages describing the error encountered by the stored procedure. If no error occurred, then no message is returned. The first messages in this area are generated by the stored procedure. Messages that are generated by DB2 might follow the first messages. This is an output parameter of type VARCHAR(1331).
Additional input
In addition to the input parameters, this stored procedure reads the objects for sorting and the corresponding utility names from the created global temporary table SYSIBM.UTILITY_SORT_OBJ. The following table shows the format of the created global temporary table SYSIBM.UTILITY_SORT_OBJ:
Table 363. Input for the ADMIN_UTL_SORT stored procedure

OBJECTID (INTEGER)
    A unique positive identifier for the object the utility execution is associated with. When you insert multiple rows, increment OBJECTID by 1, starting at 0 for every insert.

TYPE (VARCHAR(10))
    Object type:
    v TABLESPACE
    v INDEXSPACE
    v TABLE
    v INDEX
    v STOGROUP

QUALIFIER (VARCHAR(128))
    Qualifier (database or creator) of the object in NAME; empty or null for STOGROUP. If the qualifier is not provided and the type of the object is TABLESPACE or INDEXSPACE, then the default database is DSNDB04. If the object is of the type TABLE or INDEX, the schema is the current SQL authorization ID. If the object no longer exists, it will be ignored.

NAME (VARCHAR(128))
    Unqualified name of the object. NAME cannot be null.

PART (SMALLINT)
    Partition number of the object for which the utility will be invoked. Null or 0 if the object is not partitioned.
UTILITY_NAME (VARCHAR(20))
    Utility name. UTILITY_NAME cannot be null. Recommendation: Sort objects for the same utility. Possible values are:
    v CHECK DATA
    v CHECK INDEX
    v CHECK LOB
    v COPY
    v COPYTOCOPY
    v DIAGNOSE
    v LOAD
    v MERGECOPY
    v MODIFY RECOVERY
    v MODIFY STATISTICS
    v QUIESCE
    v REBUILD INDEX
    v RECOVER
    v REORG INDEX
    v REORG LOB
    v REORG TABLESPACE
    v REPAIR
    v REPORT RECOVERY
    v REPORT TABLESPACESET
    v RUNSTATS INDEX
    v RUNSTATS TABLESPACE
    v STOSPACE
    v UNLOAD
Example
The following C language sample shows how to invoke ADMIN_UTL_SORT:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/******************** DB2 SQL Communication Area ********************/
EXEC SQL INCLUDE SQLCA;

int main( int argc, char *argv[] )     /* Argument count and list   */
{
  /****************** DB2 Host Variables ****************************/
  EXEC SQL BEGIN DECLARE SECTION;
  /* SYSPROC.ADMIN_UTL_SORT parameters                              */
  short int maxparallel;               /* Max parallel              */
  short int ind_maxparallel;           /* Indicator variable        */
  short int maxperjob;                 /* Max per job               */
  short int ind_maxperjob;             /* Indicator variable        */
  char optimizeworkload[9];            /* Optimize workload         */
  short int ind_optimizeworkload;      /* Indicator variable        */
  char batchexecution[9];              /* Batch execution           */
  short int ind_batchexecution;        /* Indicator variable        */
  long int numberofobjects;            /* Number of objects         */
  short int ind_numberofobjects;       /* Indicator variable        */
  short int parallelunits;             /* Parallel units            */
  short int ind_parallelunits;         /* Indicator variable        */
  long int maxobjects;                 /* Maximum objects per       */
                                       /* parallel unit             */
  short int ind_maxobjects;            /* Indicator variable        */
  long int maxseqs;                    /* Maximum jobs per unit     */
  short int ind_maxseqs;               /* Indicator variable        */
  long int retcd;                      /* Return code               */
  short int ind_retcd;                 /* Indicator variable        */
  char errmsg[1332];                   /* Error message             */
  short int ind_errmsg;                /* Indicator variable        */

  /* Temporary table SYSIBM.UTILITY_SORT_OBJ columns                */
  long int objectid;                   /* Object id                 */
  char type[11];                       /* Object type (e.g. "INDEX")*/
  char qualifier[129];                 /* Object qualifier          */
  short int ind_qualifier;             /* Object qualifier ind. var.*/
  char name[129];                      /* Object name (qual. or unq.)*/
  short int part;                      /* Optional partition        */
  short int ind_part;                  /* Partition indicator var   */
  char utname[21];                     /* Utility name              */

  /* Result set locators                                            */
  volatile SQL TYPE IS RESULT_SET_LOCATOR *rs_loc1;

  /* Result set row                                                 */
  long int resobjectid;                /* Object id                 */
  short int unit;                      /* Execution unit value      */
  long int unitseq;                    /* Job seq within exec unit  */
  long int unitseqpos;                 /* Pos within exec unit or   */
                                       /* step within job           */
  char exclusive[2];                   /* Exclusive execution flag  */
  EXEC SQL END DECLARE SECTION;

  /******************************************************************/
  /* Set up the objects to be sorted                                */
  /******************************************************************/
  long int objid_array[4]  = {0, 1, 2, 3};
  char type_array[4][11]   = {"TABLESPACE", "TABLESPACE",
                              "TABLESPACE", "TABLESPACE"};
  char qual_array[4][129]  = {"QUAL01", "QUAL01", "QUAL01", "QUAL01"};
  char name_array[4][129]  = {"TBSP01", "TBSP02", "TBSP03", "TBSP04"};
  short int part_array[4]  = {0, 0, 0, 0};
  char utname_array[4][21] = {"RUNSTATS TABLESPACE", "RUNSTATS TABLESPACE",
                              "RUNSTATS TABLESPACE", "RUNSTATS TABLESPACE"};
  int i = 0;                           /* Loop counter              */

  /******************************************************************/
  /* Assign values to input parameters                              */
  /* Set the indicator variables to 0 for non-null input parameters */
  /* Set the indicator variables to -1 for null input parameters    */
  /******************************************************************/
  maxparallel = 2;
  ind_maxparallel = 0;
  ind_maxperjob = -1;
  strcpy(optimizeworkload, "YES");
  ind_optimizeworkload = 0;
  strcpy(batchexecution, "NO");
  ind_batchexecution = 0;
  numberofobjects = 4;
  ind_numberofobjects = 0;

  /******************************************************************/
  /* Clear temporary table SYSIBM.UTILITY_SORT_OBJ                  */
  /******************************************************************/
  EXEC SQL DELETE FROM SYSIBM.UTILITY_SORT_OBJ;

  /******************************************************************/
  /* Insert the objects into the temporary table                    */
  /* SYSIBM.UTILITY_SORT_OBJ                                        */
  /******************************************************************/
  for (i = 0; i < 4; i++)
  {
    objectid = objid_array[i];
    strcpy(type, type_array[i]);
    strcpy(qualifier, qual_array[i]);
    strcpy(name, name_array[i]);
    part = part_array[i];
    strcpy(utname, utname_array[i]);
    EXEC SQL INSERT INTO SYSIBM.UTILITY_SORT_OBJ
             (OBJECTID, TYPE, QUALIFIER, NAME, PART, UTILITY_NAME)
      VALUES (:objectid, :type, :qualifier, :name, :part, :utname);
  };

  /******************************************************************/
  /* Call stored procedure SYSPROC.ADMIN_UTL_SORT                   */
  /******************************************************************/
  EXEC SQL CALL SYSPROC.ADMIN_UTL_SORT
                (:maxparallel       :ind_maxparallel,
                 :maxperjob         :ind_maxperjob,
                 :optimizeworkload  :ind_optimizeworkload,
                 :batchexecution    :ind_batchexecution,
                 :numberofobjects   :ind_numberofobjects,
                 :parallelunits     :ind_parallelunits,
                 :maxobjects        :ind_maxobjects,
                 :maxseqs           :ind_maxseqs,
                 :retcd             :ind_retcd,
                 :errmsg            :ind_errmsg);

  /******************************************************************/
  /* Retrieve result set when the SQLCODE from the call is +466,    */
  /* which indicates that result sets were returned                 */
  /******************************************************************/
  if (SQLCODE == +466)              /* Result sets were returned    */
  {
    /* Establish a link between the result set and its locator      */
    EXEC SQL ASSOCIATE LOCATORS (:rs_loc1)
             WITH PROCEDURE SYSPROC.ADMIN_UTL_SORT;

    /* Associate a cursor with the result set                       */
    EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :rs_loc1;

    /* Perform fetches using C1 to retrieve all rows from the       */
    /* result set                                                   */
    EXEC SQL FETCH C1 INTO :resobjectid, :unit, :unitseq,
                           :unitseqpos, :exclusive;
    while(SQLCODE==0)
    {
      EXEC SQL FETCH C1 INTO :resobjectid, :unit, :unitseq,
                             :unitseqpos, :exclusive;
    }
  }
  return(retcd);
}
Output
This stored procedure returns the following output parameters, which are described in Option descriptions on page 1380:
v number-of-objects
v parallel-units
v max-objects
v max-sequences
v return-code
v message

In addition to the preceding output, the stored procedure returns one result set that contains the objects sorted into parallel execution units. The following table shows the format of the result set returned in the created global temporary table SYSIBM.UTILITY_SORT_OUT:
Table 364. Result set row for ADMIN_UTL_SORT result set

OBJECTID (INTEGER)
    A unique positive identifier for the object.

UNIT (SMALLINT)
    Number of parallel execution unit.

UNIT_SEQ (INTEGER)
    Job sequence within parallel execution unit.

UNIT_SEQ_POS (INTEGER)
    Step within job.

EXCLUSIVE (CHAR(1))
    Requires execution with nothing running in parallel.
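To review the recommended schedule in execution order, a query along the following lines could be run in the same session (again a sketch, assuming the temporary table contents are still available to the caller's connection):

   SELECT UNIT, UNIT_SEQ, UNIT_SEQ_POS, OBJECTID, EXCLUSIVE
     FROM SYSIBM.UTILITY_SORT_OUT
    ORDER BY UNIT, UNIT_SEQ, UNIT_SEQ_POS;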
XML parameter documents

Each of the three XML parameter documents has a name and a version, and is typically associated with one stored procedure. The three types of XML parameter documents are:
v XML input document
v XML output document
v XML message document

The XML input document is passed as input to the stored procedure. The XML output document is returned as output, and the XML message document returns messages. If the structure, attributes, or types in an XML parameter document change, the version of the XML parameter document changes. The version of all three of these documents remains in sync when you call a stored procedure. For example, if you call the GET_SYSTEM_INFO stored procedure and specify the major_version parameter as 1 and the minor_version parameter as 1, the XML input, XML output, and XML message documents will be Version 1.1 documents.

Related information:
v DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond
v Common DTD
v Apache Commons Configuration component
To determine the highest supported document version for a stored procedure, specify NULL for the major_version parameter, the minor_version parameter, and all other required parameters. The stored procedure returns the highest supported document version as values in the major_version and minor_version output parameters, and sets the xml_output and xml_message output parameters to NULL. If you specify non-null values for the major_version and minor_version parameters, you must specify a document version that is supported. If the version is invalid, the stored procedure returns an error (-20457).
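Using the GET_CONFIG stored procedure (described later in this appendix) as an example, such a version probe might look like the following sketch, where the parameter markers stand for host variables and every input is passed as null:

   CALL SYSPROC.GET_CONFIG(?, ?, ?, ?, ?, ?, ?)
   -- major_version    (INOUT): NULL on input; highest major version on output
   -- minor_version    (INOUT): NULL on input; highest minor version on output
   -- requested_locale (IN)   : NULL
   -- xml_input        (IN)   : NULL
   -- xml_filter       (IN)   : NULL
   -- xml_output       (OUT)  : set to NULL for a version probe
   -- xml_message      (OUT)  : set to NULL for a version probe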
If the XML input document in the xml_input parameter specifies the Document Type Major Version and Document Type Minor Version keys, the value for those keys must be equal to the values that you specified in the major_version and minor_version parameters, or an error (+20458) is raised.
The Document Type Name key varies depending on the stored procedure. This example shows an XML input document for the GET_MESSAGE stored procedure. In addition, the values of the Document Type Major Version and Document Type Minor Version keys depend on the values that you specified in the major_version and minor_version parameters for the stored procedure.

If the stored procedure is not running in Complete mode, you must specify the Document Type Name key, the required parameters, and any optional parameters that you want to specify. Specifying the Document Type Major Version and Document Type Minor Version keys is optional. If you specify the Document Type Major Version and Document Type Minor Version keys, the values must be the same as the values that you specified in the major_version and minor_version parameters. You must either specify both or omit both of the Document Type Major Version and Document Type Minor Version keys. Specifying the Document Locale key is optional. If you specify the Document Locale key, the value is ignored.

Important: XML input documents must be encoded in UTF-8 and contain only English characters.
If the stored procedure runs in Complete mode, a complete input document is returned by the xml_output parameter of the stored procedure. The returned XML document is a full XML input document that includes a Document Type and sections for all possible required and optional parameters. The returned XML input document also includes entries for Display Name, Hint, and the Document Locale. Although these entries are not required (and will be ignored) in the XML input document, they are usually needed when rendering the document in a client application. All entries in the returned XML input document can be rendered and changed in ways that are independent of the operating system or data server. Subsequently, the modified XML input document can be passed in the xml_input parameter in a new call to the same stored procedure. This enables you to programmatically create valid xml_input documents.
The Document Type Name key varies depending on the stored procedure. This example shows an XML output document for the GET_CONFIG stored procedure. In addition, the values of the Document Type Major Version and Document Type Minor Version keys depend on the values that you specified in the major_version and minor_version parameters for the stored procedure. Entries in the XML output document are grouped by using nested dictionaries. Each entry in the XML output document describes a single piece of information. In general, an XML output document is comprised of Display Name, Value, and Hint, as shown in the following example:
<key>SQL Domain</key>
<dict>
   <key>Display Name</key>
   <string>SQL Domain</string>
   <key>Value</key>
   <string>...</string>
   <key>Hint</key>
   <string />
</dict>
XML output documents are generated in UTF-8 and contain only English characters.
XPath expressions for filtering output

You can pass an XPath expression in the xml_filter parameter to retrieve a single value from an XML output document. For example, the following XPath expression selects the value of the Data Server Product Version key:

/plist/dict/key[.='Data Server Product Version']/following-sibling::string[1]

The stored procedure returns the string 8.1.5 in the xml_output parameter if the value of the Data Server Product Version is 8.1.5. Therefore, the stored procedure call returns a single value rather than an XML document.
When a stored procedure encounters an SQL warning condition, it returns the corresponding SQL message to the caller. When this occurs, the procedure returns an XML message document in the xml_message parameter that contains additional information about the warning. An XML message document contains key and value pairs followed by details about an SQL warning condition. The general structure of an XML message document is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
   <key>Document Type Name</key>
   <string>Data Server Message</string>
   <key>Document Type Major Version</key><integer>1</integer>
   <key>Document Type Minor Version</key><integer>0</integer>
   <key>Data Server Product Name</key><string>DSN</string>
   <key>Data Server Product Version</key><string>8.1.5</string>
   <key>Data Server Major Version</key><integer>8</integer>
   <key>Data Server Minor Version</key><integer>1</integer>
   <key>Data Server Platform</key><string>z/OS</string>
   <key>Document Locale</key><string>en_US</string>
   --- Details about an SQL warning condition are included here. ---
</dict>
</plist>
The details about an SQL warning will be encapsulated in a dictionary entry, which is comprised of Display Name, Value, and Hint, as shown in the following example:
<key>Short Message Text</key>
<dict>
   <key>Display Name</key><string>Short Message Text</string>
   <key>Value</key>
   <string>DSNA630I DSNADMGC A PARAMETER FORMAT OR CONTENT ERROR WAS
   FOUND. The XML input document must be empty or NULL.</string>
   <key>Hint</key><string />
</dict>
XML message documents are generated in UTF-8 and contain only English characters.
GET_CONFIG

Environment
The GET_CONFIG stored procedure runs in a WLM-established stored procedures address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have EXECUTE privilege on the GET_CONFIG stored procedure.
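For example, a GRANT statement such as the following sketch confers that privilege (the authorization ID ADMTOOL is a hypothetical name for this example):

   GRANT EXECUTE ON PROCEDURE SYSPROC.GET_CONFIG TO ADMTOOL;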
Syntax
CALL SYSPROC.GET_CONFIG
     ( major_version,
       minor_version,
       requested_locale,
       xml_input,
       xml_filter,
       xml_output,
       xml_message )
Option descriptions
major_version
    An input and output parameter of type INTEGER that indicates the major document version. On input, this parameter indicates the major document version that you support for the XML documents that are passed as parameters in the stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid.
    On output, this parameter specifies the highest major document version that is supported by the procedure. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. Currently, the highest major document version that is supported is 2. Major document version 1 is also supported.
    This parameter is used in conjunction with the minor_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL, or non-NULL.

minor_version
    An input and output parameter of type INTEGER that indicates the minor document version. On input, this parameter specifies the minor document version that you support for the XML documents that are passed as parameters for this stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid.
    On output, this parameter indicates the highest minor document version that is supported for the highest supported major version. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. Currently, the highest and only minor document version that is supported is 0 (zero).
    This parameter is used in conjunction with the major_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL, or non-NULL.

requested_locale
    An input parameter of type VARCHAR(33) that specifies a locale. If the specified language is supported on the server, translated content is returned in the xml_output and xml_message parameters. Otherwise, content is returned in the default language. Only the language and possibly the territory information is used from the locale. The locale is not used to format numbers or influence the document encoding. For example, key names are not translated. The only translated portions of XML output and XML message documents are Display Name, Display Unit, and Hint. The value might be globalized where applicable. You should always compare the requested language to the language that is used in the XML output document (see the Document Locale entry in the XML output document).
    Currently, the supported values for requested_locale are en_US and NULL. If you specify a null value, the result is the same as specifying en_US.

xml_input
    An input parameter of type BLOB(2G) that specifies an XML input document of type Data Server Configuration Input in UTF-8 that contains input values for the stored procedure.
    To pass an XML input document to the stored procedure, you must specify the major_version parameter as 2 and the minor_version parameter as 0 (zero).
    For a non-data sharing system, a sample of a Version 2.0 XML input document is as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<plist version="1.0">
<dict>
   <key>Document Type Name</key>
   <string>Data Server Configuration Input</string>
   <key>Document Type Major Version</key>
   <integer>2</integer>
   <key>Document Type Minor Version</key>
   <integer>0</integer>
   <key>Document Locale</key>
   <string>en_US</string>
   <key>Complete</key><false/>
   <key>Optional Parameters</key>
   <dict>
      <key>Include</key>
      <dict>
         <key>Value</key>
         <array>
            <string>DB2 Subsystem Status Information</string>
            <string>DB2 Subsystem Parameters</string>
            <string>DB2 Distributed Access Information</string>
            <string>Active Log Data Set Information</string>
            <string>Time of Last DB2 Restart</string>
            <string>Resource Limit Facility Information</string>
            <string>Connected DB2 Subsystem</string>
         </array>
      </dict>
   </dict>
</dict>
</plist>
For a data sharing system, a sample of a Version 2.0 XML input document is as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<plist version="1.0">
<dict>
   <key>Document Type Name</key>
   <string>Data Server Configuration Input</string>
   <key>Document Type Major Version</key>
   <integer>2</integer>
   <key>Document Type Minor Version</key>
   <integer>0</integer>
   <key>Document Locale</key>
   <string>en_US</string>
   <key>Complete</key><false/>
   <key>Optional Parameters</key>
   <dict>
      <key>Include</key>
      <dict>
         <key>Value</key>
         <array>
            <string>Common Data Sharing Group Information</string>
            <string>DB2 Subsystem Status Information</string>
            <string>DB2 Subsystem Parameters</string>
            <string>DB2 Distributed Access Information</string>
            <string>Active Log Data Set Information</string>
            <string>Time of Last DB2 Restart</string>
            <string>Resource Limit Facility Information</string>
            <string>Connected DB2 Subsystem</string>
         </array>
      </dict>
      <key>DB2 Data Sharing Group Members</key>
      <dict>
         <key>Value</key>
         <array>
            <string>DB2A</string>
            <string>DB2B</string>
         </array>
      </dict>
   </dict>
</dict>
</plist>
When passing an XML input document to the stored procedure, you must specify the Document Type Name key. In a non-data sharing system, you must specify the Include parameter. In a data sharing system, you must specify at least one of the following parameters:
v Include
v DB2 Data Sharing Group Members

If no XML input document is passed to the stored procedure, and you specified the major_version parameter as 2 and the minor_version parameter as 0 (zero), the stored procedure returns the following parameters for a non-data sharing system in a Version 2.0 XML output document by default:
v DB2 Subsystem Status Information
v DB2 Subsystem Parameters
v DB2 Distributed Access Information
v Active Log Data Set Information
v Time of Last DB2 Restart
v Resource Limit Facility Information
v Connected DB2 Subsystem
For a data sharing system, the same information is returned for each member of a data sharing group, plus the Common Data Sharing Group Information parameter.
If you passed a Version 2.0 XML input document to the stored procedure, the stored procedure returns the information in a Version 2.0 XML output document. The information returned depends on what you specified in the Include array and in the DB2 Data Sharing Group Members array (if applicable).

For a non-data sharing system, the items that are specified in the Include array are returned. For a data sharing system, the following information is returned:
v The items that are specified in the Include array for each DB2 member that is specified in the DB2 Data Sharing Group Members array, if both the Include parameter and the DB2 Data Sharing Group Members parameter are specified.
v The items that are specified in the Include array for every DB2 member in the data sharing group, if only the Include parameter is specified.
v The Common Data Sharing Group Information and the following items for each member that is specified in the DB2 Data Sharing Group Members array, if only the DB2 Data Sharing Group Members parameter is specified:
  - DB2 Subsystem Status Information
  - DB2 Subsystem Parameters
  - DB2 Distributed Access Information
  - Active Log Data Set Information
  - Time of Last DB2 Restart
  - Resource Limit Facility Information
  - Connected DB2 Subsystem

Note: If the Common Data Sharing Group Information item is specified in the Include array, this information is returned only once for the data sharing group. This information is not returned repeatedly for every DB2 member that is processed.

Complete mode: For an example of a Version 2.0 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode in a non-data sharing system, see Example 4 in the Examples section. For an example of a Version 2.0 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode in a data sharing system with two DB2 members, DB2A and DB2B, see Example 5.

xml_filter
    An input parameter of type BLOB(4K) in UTF-8 that specifies a valid XPath query string. Use a filter when you want to retrieve a single value from an XML output document. For more information, see XPath expressions for filtering output on page 1390.
    The following example selects the value for the Data Server Product Version from the XML output document:
/plist/dict/key[.='Data Server Product Version']/following-sibling::string[1]
    If the key is not followed by the specified sibling, an error is returned.

xml_output
    An output parameter of type BLOB(2G) that returns a complete XML output document of type Data Server Configuration Output in UTF-8. If a filter is specified, this parameter returns a string value.
    If the stored procedure is unable to return a complete output document (for example, if a processing error occurs that results in an SQL warning or error), this parameter is set to NULL.
    The xml_output parameter can return either a Version 1.0 or Version 2.0 XML output document depending on the major_version and minor_version parameters that you specify. For information about the content of a Version 2.0 XML output document, see the option description for the xml_input parameter.
    For a sample Version 1.0 XML output document, see Example 1 in the Examples section. For a sample Version 2.0 XML output document in a non-data sharing system, see Example 6. For a sample Version 2.0 XML output document in a data sharing system, see Example 7.

xml_message
    An output parameter of type BLOB(64K) that returns a complete XML output document of type Data Server Message in UTF-8 that provides detailed information about an SQL warning condition. This document is returned when a call to the stored procedure results in an SQL warning, and the warning message indicates that additional information is returned in the XML message output document. If the warning message does not indicate that additional information is returned, then this parameter is set to NULL.
    The xml_message parameter can return either a Version 1.0 or Version 2.0 XML message document depending on the major_version and minor_version parameters that you specify. For an example of an XML message document, see Example 2.
    If the GET_CONFIG stored procedure is processing more than one DB2 member in a data sharing system and an error is encountered when processing one of the DB2 members, the stored procedure specifies the name of the DB2 member that is causing the error as the value of the DB2 Object key in the XML message document. The value of the Short Message Text key applies to the DB2 member that is specified. The following example shows a fragment of a Version 2.0 XML message document with the DB2 Object key specified:
<key>Short Message Text</key>
<dict>
   <key>Display Name</key>
   <string>Short Message Text</string>
   <key>Value</key>
   <string>DSNA6xxI DSNADMGC .....</string>
   <key>DB2 Object</key>
   <string>DB2B</string>
   <key>Hint</key>
   <string />
</dict>
Examples
Example 1: The following example shows a fragment of a Version 1.0 XML output document for the GET_CONFIG stored procedure for a data sharing member. For a non-data sharing member, the following entries in the DB2 Distributed Access Information item are not included: Resynchronization Domain, Alias List, Member IPv4 Address, and Location Server List.
The two major sections that the XML output document always contains are Common Data Sharing Group Information and DB2 Subsystem Specific Information. In this example, an ellipsis (. . .) represents a dictionary entry that is comprised of Display Name, Value, and Hint, such as:
<dict>
   <key>Display Name</key>
   <string>DDF Status</string>
   <key>Value</key>
   <string>STARTD</string>
   <key>Hint</key>
   <string />
</dict>

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
   <key>Document Type Name</key>
   <string>Data Server Configuration Output</string>
   <key>Document Type Major Version</key>
   <integer>1</integer>
   <key>Document Type Minor Version</key>
   <integer>0</integer>
   <key>Data Server Product Name</key>
   <string>DSN</string>
   <key>Data Server Product Version</key>
   <string>8.1.5</string>
   <key>Data Server Major Version</key>
   <integer>8</integer>
   <key>Data Server Minor Version</key>
   <integer>1</integer>
   <key>Data Server Platform</key>
   <string>z/OS</string>
   <key>Document Locale</key>
   <string>en_US</string>
   <key>Common Data Sharing Group Information</key>
   <dict>
      <key>Display Name</key>
      <string>Common Data Sharing Group Information</string>
      <key>Data Sharing Group Name</key>
      ...
      <key>Data Sharing Group Level</key>
      ...
      <key>Data Sharing Group Mode</key>
      ...
      <key>Data Sharing Group Protocol Level</key>
      ...
      <key>Data Sharing Group Attach Name</key>
      ...
      <key>SCA Structure Size</key>
      ...
      <key>SCA Status</key>
      ...
      <key>SCA in Use</key>
      ...
      <key>LOCK1 Structure Size</key>
      ...
      <key>Number of Lock Entries</key>
      ...
      <key>Number of List Entries</key>
      ...
      <key>List Entries in Use</key>
      ...
      <key>Hint</key><string></string>
   </dict>
   <key>DB2 Subsystem Specific Information</key>
   <dict>
      <key>Display Name</key>
      <string>DB2 Subsystem Specific Information</string>
      <key>V81A</key>
      <dict>
         <key>Display Name</key>
         <string>V81A</string>
         <key>DB2 Subsystem Status Information</key>
         <dict>
            <key>Display Name</key>
            <string>DB2 Subsystem Status Information</string>
            <key>DB2 Member Identifier</key>
            ...
            <key>DB2 Member Name</key>
            ...
            <key>DB2 Command Prefix</key>
            ...
            <key>DB2 Status</key>
            ...
            <key>DB2 System Level</key>
            ...
            <key>System Name</key>
            ...
            <key>IRLM Subsystem Name</key>
            ...
            <key>IRLM Procedure Name</key>
            ...
            <key>Parallel Coordinator</key>
            ...
            <key>Parallel Assistant</key>
            ...
            <key>Hint</key><string></string>
         </dict>
         <key>DB2 Subsystem Parameters</key>
         <dict>
            <key>Display Name</key>
            <string>DB2 Subsystem Parameters</string>
            <key>DSNHDECP</key>
            <dict>
               <key>Display Name</key>
               <string>DSNHDECP</string>
               <key>AGCCSID</key>
               <dict>
                  <key>Display Name</key>
                  <string>AGCCSID</string>
                  <key>Installation Panel Name</key>
                  ...
                  <key>Installation Panel Field Name</key>
                  ...
                  <key>Location on Installation Panel</key>
                  ...
                  <key>Subsystem Parameter Value</key>
                  ...
                  <key>Hint</key><string></string>
               </dict>
               --- This is only a fragment of the DSNHDECP parameters that
                   are returned by the GET_CONFIG stored procedure. ---
               <key>Hint</key><string></string>
            </dict>
            --- This is only a fragment of the DB2 subsystem parameters that
                are returned by the GET_CONFIG stored procedure. ---
            <key>Hint</key><string></string>
         </dict>
         <key>DB2 Distributed Access Information</key>
         <dict>
            <key>Display Name</key>
            <string>DB2 Distributed Access Information</string>
            <key>DDF Status</key>
            ...
            <key>Location Name</key>
            ...
            <key>LU Name</key>
            ...
            <key>Generic LU Name</key>
            ...
            <key>TCP/IP Port</key>
            ...
            <key>Resynchronization Port</key>
            ...
            <key>IPv4 Address</key>
            ...
            <key>SQL Domain</key>
            ...
            <key>Resynchronization Domain</key>
            ...
            <key>Alias List</key>
            <dict>
               <key>Display Name</key>
               <string>Alias List</string>
               <key>1</key>
               <dict>
                  <key>Display Name</key>
                  <string>1</string>
                  <key>Name</key>
                  ...
                  <key>Port</key>
                  ...
                  <key>Hint</key><string />
               </dict>
               <key>2</key>
               <dict>
                  <key>Display Name</key>
                  <string>2</string>
                  <key>Name</key>
                  ...
                  <key>Port</key>
                  ...
                  <key>Hint</key><string />
               </dict>
               <key>Hint</key><string />
            </dict>
            <key>Member IPv4 Address</key>
            ...
            <key>DT - DDF Thread Value</key>
            ...
            <key>CONDBAT - Maximum Inbound Connections</key>
            ...
            <key>MDBAT - Maximum Concurrent Active DBATs</key>
            ...
            <key>ADBAT - Active DBATs</key>
            ...
            <key>QUEDBAT - Times that ADBAT Reached MDBAT Limit</key>
            ...
            <key>INADBAT - Inactive DBATs (Type 1)</key>
            ...
            <key>CONQUED - Queued Connections</key>
            ...
            <key>DSCDBAT - Pooled DBATs</key>
            ...
            <key>INACONN - Inactive Connections (Type 2)</key>
            ...
            <key>Location Server List</key>
            <dict>
               <key>Display Name</key>
               <string>Location Server List</string>
               <key>1</key>
               <dict>
                  <key>Display Name</key>
                  <string>1</string>
                  <key>Weight</key>
                  ...
                  <key>IPv4 Address</key>
                  ...
                  <key>Hint</key><string />
               </dict>
               <key>2</key>
               <dict>
                  <key>Display Name</key>
                  <string>2</string>
                  <key>Weight</key>
                  ...
                  <key>IPv4 Address</key>
                  ...
                  <key>Hint</key><string />
               </dict>
               <key>Hint</key><string></string>
            </dict>
            <key>Hint</key><string></string>
         </dict>
         <key>Active Log Data Set Information</key>
         <dict>
            <key>Display Name</key>
            <string>Active Log Data Set Information</string>
            <key>Active Log Copy 01</key>
            <dict>
               <key>Display Name</key>
               <string>Active Log Copy 01</string>
               <key>Data Set Name</key>
               ...
               <key>Data Set Volumes</key>
               <dict>
                  <key>Display Name</key>
                  <string>Data Set Volumes</string>
                  <key>Value</key>
                  <array>
                     <string>CATLGJ</string>
                  </array>
                  <key>Hint</key><string></string>
               </dict>
               <key>Hint</key><string></string>
            </dict>
            <key>Active Log Copy 02</key>
            <dict>
               --- The format of this dictionary entry is the same as that
                   of Active Log Copy 01. ---
            </dict>
            <key>Hint</key><string></string>
         </dict>
         <key>Time of Last DB2 Restart</key>
         ...
         <key>Resource Limit Facility Information</key>
         <dict>
            <key>Display Name</key>
            <string>Resource Limit Facility Information</string>
            <key>RLF Table Names</key>
            <dict>
               <key>Display Name</key>
               <string>RLF Table Names</string>
               <key>Value</key>
               <array>
                  <string>SYSADM.DSNRLST01</string>
               </array>
               <key>Hint</key><string></string>
            </dict>
            <key>Hint</key><string></string>
         </dict>
         <key>Connected DB2 Subsystem</key>
         ...
         <key>Hint</key><string></string>
      </dict>
      <key>Hint</key><string></string>
   </dict>
   <key>Hint</key><string></string>
</dict>
</plist>
Example 2: The following example shows a sample XML message document for the GET_CONFIG stored procedure. Similar to an XML output document, the details about an SQL warning condition are encapsulated in a dictionary entry, which is comprised of Display Name, Value, and Hint.
<?xml version="1.0" encoding="UTF-8" ?>
<plist version="1.0">
<dict>
   <key>Document Type Name</key><string>Data Server Message</string>
   <key>Document Type Major Version</key><integer>1</integer>
   <key>Document Type Minor Version</key><integer>0</integer>
   <key>Data Server Product Name</key><string>DSN</string>
   <key>Data Server Product Version</key><string>8.1.5</string>
   <key>Data Server Major Version</key><integer>8</integer>
   <key>Data Server Minor Version</key><integer>1</integer>
   <key>Data Server Platform</key><string>z/OS</string>
   <key>Document Locale</key><string>en_US</string>
   <key>Short Message Text</key>
   <dict>
      <key>Display Name</key><string>Short Message Text</string>
      <key>Value</key>
      <string>DSNA630I DSNADMGC A PARAMETER FORMAT OR CONTENT ERROR WAS
      FOUND. The XML input document must be empty or NULL.</string>
      <key>Hint</key><string />
   </dict>
</dict>
</plist>
Example 3: This example shows a simple and static Java program that calls the GET_CONFIG stored procedure with an XPath that queries the value of the data server's IP address. The XPath is statically created as a string object by the program, and then converted to a BLOB to serve as input for the xml_filter parameter. After the stored procedure is called, the xml_output parameter contains only a single string and no XML document. This output is materialized as a file called xml_output.xml that is in the same directory where the GetConfDriver class resides.
//***************************************************************************
// Licensed Materials - Property of IBM
// 5625-DB2
// (C) COPYRIGHT 1982, 2006 IBM Corp. All Rights Reserved.
//
// STATUS = Version 8
//***************************************************************************
// Source file name: GetConfDriver.java
//
// Sample: How to call SYSPROC.GET_CONFIG with a valid XPath to extract the
//         IP Address.
//
// The user runs the program by issuing:
// java GetConfDriver <alias or //server/database> <userid> <password>
//
// The arguments are:
// <alias> - DB2 subsystem alias for type 2 or //server/database for type 4
//           connectivity
// <userid> - user ID to connect as
// <password> - password to connect with
//***************************************************************************
import java.io.*;
import java.sql.*;

public class GetConfDriver
{
  public static void main (String[] args)
  {
    Connection con = null;
    CallableStatement cstmt = null;
    String driver = "com.ibm.db2.jcc.DB2Driver";
    String url = "jdbc:db2:";
    String userid = null;
    String password = null;

    // Parse arguments
    if (args.length != 3)
    {
      System.err.println("Usage: GetConfDriver <alias or //server/database> <userid> <password>");
      System.err.println("where <alias or //server/database> is DB2 subsystem alias or //server/database for type 4 connectivity");
      System.err.println("      <userid> is user ID to connect as");
      System.err.println("      <password> is password to connect with");
      return;
    }

    url += args[0];
    userid = args[1];
    password = args[2];

    try
    {
      byte[] xml_input;
      String str_xmlfilter = new String(
        "/plist/dict/key[.='DB2 Subsystem Specific Information']/following-sibling::dict[1]" +
        "/key[.='V91A']/following-sibling::dict[1]" +
        "/key[.='DB2 Distributed Access Information']/following-sibling::dict[1]" +
        "/key[.='IP Address']/following-sibling::dict[1]" +
        "/key[.='Value']/following-sibling::string[1]");

      /* Convert XML_FILTER to byte array to pass as BLOB */
      byte[] xml_filter = str_xmlfilter.getBytes("UTF-8");

      // Load the DB2 Universal JDBC Driver
      Class.forName(driver);
      // Connect to database
      con = DriverManager.getConnection(url, userid, password);
      con.setAutoCommit(false);

      cstmt = con.prepareCall("CALL SYSPROC.GET_CONFIG(?,?,?,?,?,?,?)");

      // Major / Minor Version / Requested Locale
      cstmt.setInt(1, 1);
      cstmt.setInt(2, 0);
      cstmt.setString(3, "en_US");

      // No input document
      cstmt.setObject(4, null, Types.BLOB);
      cstmt.setObject(5, xml_filter, Types.BLOB);

      // Output Parms
      cstmt.registerOutParameter(1, Types.INTEGER);
      cstmt.registerOutParameter(2, Types.INTEGER);
      cstmt.registerOutParameter(6, Types.BLOB);
      cstmt.registerOutParameter(7, Types.BLOB);

      cstmt.execute();
      con.commit();

      SQLWarning ctstmt_warning = cstmt.getWarnings();
      if (ctstmt_warning != null) {
        System.out.println("SQL Warning: " + ctstmt_warning.getMessage());
      }
      else {
        System.out.println("SQL Warning: None\r\n");
      }

      System.out.println("Major Version returned " + cstmt.getInt(1) );
      System.out.println("Minor Version returned " + cstmt.getInt(2) );

      /* get output BLOBs */
      Blob b_out = cstmt.getBlob(6);
      if(b_out != null)
      {
        int out_length = (int)b_out.length();
        byte[] bxml_output = new byte[out_length];

        /* open an inputstream on BLOB data */
        InputStream instr_out = b_out.getBinaryStream();

        /* copy from inputstream into byte array */
        int out_len = instr_out.read(bxml_output, 0, out_length);

        /* write byte array content into FileOutputStream */
        FileOutputStream fxml_out = new FileOutputStream("xml_output.xml");
        fxml_out.write(bxml_output, 0, out_length );

        //Close streams
        instr_out.close();
        fxml_out.close();
      }

      Blob b_msg = cstmt.getBlob(7);
      if(b_msg != null)
      {
        int msg_length = (int)b_msg.length();
        byte[] bxml_message = new byte[msg_length];

        /* open an inputstream on BLOB data */
        InputStream instr_msg = b_msg.getBinaryStream();

        /* copy from inputstream into byte array */
        int msg_len = instr_msg.read(bxml_message, 0, msg_length);

        /* write byte array content into FileOutputStream */
        FileOutputStream fxml_msg = new FileOutputStream(new File("xml_message.xml"));
        fxml_msg.write(bxml_message, 0, msg_length);

        //Close streams
        instr_msg.close();
        fxml_msg.close();
      }
    }
    catch (SQLException sqle)
    {
      System.out.println("Error during CALL " +
        " SQLSTATE = " + sqle.getSQLState() +
        " SQLCODE = " + sqle.getErrorCode() +
        " : " + sqle.getMessage());
    }
    catch (Exception e)
    {
      System.out.println("Internal Error " + e.toString());
    }
    finally
    {
      if(cstmt != null)
        try { cstmt.close(); } catch ( SQLException sqle) { sqle.printStackTrace(); }
      if(con != null)
        try { con.close(); } catch ( SQLException sqle) { sqle.printStackTrace(); }
    }
  }
}
Example 4: The following example shows a Version 2.0 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode in a non-data sharing system:
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Configuration Input</string> <key>Document Type Major Version</key> <integer>2</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Document Locale</key> <string>en_US</string> <key>Optional Parameters</key> <dict> <key>Display Name</key> <string>Optional Parameters</string> <key>Include</key> <dict> <key>Display Name</key> <string>Include</string> <key>Value</key> <array> <string>DB2 Subsystem Status Information</string> <string>DB2 Subsystem Parameters</string> <string>DB2 Distributed Access Information</string> <string>Active Log Data Set Information</string> <string>Time of Last DB2 Restart</string> <string>Resource Limit Facility Information</string>
<string>Connected DB2 Subsystem</string> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 5: The following example shows a Version 2.0 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode in a data sharing system with two DB2 members, DB2A and DB2B:
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Configuration Input</string> <key>Document Type Major Version</key> <integer>2</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Document Locale</key> <string>en_US</string> <key>Optional Parameters</key> <dict> <key>Display Name</key> <string>Optional Parameters</string> <key>Include</key> <dict> <key>Display Name</key> <string>Include</string> <key>Value</key> <array> <string>Common Data Sharing Group Information</string> <string>DB2 Subsystem Status Information</string> <string>DB2 Subsystem Parameters</string> <string>DB2 Distributed Access Information</string> <string>Active Log Data Set Information</string> <string>Time of Last DB2 Restart</string> <string>Resource Limit Facility Information</string> <string>Connected DB2 Subsystem</string> </array> <key>Hint</key><string /> </dict> <key>DB2 Data Sharing Group Members</key> <dict> <key>Display Name</key> <string>DB2 Data Sharing Group Members</string> <key>Value</key> <array> <string>DB2A</string> <string>DB2B</string> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 6: This example shows a fragment of a Version 2.0 XML output document for the GET_CONFIG stored procedure in a non-data sharing system. An XML input document is not passed to the stored procedure. Each ellipsis (. . .) represents a
dictionary entry that is comprised of Display Name, Value, and Hint, as in the following example, or an entry that is the same as the corresponding entry in a Version 1.0 XML output document:
<dict> <key>Display Name</key> <string>DDF Status</string> <key>Value</key> <string>STARTD</string> <key>Hint</key> <string /> </dict> <?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Configuration Output</string> <key>Document Type Major Version</key> <integer>2</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Data Server Product Name</key> <string>DSN</string> <key>Data Server Product Version</key> <string>8.1.5</string> <key>Data Server Major Version</key> <integer>8</integer> <key>Data Server Minor Version</key> <integer>1</integer> <key>Data Server Platform</key> <string>z/OS</string> <key>Document Locale</key> <string>en_US</string> <key>DB2 Subsystem Specific Information</key> <dict> <key>Display Name</key> <string>DB2 Subsystem Specific Information</string> <key>V81A</key> <dict> <key>Display Name</key> <string>V81A</string> <key>DB2 Subsystem Status Information</key> <dict>...</dict> <key>DB2 Subsystem Parameters</key> <dict>...</dict> <key>DB2 Distributed Access Information</key> <dict> <key>Display Name</key> <string>DB2 Distributed Access Information</string> <key>DDF Status</key> ... <key>Location Name</key> ... <key>LU Name</key> ... <key>Generic LU Name</key> ... <key>TCP/IP Port</key> ... <key>Resynchronization Port</key> ... <key>IPv4 Address</key> ... <key>SQL Domain</key> ... <key>DT - DDF Thread Value</key> ... <key>CONDBAT - Maximum Inbound Connections</key> ... <key>MDBAT - Maximum Concurrent Active DBATs</key> ... <key>ADBAT - Active DBATs</key> ... <key>QUEDBAT - Times that ADBAT Reached MDBAT Limit</key> ... <key>INADBAT - Inactive DBATs (Type 1)</key> ... <key>CONQUED - Queued Connections</key> ... <key>DSCDBAT - Pooled DBATs</key> ... <key>INACONN - Inactive Connections (Type 2)</key> ... <key>Hint</key><string></string>
</dict> <key>Active Log Data Set Information</key> <dict>...</dict> <key>Time of Last DB2 Restart</key> <dict>...</dict> <key>Resource Limit Facility Information</key> <dict> <key>Display Name</key> <string>Resource Limit Facility Information</string> <key>RLF Status</key> <dict> <key>Display Name</key> <string>RLF Status</string> <key>Value</key><string>Active</string> <key>Hint</key><string /> </dict> <key>RLF Table Names</key> <dict> <key>Display Name</key> <string>RLF Table Names</string> <key>Value</key> <array> <string>SYSADM.DSNRLST01</string> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Connected DB2 Subsystem</key> <dict>...</dict> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 7: This example shows a fragment of a Version 2.0 XML output document for the GET_CONFIG stored procedure in a data sharing system with two DB2 members, DB2A and DB2B. An XML input document is not passed to the stored procedure. Each ellipsis (. . .) represents a dictionary entry that is comprised of Display Name, Value, and Hint, as in the following example, or an entry that is the same as the corresponding entry in a Version 1.0 XML output document:
<dict> <key>Display Name</key> <string>DDF Status</string> <key>Value</key> <string>STARTD</string> <key>Hint</key> <string /> </dict> <?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Configuration Output</string> <key>Document Type Major Version</key> <integer>2</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Data Server Product Name</key> <string>DSN</string> <key>Data Server Product Version</key> <string>8.1.5</string> <key>Data Server Major Version</key>
<integer>8</integer> <key>Data Server Minor Version</key> <integer>1</integer> <key>Data Server Platform</key> <string>z/OS</string> <key>Document Locale</key> <string>en_US</string> <key>Common Data Sharing Group Information</key> <dict> <key>Display Name</key> <string>Common Data Sharing Group Information</string> <key>Data Sharing Group Name</key> <dict>...</dict> <key>Data Sharing Group Level</key> <dict>...</dict> <key>Data Sharing Group Mode</key> <dict>...</dict> <key>Data Sharing Group Protocol Level</key> <dict>...</dict> <key>Data Sharing Group Attach Name</key> <dict>...</dict> <key>SCA Structure Size</key> <dict>...</dict> <key>SCA Status</key> <dict>...</dict> <key>SCA in Use</key> <dict>...</dict> <key>LOCK1 Structure Size</key> <dict>...</dict> <key>Number of Lock Entries</key> <dict>...</dict> <key>Number of List Entries</key> <dict>...</dict> <key>List Entries in Use</key> <dict>...</dict> <key>Hint</key><string /> </dict> <key>DB2 Subsystem Specific Information</key> <dict> <key>Display Name</key> <string>DB2 Subsystem Specific Information</string> <key>DB2A</key> <dict> <key>Display Name</key> <string>DB2A</string> <key>DB2 Subsystem Status Information</key> <dict>...</dict> <key>DB2 Subsystem Parameters</key> <dict>...</dict> <key>DB2 Distributed Access Information</key> <dict> <key>Display Name</key> <string>DB2 Distributed Access Information</string> <key>DDF Status</key> ... <key>Location Name</key> ... <key>LU Name</key> ... <key>Generic LU Name</key> ... <key>TCP/IP Port</key> ... <key>Resynchronization Port</key> ... <key>IPv4 Address</key> ... <key>SQL Domain</key> ... <key>Resynchronization Domain</key> ... <key>Alias List</key> <dict> <key>Display Name</key> <string>Alias List</string> <key>1</key>
<dict> <key>Display Name</key> <string>1</string> <key>Name</key> ... <key>Port</key> ... <key>Hint</key><string /> </dict> <key>2</key> <dict> <key>Display Name</key> <string>2</string> <key>Name</key> ... <key>Port</key> ... <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Member IPv4 Address</key> ... <key>DT - DDF Thread Value</key> ... <key>CONDBAT - Maximum Inbound Connections</key> ... <key>MDBAT - Maximum Concurrent Active DBATs</key> ... <key>ADBAT - Active DBATs</key> ... <key>QUEDBAT - Times that ADBAT Reached MDBAT Limit</key> ... <key>INADBAT - Inactive DBATs (Type 1)</key> ... <key>CONQUED - Queued Connections</key> ... <key>DSCDBAT - Pooled DBATs</key> ... <key>INACONN - Inactive Connections (Type 2)</key> ... <key>Location Server List</key> ... <dict> <key>Display Name</key> <string>Location Server List</string> <key>1</key> <dict> <key>Display Name</key> <string>1</string> <key>Weight</key> ... <key>IPv4 Address</key> ... <key>Hint</key><string /> </dict> <key>2</key> <dict> <key>Display Name</key> <string>2</string> <key>Weight</key> ... <key>IPv4 Address</key> ... <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Hint</key><string></string> </dict> <key>Active Log Data Set Information</key> <dict>...</dict> <key>Time of Last DB2 Restart</key> <dict>...</dict> <key>Resource Limit Facility Information</key> <dict> <key>Display Name</key> <string>Resource Limit Facility Information</string> <key>RLF Status</key> <dict> <key>Display Name</key> <string>RLF Status</string> <key>Value</key><string>Active</string> <key>Hint</key><string /> </dict> <key>RLF Table Names</key>
<dict> <key>Display Name</key> <string>RLF Table Names</string> <key>Value</key> <array> <string>SYSADM.DSNRLST01</string> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Connected DB2 Subsystem</key> <dict>...</dict> <key>Hint</key><string /> </dict> <key>DB2B</key> <dict> <!-- This dictionary entry describes the second DB2 member, DB2B. Its format is the same as that of member DB2A. --> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
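The xml_filter parameter can retrieve any single value from these output documents. The following filter is an illustrative sketch, not one of the original samples: it assumes the Version 2.0 output structure and the subsystem key V81A exactly as shown in Example 6, and it selects the DDF status value. Any other leaf entry in the example output can be reached with the same key/following-sibling pattern:

/plist/dict/key[.='DB2 Subsystem Specific Information']/following-sibling::dict[1]
/key[.='V81A']/following-sibling::dict[1]
/key[.='DB2 Distributed Access Information']/following-sibling::dict[1]
/key[.='DDF Status']/following-sibling::dict[1]
/key[.='Value']/following-sibling::string[1]

When such a filter is passed in xml_filter, the xml_output parameter returns only the selected string (STARTD in the fragment shown in Example 6), not an XML document.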
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have EXECUTE privilege on the GET_MESSAGE stored procedure.
Syntax
The SQL CALL statement for invoking this stored procedure has the following form:

CALL SYSPROC.GET_MESSAGE ( major_version, minor_version, requested_locale, xml_input, xml_filter, xml_output, xml_message )
Option descriptions
major_version
An input and output parameter of type INTEGER that indicates the major document version. On input, this parameter indicates the major document version that you support for the XML documents that are passed as parameters
in the stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid. On output, this parameter specifies the highest major document version that is supported by the stored procedure. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. Currently, the highest and only major document version that is supported is 1.
If the XML document in the xml_input parameter specifies the Document Type Major Version key, the value for that key must be equal to the value provided in the major_version parameter, or an error (+20458) is raised.
This parameter is used in conjunction with the minor_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL or non-NULL.

minor_version
An input and output parameter of type INTEGER that indicates the minor document version. On input, this parameter specifies the minor document version that you support for the XML documents that are passed as parameters for this stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid. On output, this parameter indicates the highest minor document version that is supported for the highest supported major version. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. Currently, the highest and only minor document version that is supported is 0 (zero).
If the XML document in the xml_input parameter specifies the Document Type Minor Version key, the value for that key must be equal to the value provided in the minor_version parameter, or an error (+20458) is raised.
This parameter is used in conjunction with the major_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL or non-NULL.

requested_locale
An input parameter of type VARCHAR(33) that specifies a locale. If the specified language is supported on the server, translated content is returned in the xml_output and xml_message parameters. Otherwise, content is returned in the default language. Only the language and possibly the territory information is used from the locale. The locale is not used to format numbers or influence the document encoding. For example, key names are not translated. The only translated portions of the XML output and XML message documents are Display Name, Display Unit, and Hint. The value might be globalized where applicable. You should always compare the requested language to the language that is used in the XML output document (see the Document Locale entry in the XML output document).
Currently, the supported values for requested_locale are en_US and NULL. If you specify a null value, the result is the same as specifying en_US.

xml_input
An input parameter of type BLOB(2G) that specifies an XML input document of type Data Server Message Input in UTF-8 that contains input values for the stored procedure.
For an example of an XML input document that does not use Complete mode, see Example 2 in the Examples section.
Complete mode: For an example of the XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode, see Example 1 in the Examples section.

xml_filter
An input parameter of type BLOB(4K) in UTF-8 that specifies a valid XPath query string. Use a filter when you want to retrieve a single value from an XML output document. For more information, see XPath expressions for filtering output on page 1390.
The following example selects the value for the short message text from the XML output document:
/plist/dict/key[.='Short Message Text']/following-sibling::dict[1]
/key[.='Value']/following-sibling::string[1]
If the key is not followed by the specified sibling, an error is returned.

xml_output
An output parameter of type BLOB(2G) that returns a complete XML output document of type Data Server Message Output in UTF-8. If a filter is specified, this parameter returns a string value. If the stored procedure is unable to return a complete output document (for example, if a processing error occurs that results in an SQL warning or error), this parameter is set to NULL. For an example of an XML output document, see Example 3.

xml_message
An output parameter of type BLOB(64K) that returns a complete XML output document of type Data Server Message in UTF-8 that provides detailed information about an SQL warning condition. This document is returned when a call to the procedure results in an SQL warning, and the warning message
indicates that additional information is returned in the XML message output document. If the warning message does not indicate that additional information is returned, then this parameter is set to NULL. For an example of an XML message document, see Example 4.
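Before the examples, note that the version-discovery call that the major_version and minor_version descriptions mention can be coded compactly. The following is a minimal sketch, under the assumption that con is an open JDBC connection obtained as in the sample programs in this appendix; it passes NULL for every input parameter so that GET_MESSAGE reports the highest document version that it supports:

// Sketch: discover the highest supported document version (currently 1.0).
// Assumes java.sql.* is imported, con is an open connection to DB2 for z/OS,
// and this fragment runs inside a try block that handles SQLException,
// as in the samples.
CallableStatement vstmt = con.prepareCall("CALL SYSPROC.GET_MESSAGE(?,?,?,?,?,?,?)");
vstmt.setNull(1, Types.INTEGER);   // major_version (INOUT)
vstmt.setNull(2, Types.INTEGER);   // minor_version (INOUT)
vstmt.setNull(3, Types.VARCHAR);   // requested_locale
vstmt.setNull(4, Types.BLOB);      // xml_input
vstmt.setNull(5, Types.BLOB);      // xml_filter
vstmt.registerOutParameter(1, Types.INTEGER);
vstmt.registerOutParameter(2, Types.INTEGER);
vstmt.registerOutParameter(6, Types.BLOB);   // xml_output
vstmt.registerOutParameter(7, Types.BLOB);   // xml_message
vstmt.execute();
System.out.println("Highest supported document version: " + vstmt.getInt(1) + "." + vstmt.getInt(2));
vstmt.close();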
Examples
Example 1: The following example shows an XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode.
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Message Input</string> <key>Document Type Major Version</key> <integer>1</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Document Locale</key> <string>en_US</string> <key>Required Parameters</key> <dict> <key>Display Name</key> <string>Required Parameters</string> <key>SQLCODE</key> <dict> <key>Display Name</key> <string>SQLCODE</string> <key>Value</key> <integer /> <key>Hint</key> <string /> </dict> <key>Hint</key> <string /> </dict> <key>Optional Parameters</key> <dict> <key>Display Name</key> <string>Optional Parameters</string> <key>Message Tokens</key> <dict> <key>Display Name</key> <string>Message Tokens</string> <key>Value</key> <array> <string /> </array> <key>Hint</key> <string /> </dict> <key>Hint</key> <string /> </dict> </dict> </plist>
Example 2: The following example shows a complete sample of an XML input document for the GET_MESSAGE stored procedure.
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict>
<key>Document Type Name</key> <string>Data Server Message Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Document Locale</key><string>en_US</string> <key>Required Parameters</key> <dict> <key>SQLCODE</key> <dict> <key>Value</key><integer>-104</integer> </dict> </dict> <key>Optional Parameters</key> <dict> <key>Message Tokens</key> <dict> <key>Value</key> <array> <string>X</string> <string>( . LIKE AS</string> </array> </dict> </dict> </dict> </plist>
Example 3: The following example shows a complete sample of an XML output document for the GET_MESSAGE stored procedure. The short message text for an SQLCODE will be encapsulated in a dictionary entry, which is comprised of Display Name, Value, and Hint.
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Message Output</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Data Server Product Name</key><string>DSN</string> <key>Data Server Product Version</key><string>8.1.5</string> <key>Data Server Major Version</key><integer>8</integer> <key>Data Server Minor Version</key><integer>1</integer> <key>Data Server Platform</key><string>z/OS</string> <key>Document Locale</key><string>en_US</string> <key>Short Message Text</key> <dict> <key>Display Name</key><string>Short Message Text</string> <key>Hint</key><string /> </dict> </dict> </plist>
Example 4: The following example shows a sample XML message document for the GET_MESSAGE stored procedure. Similar to an XML output document, the details about an SQL warning condition will be encapsulated in a dictionary entry, which is comprised of Display Name, Value, and Hint.
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server Message</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Data Server Product Name</key><string>DSN</string>
<key>Data Server Product Version</key><string>8.1.5</string> <key>Data Server Major Version</key><integer>8</integer> <key>Data Server Minor Version</key><integer>1</integer> <key>Data Server Platform</key><string>z/OS</string> <key>Document Locale</key><string>en_US</string> <key>Short Message Text</key> <dict> <key>Display Name</key><string>Short Message Text</string> <key>Value</key> <string>DSNA630I DSNADMGM A PARAMETER FORMAT OR CONTENT ERROR WAS FOUND. The value for key 'Document Type Minor Version' is '2'. It does not match the value '0', which was specified for parameter 2 of the stored procedure. Both values must be equal.</string> <key>Hint</key><string /> </dict> </dict> </plist>
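The sample program in Example 5 writes this document to a file named xml_message.xml. The detection pattern, distilled into a short sketch (assuming cstmt is the executed CallableStatement and the fragment runs inside the samples' try block, so SQLException and encoding exceptions are handled there):

// Sketch: an SQLWarning plus a non-null parameter 7 means that an XML
// message document with details about the warning was returned.
SQLWarning w = cstmt.getWarnings();
Blob msgBlob = cstmt.getBlob(7);
if (w != null && msgBlob != null)
{
  byte[] doc = msgBlob.getBytes(1L, (int)msgBlob.length());  // whole UTF-8 document
  System.out.println(new String(doc, "UTF-8"));
}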
Example 5: This example shows a simple and static Java program that calls the GET_MESSAGE stored procedure with an XML input document and an XPath that queries the short message text of an SQLCODE. The XML input document is initially saved as a file called xml_input.xml that is in the same directory where the GetMessageDriver class resides. This sample program uses the following xml_input.xml file:
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server Message Input</string> <key>Document Type Major Version</key> <integer>1</integer> <key>Document Type Minor Version</key> <integer>0</integer> <key>Document Locale</key> <string>en_US</string> <key>Complete</key> <false /> <key>Required Parameters</key> <dict> <key>SQLCODE</key> <dict> <key>Value</key> <integer>-204</integer> </dict> </dict> <key>Optional Parameters</key> <dict> <key>Message Tokens</key> <dict> <key>Value</key> <array> <string>SYSIBM.DDF_CONFIG</string> </array> </dict> </dict> </dict> </plist>
The XPath is statically created as a string object by the program and then converted to a BLOB to serve as input for the xml_filter parameter. After the stored procedure is called, the xml_output parameter contains only a single string and no XML document. This output is materialized as a file called xml_output.xml that is in the same directory where the GetMessageDriver class resides.
Sample invocation of the GET_MESSAGE stored procedure with a valid XML input document and a valid XPath:
//***************************************************************************
// Licensed Materials - Property of IBM
// 5625-DB2
// (C) COPYRIGHT 1982, 2006 IBM Corp. All Rights Reserved.
//
// STATUS = Version 8
//***************************************************************************
// Source file name: GetMessageDriver.java
//
// Sample: How to call SYSPROC.GET_MESSAGE with a valid XML input document
//         and a valid XPath to extract the short message text of an SQLCODE.
//
// The user runs the program by issuing:
//   java GetMessageDriver <alias or //server/database> <userid> <password>
//
// The arguments are:
//   <alias>    - DB2 subsystem alias for type 2 or //server/database for
//                type 4 connectivity
//   <userid>   - user ID to connect as
//   <password> - password to connect with
//***************************************************************************
import java.io.*;
import java.sql.*;

public class GetMessageDriver
{
  public static void main (String[] args)
  {
    Connection con = null;
    CallableStatement cstmt = null;
    String driver = "com.ibm.db2.jcc.DB2Driver";
    String url = "jdbc:db2:";
    String userid = null;
    String password = null;

    // Parse arguments
    if (args.length != 3)
    {
      System.err.println("Usage: GetMessageDriver <alias or //server/database> <userid> <password>");
      System.err.println("where <alias or //server/database> is DB2 subsystem alias or //server/database for type 4 connectivity");
      System.err.println("      <userid> is user ID to connect as");
      System.err.println("      <password> is password to connect with");
      return;
    }

    url += args[0];
    userid = args[1];
    password = args[2];

    try
    {
      String str_xmlfilter = new String(
        "/plist/dict/key[.='Short Message Text']/following-sibling::dict[1]" +
        "/key[.='Value']/following-sibling::string[1]");

      // Convert XML_FILTER to byte array to pass as BLOB
      byte[] xml_filter = str_xmlfilter.getBytes("UTF-8");

      // Read XML_INPUT from file
      File fptr = new File("xml_input.xml");
      int file_length = (int)fptr.length();
      byte[] xml_input = new byte[file_length];
      FileInputStream instream = new FileInputStream(fptr);
      int tot_bytes = instream.read(xml_input,0, xml_input.length);
      if (tot_bytes == -1)
      {
        System.out.println("Error during file read");
        return;
      }
      instream.close();

      // Load the DB2 Universal JDBC Driver
      Class.forName(driver);

      // Connect to database
      con = DriverManager.getConnection(url, userid, password);
      con.setAutoCommit(false);

      cstmt = con.prepareCall("CALL SYSPROC.GET_MESSAGE(?,?,?,?,?,?,?)");

      // Major / Minor Version / Requested Locale
      cstmt.setInt(1, 1);
      cstmt.setInt(2, 0);
      cstmt.setString(3, "en_US");

      // Input documents
      cstmt.setObject(4, xml_input, Types.BLOB);
      cstmt.setObject(5, xml_filter, Types.BLOB);

      // Output Parms
      cstmt.registerOutParameter(1, Types.INTEGER);
      cstmt.registerOutParameter(2, Types.INTEGER);
      cstmt.registerOutParameter(6, Types.BLOB);
      cstmt.registerOutParameter(7, Types.BLOB);

      cstmt.execute();
      con.commit();

      SQLWarning ctstmt_warning = cstmt.getWarnings();
      if (ctstmt_warning != null) {
        System.out.println("SQL Warning: " + ctstmt_warning.getMessage());
      }
      else {
        System.out.println("SQL Warning: None\r\n");
      }

      System.out.println("Major Version returned " + cstmt.getInt(1) );
      System.out.println("Minor Version returned " + cstmt.getInt(2) );

      // Get output BLOBs
      Blob b_out = cstmt.getBlob(6);
      if(b_out != null)
      {
        int out_length = (int)b_out.length();
        byte[] bxml_output = new byte[out_length];

        // Open an inputstream on BLOB data
        InputStream instr_out = b_out.getBinaryStream();

        // Copy from inputstream into byte array
        int out_len = instr_out.read(bxml_output, 0, out_length);

        // Write byte array content into FileOutputStream
        FileOutputStream fxml_out = new FileOutputStream("xml_output.xml");
        fxml_out.write(bxml_output, 0, out_length );
        //Close streams
        instr_out.close();
        fxml_out.close();
      }

      Blob b_msg = cstmt.getBlob(7);
      if(b_msg != null)
      {
        int msg_length = (int)b_msg.length();
        byte[] bxml_message = new byte[msg_length];

        // Open an inputstream on BLOB data
        InputStream instr_msg = b_msg.getBinaryStream();

        // Copy from inputstream into byte array
        int msg_len = instr_msg.read(bxml_message, 0, msg_length);

        // Write byte array content into FileOutputStream
        FileOutputStream fxml_msg = new FileOutputStream(new File("xml_message.xml"));
        fxml_msg.write(bxml_message, 0, msg_length);

        //Close streams
        instr_msg.close();
        fxml_msg.close();
      }
    }
    catch (SQLException sqle)
    {
      System.out.println("Error during CALL " +
        " SQLSTATE = " + sqle.getSQLState() +
        " SQLCODE = " + sqle.getErrorCode() +
        " : " + sqle.getMessage());
    }
    catch (Exception e)
    {
      System.out.println("Internal Error " + e.toString());
    }
    finally
    {
      if(cstmt != null)
        try { cstmt.close(); } catch ( SQLException sqle) { sqle.printStackTrace(); }
      if(con != null)
        try { con.close(); } catch ( SQLException sqle) { sqle.printStackTrace(); }
    }
  }
}
v Workload Manager (WLM) classification rules that apply to DB2 Workload for subsystem types DB2 and DDF
Environment
The load module for the GET_SYSTEM_INFO stored procedure, DSNADMGS, must reside in an APF-authorized library. The GET_SYSTEM_INFO stored procedure runs in a WLM-established stored procedures address space, and all of the libraries that are specified in the STEPLIB DD statement must be APF-authorized. TCB=1 is also required.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have EXECUTE privilege on the GET_SYSTEM_INFO stored procedure. In addition, because the GET_SYSTEM_INFO stored procedure queries the SMPCSI data set for the status of the SYSMODs, the authorization ID that is associated with the stored procedure address space where the GET_SYSTEM_INFO stored procedure is running must have at least RACF read authority to the SMPCSI data set.
Syntax
The SQL CALL statement for invoking this stored procedure has the following form; as the option descriptions explain, NULL can be passed for minor_version and xml_filter:

CALL SYSPROC.GET_SYSTEM_INFO ( major_version, minor_version, requested_locale, xml_input, xml_filter, xml_output, xml_message )
Option descriptions
major_version
An input and output parameter of type INTEGER that indicates the major document version. On input, this parameter indicates the major document version that you support for the XML documents passed as parameters in the stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid. On output, this parameter specifies the highest major document version that is supported by the stored procedure. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. Currently, the highest and the only major document version that is supported is 1.
If the XML document in the xml_input parameter specifies a Document Type Major Version key, the value for that key must be equal to the value that is provided in the major_version parameter, or an error (+20458) is raised.
This parameter is used in conjunction with the minor_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL or non-NULL.

minor_version
An input and output parameter of type INTEGER that indicates the minor document version. On input, this parameter specifies the minor document version that you support for the XML documents passed as parameters for this stored procedure (xml_input, xml_output, and xml_message). The stored procedure processes all XML documents in the specified version, or returns an error (-20457) if the version is invalid. On output, this parameter indicates the highest minor document version that is supported for the highest supported major version. To determine the highest supported document version, specify NULL for this input parameter and all other required parameters. The highest minor document version that is supported is 1. Minor document version 0 (zero) is also supported.
If the XML document in the xml_input parameter specifies a Document Type Minor Version key, the value for that key must be equal to the value that is provided in the minor_version parameter, or an error (+20458) is raised.
This parameter is used in conjunction with the major_version parameter. Therefore, you must specify both parameters together. For example, you must specify both as either NULL or non-NULL.

requested_locale
An input parameter of type VARCHAR(33) that specifies a locale. If the specified language is supported on the server, translated content is returned in the xml_output and xml_message parameters. Otherwise, content is returned in the default language. Only the language and possibly the territory information is used from the locale. The locale is not used to format numbers or influence the document encoding. For example, key names are not translated. The only translated portions of the XML output and XML message documents are Display Name, Display Unit, and Hint. The value might be globalized where applicable. You should always compare the requested language to the language that is used in the XML output document (see the Document Locale entry in the XML output document).
Currently, the supported values for requested_locale are en_US and NULL. If you specify a null value, the result is the same as specifying en_US.

xml_input
An input parameter of type BLOB(2G) that specifies an XML input document of type Data Server System Input in UTF-8 that contains input values for the stored procedure.
This XML input document is optional. If the XML input document is not passed to the stored procedure, the stored procedure returns the following information by default:
v Operating system information
v Product information
v DB2 MEPL
v Workload Manager (WLM) classification rules for DB2 Workload
This stored procedure supports two types of XML input documents, Version 1.0 or Version 1.1. For Version 1.0, the general structure of an XML input document is as follows:
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Document Locale</key><string>en_US</string> <key>Complete</key><false/> <key>Optional Parameters</key> <dict> <key>SMPCSI Data Set</key> <dict> <key>Value</key><string>SMPCSI data set name</string> </dict> <key>SYSMOD</key> <dict> <key>Value</key> <array> <string>SYSMOD number</string> <string>SYSMOD number</string> </array> </dict> </dict> </dict> </plist>
For Version 1.1, the general structure of an XML input document is as follows:
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>1</integer> <key>Document Locale</key><string>en_US</string> <key>Complete</key><false/> <key>Optional Parameters</key> <dict> <key>Include</key> <dict> <key>Value</key> <array> <string>Operating System Information</string> <string>Product Information</string> <string>DB2 MEPL</string> <string>Workload Manager (WLM) Classification Rules for DB2 Workload</string> </array> </dict> <key>SMPCSI Data Set</key> <dict> <key>Value</key><string>SMPCSI data set name</string> </dict> <key>SYSMOD</key> <dict> <key>Value</key> <array> <string>SYSMOD number</string> <string>SYSMOD number</string> </array> </dict> </dict> </dict> </plist>
Version 1.0: When a Version 1.0 XML input document is passed to the stored procedure, the stored procedure returns the following information in a Version 1.0 XML output document:
v Operating system information
v Product information
v DB2 MEPL
v SYSMOD status (APPLY status for the SYSMODs that are listed in the XML input document)
v Workload Manager (WLM) classification rules for DB2 Workload
To use Version 1.0 of the XML input document, you must specify the major_version parameter as 1 and the minor_version parameter as 0 (zero). You must also specify the Document Type Name key, the SMPCSI data set, and the list of SYSMODs. For an example of a complete Version 1.0 XML input document for the GET_SYSTEM_INFO stored procedure, see Example 3 in the Examples section.
Version 1.1: A Version 1.1 XML input document supports the Include parameter, in addition to the SMPCSI Data Set and SYSMOD parameters that are supported by a Version 1.0 XML input document. You can use the Version 1.1 XML input document in the following ways:
v To specify which items to include in the XML output document by specifying these items in the Include array
v To specify the SMPCSI data set and list of SYSMODs so that the stored procedure returns their APPLY status
To use Version 1.1 of the XML input document, you must specify the major_version parameter as 1 and the minor_version parameter as 1. You must also specify the Document Type Name key, and at least one of the following parameters:
v Include
v SMPCSI Data Set and SYSMOD
If you pass a Version 1.1 XML input document to the stored procedure and specify the Include, SMPCSI Data Set, and SYSMOD parameters, the stored procedure will return the items that you specified in the Include array, and the SYSMOD status of the SYSMODs that you specified in the SYSMOD array.
If you pass a Version 1.1 XML input document to the stored procedure and specify the Include parameter only, the stored procedure will return only the items that you specified in the Include array.
If you pass a Version 1.1 XML input document to the stored procedure and specify only the SMPCSI Data Set and SYSMOD parameters, the stored procedure returns the following information in a Version 1.1 XML output document:
v Operating system information
v Product information
v DB2 MEPL
v SYSMOD status (APPLY status for the SYSMODs that are listed in the XML input document)
v Workload Manager (WLM) classification rules for DB2 Workload
For an example of a complete Version 1.1 XML input document for the GET_SYSTEM_INFO stored procedure, see Example 4.
Complete mode: For examples of Version 1.0 and Version 1.1 XML input documents that are returned by the xml_output parameter when the stored procedure is running in Complete mode, see Example 1 and Example 2 respectively.

xml_filter
An input parameter of type BLOB(4K) in UTF-8 that specifies a valid XPath query string. Use a filter when you want to retrieve a single value from an XML output document. For more information, see XPath expressions for filtering output on page 1390.
The following example selects the value for the Data Server Product Version from the XML output document:
/plist/dict/key[.='Data Server Product Version']/following-sibling::string[1]
If the key is not followed by the specified sibling, an error is returned.

xml_output
An output parameter of type BLOB(2G) that returns a complete XML output document of type Data Server System Output in UTF-8. If a filter is specified, this parameter returns a string value. If the stored procedure is unable to return a complete output document (for example, if a processing error occurs that results in an SQL warning or error), this parameter is set to NULL.
The xml_output parameter can return either a Version 1.0 or Version 1.1 XML output document, depending on the major_version and minor_version parameters that you specify. For more information about the content differences between the Version 1.0 and Version 1.1 XML output documents, see the option description for the xml_input parameter. A complete XML output document provides the following system information:
v Operating system information
v Product information
v DB2 MEPL
v The APPLY status of SYSMODs
v Workload Manager (WLM) classification rules for DB2 Workload for subsystem types DB2 and DDF
For an example of an XML output document, see Example 5.

xml_message
An output parameter of type BLOB(64K) that returns a complete XML output document of type Data Server Message in UTF-8 that provides detailed information about an SQL warning condition. This document is returned when a call to the stored procedure results in an SQL warning, and the warning message indicates that additional information is returned in the XML message output document. If the warning message does not indicate that additional information is returned, then this parameter is set to NULL.
The xml_message parameter can return either a Version 1.0 or Version 1.1 XML message document, depending on the major_version and minor_version parameters that you specify. The format of a Version 1.0 or Version 1.1 XML message document is similar. For an example of an XML message document, see Example 6.
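Filters can reach any entry in the output document, not just the product version. As a further sketch (an assumption based on the SYSMOD Status layout shown in Example 5, where each SYSMOD entry carries Apply and Apply Date dictionaries; the SYSMOD number UK20028 and the string type of the Apply value are illustrative, not confirmed by the original), a filter such as the following would extract the APPLY status of one SYSMOD:

/plist/dict/key[.='SYSMOD Status']/following-sibling::dict[1]
/key[.='UK20028']/following-sibling::dict[1]
/key[.='Apply']/following-sibling::dict[1]
/key[.='Value']/following-sibling::string[1]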
Examples
Example 1: The following example shows a Version 1.0 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode.
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Document Locale</key><string>en_US</string> <key>Optional Parameters</key> <dict> <key>Display Name</key><string>Optional Parameters</string> <key>SMPCSI Data Set</key> <dict> <key>Display Name</key><string>SMPCSI Data Set</string> <key>Value</key><string /> <key>Hint</key><string /> </dict> <key>SYSMOD</key> <dict> <key>Display Name</key><string>SYSMOD</string> <key>Value</key> <array> <string /> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 2: The following example shows a Version 1.1 XML input document that is returned by the xml_output parameter when the stored procedure is running in Complete mode.
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>1</integer> <key>Document Locale</key><string>en_US</string> <key>Optional Parameters</key> <dict> <key>Display Name</key><string>Optional Parameters</string> <key>Include</key> <dict> <key>Display Name</key><string>Include</string> <key>Value</key> <array> <string>Operating System Information</string> <string>Product Information</string> <string>DB2 MEPL</string> <string>Workload Manager (WLM) Classification Rules for DB2 Workload</string> </array> <key>Hint</key><string /> </dict> <key>SMPCSI Data Set</key> <dict> <key>Display Name</key><string>SMPCSI Data Set</string> <key>Value</key><string />
<key>Hint</key><string /> </dict> <key>SYSMOD</key> <dict> <key>Display Name</key><string>SYSMOD</string> <key>Value</key> <array> <string /> </array> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 3: The following example shows a complete sample of a Version 1.0 XML input document for the GET_SYSTEM_INFO stored procedure.
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>0</integer> <key>Document Locale</key><string>en_US</string> <key>Optional Parameters</key> <dict> <key>SMPCSI Data Set</key> <dict> <key>Value</key><string>IXM180.GLOBAL.CSI</string> </dict> <key>SYSMOD</key> <dict> <key>Value</key> <array> <string>UK20028</string> <string>UK20030</string> </array> </dict> </dict> </dict> </plist>
You must specify the SMPCSI data set and one or more SYSMODs. SYSMOD status information will be returned for only the SYSMODs that are listed in the Optional Parameters section, provided that the SMPCSI data set that you specify is valid.

Example 4: The following example shows a complete sample of a Version 1.1 XML input document for the GET_SYSTEM_INFO stored procedure.
<?xml version="1.0" encoding="UTF-8"?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server System Input</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>1</integer> <key>Document Locale</key><string>en_US</string> <key>Optional Parameters</key> <dict> <key>Include</key> <dict> <key>Value</key> <array>
<string>Operating System Information</string> <string>Product Information</string> <string>DB2 MEPL</string> <string>Workload Manager (WLM) Classification Rules for DB2 Workload</string> </array> </dict> <key>SMPCSI Data Set</key> <dict> <key>Value</key><string>IXM180.GLOBAL.CSI</string> </dict> <key>SYSMOD</key> <dict> <key>Value</key> <array> <string>UK24596</string> <string>UK24709</string> </array> </dict> </dict> </dict> </plist>
Example 5: The following example shows a fragment of an XML output document for the GET_SYSTEM_INFO stored procedure. In this example, each ellipsis (. . .) represents a dictionary entry that is comprised of Display Name, Value, and Hint, such as:
<dict> <key>Display Name</key> <string>Name</string> <key>Value</key> <string>JES2</string> <key>Hint</key> <string /> </dict> <?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server System Output</string> <key>Document Type Major Version</key> <integer>1</integer> <key>Document Type Minor Version</key> <integer>1</integer> <key>Data Server Product Name</key> <string>DSN</string> <key>Data Server Product Version</key> <string>8.1.5</string> <key>Data Server Major Version</key> <integer>8</integer> <key>Data Server Minor Version</key> <integer>1</integer> <key>Data Server Platform</key> <string>z/OS</string> <key>Document Locale</key> <string>en_US</string> <key>Operating System Information</key> <dict> <key>Display Name</key><string>Operating System Information</string> <key>Name and Release</key> ... <key>CPU</key> <dict> <key>Display Name</key><string>CPU</string> <key>Model</key>
... <key>Number of Online CPUs</key> ... <key>Online CPUs</key> <dict> <key>Display Name</key><string>Online CPUs</string> <key>CPU ID 01</key> <dict> <key>Display Name</key><string>CPU ID 01</string> <key>Serial Number</key> ... <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> <key>Real Storage Size</key> <dict> <key>Display Name</key><string>Real Storage Size</string> <key>Value</key><integer>256</integer> <key>Display Unit</key><string>MB</string> <key>Hint</key><string /> </dict> <key>Sysplex Name</key> <dict> <key>Display Name</key> <string>Sysplex Name</string> <key>Value</key> <string>XESDEV</string> <key>Hint</key> <string /> </dict> </dict> <key>Product Information</key> <dict> <key>Display Name</key><string>Product Information</string> <key>Primary Job Entry Subsystem</key> <dict> <key>Display Name</key><string>Primary Job Entry Subsystem</string> <key>Name</key> ... <key>Release</key> ... <key>Node Name</key> ... <key>Held Output Class</key> ... <key>Hint</key><string /> </dict> <key>Security Software</key> <dict> <key>Display Name</key><string>Security Software</string> <key>Name</key> ... <key>FMID</key> ... <key>Hint</key><string /> </dict> <key>DFSMS Release</key> ... <key>TSO Release</key> ... <key>VTAM Release</key> ... <key>Hint</key><string />
</dict> <key>DB2 MEPL</key> <dict> <key>Display Name</key><string>DB2 MEPL</string> <key>DSNUTILB</key> <dict> <key>Display Name</key><string>DSNUTILB</string> <key>DSNAA</key> <dict> <key>Display Name</key><string>DSNAA</string> <key>PTF Level</key> ... <key>PTF Apply Date</key> ... <key>Hint</key><string /> </dict> <!-- This is only a fragment of the utility modules that are returned by the GET_SYSTEM_INFO stored procedure. --> <key>Hint</key><string /> </dict> <!-- This is only a fragment of the DB2 MEPL information that is returned by the GET_SYSTEM_INFO stored procedure. --> </dict> <key>SYSMOD Status</key> <dict> <key>Display Name</key><string>SYSMOD Status</string> <key>AA15195</key> <dict> <key>Display Name</key><string>AA15195</string> <key>Apply</key> ... <key>Apply Date</key> ... <key>Hint</key><string /> </dict> <!-- This is only a fragment of the SYSMOD status information that is returned by the GET_SYSTEM_INFO stored procedure. --> </dict> <key>Workload Manager (WLM) Classification Rules for DB2 Workload</key> <dict> <key>Display Name</key> <string>Workload Manager (WLM) Classification Rules for DB2 Workload</string> <key>DB2</key> <dict> <key>Display Name</key><string>DB2</string> <key>Hint</key><string /> </dict> <key>DDF</key> <dict> <key>Display Name</key><string>DDF</string> <key>1.1.1</key> <dict> <key>Display Name</key><string>1.1.1</string> <key>Nesting Level</key> ... <key>Qualifier Type</key>
... <key>Qualifier Type Full Name</key> ... <key>Qualifier Name</key> ... <key>Start Position</key> ... <key>Service Class</key> ... <key>Report Class</key> ... <key>Hint</key><string /> </dict> <key>2.1.1</key> <dict> <!-- This dictionary entry describes the second classification rule, and its format is the same as that of 1.1.1 above, which describes the first classification rule. --> </dict> <key>Hint</key><string /> </dict> <key>Hint</key><string /> </dict> </dict> </plist>
Example 6: The following example shows a sample XML message document for the GET_SYSTEM_INFO stored procedure. Similar to an XML output document, the details about an SQL warning condition will be encapsulated in a dictionary entry, which is comprised of Display Name, Value, and Hint.
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key><string>Data Server Message</string> <key>Document Type Major Version</key><integer>1</integer> <key>Document Type Minor Version</key><integer>1</integer> <key>Data Server Product Name</key><string>DSN</string> <key>Data Server Product Version</key><string>8.1.5</string> <key>Data Server Major Version</key><integer>8</integer> <key>Data Server Minor Version</key><integer>1</integer> <key>Data Server Platform</key><string>z/OS</string> <key>Document Locale</key><string>en_US</string> <key>Short Message Text</key> <dict> <key>Display Name</key><string>Short Message Text</string> <key>Value</key> <string>DSNA647I DSNADMGS INVOCATION OF GIMAPI FAILED. Error processing command: QUERY . RC=12 CC=50504. GIM54701W ALLOCATION FAILED FOR SMPCSI - IKJ56228I DATA SET IXM180.GLOBAL.CSI NOT IN CATALOG OR CATALOG CAN NOT BE ACCESSED. GIM44232I GIMMPVIA - DYNAMIC ALLOCATION FAILED FOR THE GLOBAL ZONE, DATA SET IXM180.GLOBAL.CSI. GIM50504S ** OPEN PROCESSING FAILED FOR THE GLOBAL ZONE.</string> <key>Hint</key><string /> </dict> </dict> </plist>
Example 7: This example shows a simple and static Java program that calls the GET_SYSTEM_INFO stored procedure with an XML input document and an XPath that queries the value of the operating system name and release. The XML input document is initially saved as a file called xml_input.xml that is in the same directory where the GetSystemDriver class resides. This sample program uses the following xml_input.xml file:
<?xml version="1.0" encoding="UTF-8" ?> <plist version="1.0"> <dict> <key>Document Type Name</key> <string>Data Server System Input</string> <key>Document Type Major Version</key> <integer>1</integer> <key>Document Type Minor Version</key> <integer>1</integer> <key>Document Locale</key> <string>en_US</string> <key>Optional Parameters</key> <dict> <key>Include</key> <dict> <key>Value</key> <array> <string>Operating System Information</string> </array> </dict> </dict> </dict> </plist>
The XPath is statically created as a string object by the program and then converted to a BLOB to serve as input for the xml_filter parameter. After the stored procedure is called, the xml_output parameter contains only a single string and no XML document. This output is materialized as a file called xml_output.xml that is in the same directory where the GetSystemDriver class resides.

Sample invocation of the GET_SYSTEM_INFO stored procedure with a valid XML input document and a valid XPath:
//***************************************************************************
// Licensed Materials - Property of IBM
// 5625-DB2
// (C) COPYRIGHT 1982, 2006 IBM Corp. All Rights Reserved.
//
// STATUS = Version 8
//***************************************************************************
// Source file name: GetSystemDriver.java
//
// Sample: How to call SYSPROC.GET_SYSTEM_INFO with a valid XML input document
//         and a valid XPath to extract the operating system name and release.
//
// The user runs the program by issuing:
//   java GetSystemDriver <alias or //server/database> <userid> <password>
//
// The arguments are:
//   <alias>    - DB2 subsystem alias for type 2 or //server/database for
//                type 4 connectivity
//   <userid>   - user ID to connect as
//   <password> - password to connect with
//***************************************************************************
import java.io.*;
import java.sql.*;

public class GetSystemDriver
{
  public static void main (String[] args)
  {
    Connection con = null;
    CallableStatement cstmt = null;
    String driver = "com.ibm.db2.jcc.DB2Driver";
    String url = "jdbc:db2:";
      String userid = null;
      String password = null;

      // Parse arguments
      if (args.length != 3)
      {
         System.err.println("Usage: GetSystemDriver <alias or //server/database> <userid> <password>");
         System.err.println("where <alias or //server/database> is DB2 subsystem alias or //server/database for type 4 connectivity");
         System.err.println("      <userid> is user ID to connect as");
         System.err.println("      <password> is password to connect with");
         return;
      }

      url += args[0];
      userid = args[1];
      password = args[2];

      try
      {
         String str_xmlfilter = new String(
            "/plist/dict/key[.='Operating System Information']/following-sibling::dict[1]" +
            "/key[.='Name and Release']/following-sibling::dict[1]" +
            "/key[.='Value']/following-sibling::string[1]");

         // Convert XML_FILTER to byte array to pass as BLOB
         byte[] xml_filter = str_xmlfilter.getBytes("UTF-8");

         // Read XML_INPUT from file
         File fptr = new File("xml_input.xml");
         int file_length = (int)fptr.length();
         byte[] xml_input = new byte[file_length];
         FileInputStream instream = new FileInputStream(fptr);
         int tot_bytes = instream.read(xml_input, 0, xml_input.length);
         if (tot_bytes == -1)
         {
            System.out.println("Error during file read");
            return;
         }
         instream.close();

         // Load the DB2 Universal JDBC Driver
         Class.forName(driver);

         // Connect to database
         con = DriverManager.getConnection(url, userid, password);
         con.setAutoCommit(false);

         cstmt = con.prepareCall("CALL SYSPROC.GET_SYSTEM_INFO(?,?,?,?,?,?,?)");

         // Major / Minor Version / Requested Locale
         cstmt.setInt(1, 1);
         cstmt.setInt(2, 1);
         cstmt.setString(3, "en_US");

         // Input documents
         cstmt.setObject(4, xml_input, Types.BLOB);
         cstmt.setObject(5, xml_filter, Types.BLOB);

         // Output Parms
         cstmt.registerOutParameter(1, Types.INTEGER);
         cstmt.registerOutParameter(2, Types.INTEGER);
         cstmt.registerOutParameter(6, Types.BLOB);
         cstmt.registerOutParameter(7, Types.BLOB);
         cstmt.execute();
         con.commit();

         SQLWarning ctstmt_warning = cstmt.getWarnings();
         if (ctstmt_warning != null)
         {
            System.out.println("SQL Warning: " + ctstmt_warning.getMessage());
         }
         else
         {
            System.out.println("SQL Warning: None\r\n");
         }

         System.out.println("Major Version returned " + cstmt.getInt(1));
         System.out.println("Minor Version returned " + cstmt.getInt(2));

         // Get output BLOBs
         Blob b_out = cstmt.getBlob(6);
         if (b_out != null)
         {
            int out_length = (int)b_out.length();
            byte[] bxml_output = new byte[out_length];

            // Open an inputstream on BLOB data
            InputStream instr_out = b_out.getBinaryStream();

            // Copy from inputstream into byte array
            int out_len = instr_out.read(bxml_output, 0, out_length);

            // Write byte array content into FileOutputStream
            FileOutputStream fxml_out = new FileOutputStream("xml_output.xml");
            fxml_out.write(bxml_output, 0, out_length);

            // Close streams
            instr_out.close();
            fxml_out.close();
         }

         Blob b_msg = cstmt.getBlob(7);
         if (b_msg != null)
         {
            int msg_length = (int)b_msg.length();
            byte[] bxml_message = new byte[msg_length];

            // Open an inputstream on BLOB data
            InputStream instr_msg = b_msg.getBinaryStream();

            // Copy from inputstream into byte array
            int msg_len = instr_msg.read(bxml_message, 0, msg_length);

            // Write byte array content into FileOutputStream
            FileOutputStream fxml_msg = new FileOutputStream(new File("xml_message.xml"));
            fxml_msg.write(bxml_message, 0, msg_length);

            // Close streams
            instr_msg.close();
            fxml_msg.close();
         }
      }
      catch (SQLException sqle)
      {
         System.out.println("Error during CALL " +
            " SQLSTATE = " + sqle.getSQLState() +
            " SQLCODE = " + sqle.getErrorCode() +
            " : " + sqle.getMessage());
      }
      catch (Exception e)
      {
         System.out.println("Internal Error " + e.toString());
      }
      finally
      {
         if (cstmt != null)
            try { cstmt.close(); } catch (SQLException sqle) { sqle.printStackTrace(); }
         if (con != null)
            try { con.close(); } catch (SQLException sqle) { sqle.printStackTrace(); }
      }
   }
}
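The following commands show one way to compile and run the sample; this usage sketch is not part of the original example. The subsystem alias DB2A, the user ID, and the password are placeholder values, and the IBM DB2 Universal JDBC Driver is assumed to be on the class path:

   javac GetSystemDriver.java
   java GetSystemDriver DB2A myuser mypasswd

For type 4 connectivity, the first argument would instead take the //server/database form that is described in the header comment.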
v How to write applications in the Java programming language to access DB2 servers

The material needed for writing a host program containing SQL is in DB2 Application Programming and SQL Guide and in DB2 Application Programming Guide and Reference for Java. The material needed for writing applications that use DB2 ODBC to access DB2 servers is in DB2 ODBC Guide and Reference. For handling errors, see DB2 Codes. If you will be working in a distributed environment, you will need DB2 Reference for Remote DRDA Requesters and Servers. Information about writing applications across operating systems can be found in IBM DB2 Universal Database SQL Reference for Cross-Platform Development.

System and database administration: Administration covers almost everything else. If you will be using the RACF access control module for DB2 authorization checking, you will need DB2 RACF Access Control Module Guide. If you are involved with DB2 only to design the database or plan operational procedures, you need DB2 Administration Guide. If you also want to carry out your own plans by creating DB2 objects, granting privileges, running utility jobs, and so on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create, alter, and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands

If you will be using data sharing, you need DB2 Data Sharing: Planning and Administration, which describes how to plan for and implement data sharing. Additional information about system and database administration can be found in DB2 Messages and DB2 Codes, which list messages and codes issued by DB2, with explanations and suggested responses.

Diagnosis: Diagnosticians detect and describe errors in the DB2 program. They might also recommend or apply a remedy. The documentation for this task is in DB2 Diagnosis Guide and Reference, DB2 Messages, and DB2 Codes.
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

   IBM Director of Licensing
   IBM Corporation
   North Castle Drive
   Armonk, NY 10504-1785
   U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

   Intellectual Property Licensing
   Legal and Intellectual Property Law
   IBM Japan, Ltd.
   3-2-12, Roppongi, Minato-ku
   Tokyo 106-8711 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

   IBM Corporation
   J46A/G4
   555 Bailey Avenue
   San Jose, CA 95141-1003
   U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.
General-use Programming Interfaces allow the customer to write programs that obtain the services of DB2 UDB for z/OS.

General-use Programming Interface and Associated Guidance Information is identified where it occurs, either by an introductory statement to a chapter or section or by the following marking:

   General-use Programming Interface
   General-use Programming Interface and Associated Guidance Information ...
   End of General-use Programming Interface

Product-sensitive Programming Interfaces allow the customer installation to perform tasks such as diagnosing, modifying, monitoring, repairing, tailoring, or tuning of this IBM software product. Use of such interfaces creates dependencies on the detailed design or implementation of the IBM software product. Product-sensitive Programming Interfaces should be used only for these specialized purposes. Because of their dependencies on detailed design and implementation, it is to be expected that programs written to such interfaces may need to be changed in order to run with new product releases or versions, or as a result of service.

Product-sensitive Programming Interface and Associated Guidance Information is identified where it occurs, either by an introductory statement to a chapter or section or by the following marking:

   Product-sensitive Programming Interface
   Product-sensitive Programming Interface and Associated Guidance Information ...
   End of Product-sensitive Programming Interface
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library.
A

abend. Abnormal end of task. abend reason code. A 4-byte hexadecimal code that uniquely identifies a problem with DB2. abnormal end of task (abend). Termination of a task, job, or subsystem because of an error condition that recovery facilities cannot resolve during execution. access method services. The facility that is used to define and reproduce VSAM key-sequenced data sets. access path. The path that is used to locate data that is specified in SQL statements. An access path can be indexed or sequential. active log. The portion of the DB2 log to which log records are written as they are generated. The active log always contains the most recent log records, whereas the archive log holds those records that are older and no longer fit on the active log. active member state. A state of a member of a data sharing group. The cross-system coupling facility identifies each active member with a group and associates the member with a particular task, address space, and z/OS system. A member that is not active has either a failed member state or a quiesced member state. address space. A range of virtual storage pages that is identified by a number (ASID) and a collection of segment and page tables that map the virtual pages to real pages of the computer's memory. address space connection. The result of connecting an allied address space to DB2. Each address space that contains a task that is connected to DB2 has exactly one address space connection, even though more than one task control block (TCB) can be present. See also allied address space and task control block. address space identifier (ASID). A unique system-assigned identifier for an address space. administrative authority. A set of related privileges that DB2 defines. When you grant one of the administrative authorities to a person's ID, the person has all of the privileges that are associated with that administrative authority. after trigger. A trigger that is defined with the trigger activation time AFTER. agent. As used in DB2, the structure that associates all processes that are involved in a DB2 unit of work. An allied agent is generally synonymous with an allied thread. System agents are units of work that process tasks that are independent of the allied agent, such as prefetch processing, deferred writes, and service tasks. aggregate function. An operation that derives its result by using values from one or more rows. Contrast with scalar function. alias. An alternative name that can be used in SQL statements to refer to a table or view in the same or a remote DB2 subsystem. allied address space. An area of storage that is external to DB2 and that is connected to DB2. An allied address space is capable of requesting DB2 services. allied thread. A thread that originates at the local DB2 subsystem and that can access data at a remote DB2 subsystem. allocated cursor. A cursor that is defined for stored procedure result sets by using the SQL ALLOCATE CURSOR statement. already verified. An LU 6.2 security option that allows DB2 to provide the user's verified authorization ID when allocating a conversation. With this option, the user is not validated by the partner DB2 subsystem. ambiguous cursor. A database cursor that is in a plan or package that contains either PREPARE or EXECUTE IMMEDIATE SQL statements, and for which the following statements are true: the cursor is not defined with the FOR READ ONLY clause or the FOR UPDATE OF clause; the cursor is not defined on a read-only result table; the cursor is not the target of a WHERE CURRENT clause on an SQL UPDATE or DELETE statement. American National Standards Institute (ANSI). An organization consisting of producers, consumers, and general interest groups, that establishes the procedures by which accredited organizations create and maintain voluntary industry standards in the United States. ANSI. American National Standards Institute. APAR. Authorized program analysis report. APAR fix corrective service. A temporary correction of an IBM software defect. The correction is temporary,
because it is usually replaced at a later date by a more permanent correction, such as a program temporary fix (PTF). APF. Authorized program facility. API. Application programming interface. APPL. A VTAM network definition statement that is used to define DB2 to VTAM as an application program that uses SNA LU 6.2 protocols. application. A program or set of programs that performs a task; for example, a payroll application. application-directed connection. A connection that an application manages using the SQL CONNECT statement. application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution. application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs. application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program. application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol. application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications. archive log. The portion of the DB2 log that contains log records that have been copied from the active log. ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.
authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation. authorized program analysis report (APAR). A report of a problem that is caused by a suspected defect in a current release of an IBM supplied program. authorized program facility (APF). A facility that permits the identification of programs that are authorized to use restricted functions.
automatic query rewrite. A process that examines an SQL statement that refers to one or more base tables, and, if appropriate, rewrites the query so that it performs better. This process can also determine whether to rewrite a query so that it refers to one or more materialized query tables that are derived from the source tables. auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB. auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.
B
backout. The process of undoing uncommitted changes that an application process made. This might be necessary in the event of a failure on the part of an application process, or as a result of a deadlock situation. backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes. base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table. base table space. A table space that contains base tables. basic predicate. A predicate that compares two values. basic sequential access method (BSAM). An access method for storing or retrieving data blocks in a continuous sequence, using either a sequential-access or a direct-access device.
batch message processing program. In IMS, an application program that can perform batch-type processing online and can access the IMS input and output message queues. before trigger. A trigger that is defined with the trigger activation time BEFORE. binary integer. A basic data type that can be further classified as small integer or large integer. binary large object (BLOB). A sequence of bytes in which the size of the value ranges from 0 bytes to 2 GB - 1. Such a string has a CCSID value of 65535. binary string. A sequence of bytes that is not associated with a CCSID. For example, the BLOB data type is a binary string. bind. The process by which the output from the SQL precompiler is converted to a usable control structure, often called an access plan, application plan, or package. During this process, access paths to the data are selected and some authorization checking is performed. The types of bind are: automatic bind. (More correctly, automatic rebind) A process by which SQL statements are bound automatically (without a user issuing a BIND command) when an application process begins execution and the bound application plan or package it requires is not valid. dynamic bind. A process by which SQL statements are bound as they are entered. incremental bind. A process by which SQL statements are bound during the execution of an application process. static bind. A process by which SQL statements are bound after they have been precompiled. All static SQL statements are prepared for execution at the same time. bit data. Data that is character type CHAR or VARCHAR and has a CCSID value of 65535. BLOB. Binary large object. block fetch. A capability in which DB2 can retrieve, or fetch, a large set of rows together. Using block fetch can significantly reduce the number of messages that are being sent across the network. Block fetch applies only to cursors that do not update data. BMP. Batch Message Processing (IMS). See batch message processing program. bootstrap data set (BSDS). A VSAM data set that contains name and status information for DB2, as well as RBA range specifications, for all active and archive log data sets. It also contains passwords for the DB2 directory and catalog, and lists of conditional restart and checkpoint records. BSAM. Basic sequential access method. BSDS. Bootstrap data set. buffer pool. Main storage that is reserved to satisfy the buffering requirements for one or more table spaces or indexes. built-in data type. A data type that IBM supplies. Among the built-in data types for DB2 UDB for z/OS are string, numeric, ROWID, and datetime. Contrast with distinct type. built-in function. A function that DB2 supplies. Contrast with user-defined function. business dimension. A category of data, such as products or time periods, that an organization might want to analyze.

C

cache structure. A coupling facility structure that stores data that can be available to all members of a Sysplex. A DB2 data sharing group uses cache structures as group buffer pools. CAF. Call attachment facility. call attachment facility (CAF). A DB2 attachment facility for application programs that run in TSO or z/OS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment. call-level interface (CLI). A callable application programming interface (API) for database access, which is an alternative to using embedded SQL. In contrast to embedded SQL, DB2 ODBC (which is based on the CLI architecture) does not require the user to precompile or bind applications, but instead provides a standard set of functions to process SQL statements and related services at run time. cascade delete. The way in which DB2 enforces referential constraints when it deletes all descendent rows of a deleted parent row. CASE expression. An expression that is selected based on the evaluation of one or more conditions. cast function. A function that is used to convert instances of a (source) data type into instances of a different (target) data type. In general, a cast function has the name of the target data type. It has one single argument whose type is the source data type; its return type is the target data type. castout. The DB2 process of writing changed pages from a group buffer pool to disk. castout owner. The DB2 member that is responsible for casting out a particular page set or partition.
catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes. catalog table. Any table in the DB2 catalog. CCSID. Coded character set identifier. CDB. Communications database. CDRA. Character Data Representation Architecture. CEC. Central electronic complex. See central processor complex. central electronic complex (CEC). See central processor complex. central processor (CP). The part of the computer that contains the sequencing and processing facilities for instruction execution, initial program load, and other machine operations. central processor complex (CPC). A physical collection of hardware (such as an ES/3090) that consists of main storage, one or more central processors, timers, and channels.
check pending. A state of a table space or partition that prevents its use by some utilities and by some SQL statements because of rows that violate referential constraints, check constraints, or both. checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates.
child lock. For explicit hierarchical locking, a lock that is held on either a table, page, row, or a large object (LOB). Each child lock has a parent lock. See also parent lock. CI. Control interval.
CICS. Represents (in this publication) CICS Transaction Server for z/OS: Customer Information Control System Transaction Server for z/OS.
CICS attachment facility. A DB2 subcomponent that uses the z/OS subsystem interface (SSI) and cross-storage linkage to process requests from CICS to DB2 and to coordinate resource commitment. CIDF. Control interval definition field. claim. A notification to DB2 that an object is being accessed. Claims prevent drains from occurring until the claim is released, which usually occurs at a commit point. Contrast with drain. claim class. A specific type of object access that can be one of the following isolation levels: cursor stability (CS), repeatable read (RR), or write. claim count. A count of the number of agents that are accessing an object. class of service. A VTAM term for a list of routes through a network, arranged in an order of preference for their use. class word. A single word that indicates the nature of a data attribute. For example, the class word PROJ indicates that the attribute identifies a project. clause. In SQL, a distinct part of a statement, such as a SELECT clause or a WHERE clause. CLI. Call-level interface. client. See requester. CLIST. Command list. A language for performing TSO tasks. CLOB. Character large object. closed application. An application that requires exclusive use of certain statements on certain DB2
objects, so that the objects are managed solely through the application's external interface. CLPA. Create link pack area. clustering index. An index that determines how rows are physically ordered (clustered) in a table space. If a clustering index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within each data partition instead of spanning partitions. Prior to Version 8 of DB2 UDB for z/OS, the partitioning index was required to be the clustering index. coded character set. A set of unambiguous rules that establish a character set and the one-to-one relationships between the characters of the set and their coded representations. coded character set identifier (CCSID). A 16-bit number that uniquely identifies a coded representation of graphic characters. It designates an encoding scheme identifier and one or more pairs consisting of a character set identifier and an associated code page identifier. code page. A set of assignments of characters to code points. In EBCDIC, for example, the character 'A' is assigned code point X'C1', and character 'B' is assigned code point X'C2'. Within a code page, each code point has only one specific meaning. code point. In CDRA, a unique bit pattern that represents a character in a code page. code unit. The fundamental binary width in a computer architecture that is used for representing character data, such as 7 bits, 8 bits, 16 bits, or 32 bits. Depending on the character encoding form that is used, each code point in a coded character set can be represented internally by one or more code units. coexistence. During migration, the period of time in which two releases exist in the same data sharing group. cold start. A process by which DB2 restarts without processing any log records. Contrast with warm start. collection. A group of packages that have the same qualifier. column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer). command. A DB2 operator command or a DSN subcommand. A command is distinct from an SQL statement. command prefix. A one- to eight-character command identifier. The command prefix distinguishes the command as belonging to an application or subsystem rather than to MVS. command recognition character (CRC). A character that permits a z/OS console operator or an IMS subsystem user to route DB2 commands to specific DB2 subsystems. command scope. The scope of command operation in a data sharing group. If a command has member scope, the command displays information only from the one member or affects only non-shared resources that are owned locally by that member. If a command has group scope, the command displays information from all members, affects non-shared resources that are owned locally by all members, displays information on sharable resources, or affects sharable resources. commit. The operation that ends a unit of work by releasing locks so that the database changes that are made by that unit of work can be perceived by other processes. commit point. A point in time when data is considered consistent. committed phase. The second phase of the multisite update process that requests all participants to commit the effects of the logical unit of work. common service area (CSA). In z/OS, a part of the common area that contains data areas that are addressable by all address spaces. communications database (CDB). A set of tables in the DB2 catalog that are used to establish conversations with remote database management systems. comparison operator. A token (such as =, >, or <) that is used to specify a relationship between two values. composite key. An ordered set of key columns of the same table. compression dictionary. The dictionary that controls the process of compression and decompression. This dictionary is created from the data in the table space or table space partition. concurrency. The shared use of resources by more than one application process at the same time. conditional restart. A DB2 restart that is directed by a user-defined conditional restart control record (CRCR). connection. In SNA, the existence of a communication path between two partner LUs that allows information
to be exchanged (for example, two DB2 subsystems that are connected and communicating by way of a conversation). connection context. In SQLJ, a Java object that represents a connection to a data source. connection declaration clause. In SQLJ, a statement that declares a connection to a data source. connection handle. The data object containing information that is associated with a connection that DB2 ODBC manages. This includes general status information, transaction status, and diagnostic information. connection ID. An identifier that is supplied by the attachment facility and that is associated with a specific address space connection. consistency token. A timestamp that is used to generate the version identifier for an application. See also version. constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable. constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, check constraint, and unique constraint. context. The application's logical connection to the data source and associated internal DB2 ODBC connection information that allows the application to direct its operations to a data source. A DB2 ODBC context represents a DB2 thread. contracting conversion. A process that occurs when the length of a converted string is smaller than that of the source string. For example, this process occurs when an EBCDIC mixed-data string that contains DBCS characters is converted to ASCII mixed data; the converted string is shorter because of the removal of the shift codes. control interval (CI). A fixed-length area of disk in which VSAM stores records and creates distributed free space. Also, in a key-sequenced data set or file, the set of records that an entry in the sequence-set index record points to. The control interval is the unit of information that VSAM transmits to or from disk. A control interval always includes an integral number of physical records. control interval definition field (CIDF). In VSAM, a field that is located in the 4 bytes at the end of each control interval; it describes the free space, if any, in the control interval. conversation. Communication, which is based on LU 6.2 or Advanced Program-to-Program Communication (APPC), between an application and a remote
transaction program over an SNA logical unit-to-logical unit (LU-LU) session that allows communication while processing a transaction. coordinator. The system component that coordinates the commit or rollback of a unit of work that includes work that is done on one or more other systems.
copy pool. A named set of SMS storage groups that contains data that is to be copied collectively. A copy pool is an SMS construct that lets you define which storage groups are to be copied by using FlashCopy functions. HSM determines which volumes belong to a copy pool. copy target. A named set of SMS storage groups that are to be used as containers for copy pool volume copies. A copy target is an SMS construct that lets you define which storage groups are to be used as containers for volumes that are copied by using FlashCopy functions. copy version. A point-in-time FlashCopy copy that is managed by HSM. Each copy pool has a version parameter that specifies how many copy versions are maintained on disk. correlated columns. A relationship between the value of one column and the value of another column. correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement. correlation ID. An identifier that is associated with a specific thread. In TSO, it is either an authorization ID or the job name. correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement. cost category. A category into which DB2 places cost estimates for SQL statements at the time the statement is bound. A cost estimate can be placed in either of the following cost categories:
v A: Indicates that DB2 had enough information to make a cost estimate without using default values.
v B: Indicates that some condition exists for which DB2 was forced to use default values for its estimate.
The cost category is externalized in the COST_CATEGORY column of the DSN_STATEMNT_TABLE when a statement is explained. coupling facility. A special PR/SM LPAR logical partition that runs the coupling facility control program and provides high-speed caching, list processing, and locking functions in a Parallel Sysplex.
coupling facility resource management. A component of z/OS that provides the services to manage coupling facility resources in a Parallel Sysplex. This management includes the enforcement of CFRM policies to ensure that the coupling facility and structure requirements are satisfied. CP. Central processor. CPC. Central processor complex. C++ member. A data object or function in a structure, union, or class. C++ member function. An operator or function that is declared as a member of a class. A member function has access to the private and protected data members and to the member functions of objects in its class. Member functions are also called methods. C++ object. (1) A region of storage. An object is created when a variable is defined or a new function is invoked. (2) An instance of a class. CRC. Command recognition character. CRCR. Conditional restart control record. See also conditional restart. create link pack area (CLPA). An option that is used during IPL to initialize the link pack pageable area. created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table. cross-memory linkage. A method for invoking a program in a different address space. The invocation is synchronous with respect to the caller. cross-system coupling facility (XCF). A component of z/OS that provides functions to support cooperation between authorized programs that run within a Sysplex. cross-system extended services (XES). A set of z/OS services that allow multiple instances of an application or subsystem, running on different systems in a Sysplex environment, to implement high-performance, high-availability data sharing by using a coupling facility. CS. Cursor stability. CSA. Common service area. CT. Cursor table.
current data. Data within a host structure that is current with (identical to) the data within the base table. current SQL ID. An ID that, at a single point in time, holds the privileges that are exercised when certain dynamic SQL statements run. The current SQL ID can be a primary authorization ID or a secondary authorization ID. current status rebuild. The second phase of restart processing during which the status of the subsystem is reconstructed from information on the log. cursor. A named control structure that an application program uses to point to a single row or multiple rows within some ordered set of rows of a result table. A cursor can be used to retrieve, update, or delete rows from a result table. cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process. cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors. cursor table (CT). The copy of the skeleton cursor table that is used by an executing application process. cycle. A set of tables that can be ordered so that each table is a descendent of the one before it, and the first table is a descendent of the last table. A self-referencing table is a cycle with a single member.
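To illustrate the correlated subquery entry earlier in this section, the following statement lists employees who earn more than the average salary of their own department. This sketch is not part of the original glossary; it assumes the DB2 sample employee table DSN8810.EMP with columns LASTNAME, SALARY, and WORKDEPT:

   SELECT LASTNAME, SALARY
     FROM DSN8810.EMP X
     WHERE SALARY > (SELECT AVG(SALARY)
                       FROM DSN8810.EMP
                       WHERE WORKDEPT = X.WORKDEPT);

The subquery is correlated because it refers to X.WORKDEPT, a value from the row that the outer query is currently evaluating, so it is conceptually re-evaluated for each outer row.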
D
DAD. See document access definition. disk. A direct-access storage device that records data magnetically.
database. A collection of tables, or a collection of table spaces and index spaces. database access thread. A thread that accesses data at the local subsystem on behalf of a remote subsystem. database administrator (DBA). An individual who is responsible for designing, developing, operating, safeguarding, maintaining, and using a database.
database alias. The name of the target server if different from the location name. The database alias name is used to provide the name of the database server as it is known to the network. When a database alias name is defined, the location name is used by the application to reference the server, but the database alias name is used to identify the database server to be accessed. Any fully qualified object names within any SQL statements are not modified and are sent unchanged to the database server. database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, relationships, check constraints, and triggers. A DBD also contains information about accessing tables in the database. database exception status. An indication that something is wrong with a database. All members of a data sharing group must know and share the exception status of databases.
Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS. data mart. A small data warehouse that applies to a single department or team. See also data warehouse. data mining. The process of collecting critical business information from a data warehouse, correlating it, and uncovering associations, patterns, and trends. data partition. A VSAM data set that is contained within a partitioned table space. data-partitioned secondary index (DPSI). A secondary index that is partitioned. The index is partitioned according to the underlying data. data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data. data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity. data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group. data source. A local or remote relational or non-relational data manager that is capable of supporting data access via an ODBC driver that supports the ODBC APIs. In the case of DB2 UDB for z/OS, the data sources are always relational database managers.
data space. In releases prior to DB2 UDB for z/OS, Version 8, a range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs. data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions. data warehouse. A system that provides critical business information to an organization. The data warehouse system cleanses the data for accuracy and currency, and then presents the data to decision makers so that they can interpret and use it effectively and efficiently. date. A three-part value that designates a day, month, and year. date duration. A decimal integer that represents a number of years, months, and days. datetime value. A value of the data type DATE, TIME, or TIMESTAMP. DBA. Database administrator.
DBCLOB. Double-byte character large object. DBCS. Double-byte character set. DBD. Database descriptor. DBID. Database identifier. DBMS. Database management system. DBRM. Database request module. DB2 catalog. Tables that are maintained by DB2 and contain descriptions of DB2 objects, such as tables, views, and indexes. DB2 command. An instruction to the DB2 subsystem that a user enters to start or stop DB2, to display information on current users, to start or stop databases, to display information on the status of databases, and so on. DB2 for VSE & VM. The IBM DB2 relational database management system for the VSE and VM operating systems. DB2I. DB2 Interactive. DB2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation. DB2I Kanji Feature. The tape that contains the panels and jobs that allow a site to display DB2I panels in Kanji. DB2 PM. DB2 Performance Monitor. DB2 thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. DCLGEN. Declarations generator. DDF. Distributed data facility. ddname. Data definition name. deadlock. Unresolvable contention for the use of a resource, such as a table or an index. declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand. declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and
can be used only by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table. default value. A predetermined value, attribute, or option that is assumed when no other is explicitly specified. deferred embedded SQL. SQL statements that are neither fully static nor fully dynamic. Like static statements, they are embedded within an application, but like dynamic statements, they are prepared during the execution of the application. deferred write. The process of asynchronously writing changed data pages to disk. degree of parallelism. The number of concurrently executed operations that are initiated to process a query. delete-connected. A table that is a dependent of table P or a dependent of a table to which delete operations from table P cascade. delete hole. The location on which a cursor is positioned when a row in a result table is refetched and the row no longer exists on the base table, because another cursor deleted the row between the time the cursor first included the row in the result table and the time the cursor tried to refetch it. delete rule. The rule that tells DB2 what to do to a dependent row when a parent row is deleted. For each relationship, the rule might be CASCADE, RESTRICT, SET NULL, or NO ACTION. delete trigger. A trigger that is defined with the triggering SQL operation DELETE. delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_). delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in DB2 syntax diagrams. denormalization. A key step in the task of building a physical relational database design. Denormalization is the intentional duplication of columns in multiple tables, and the consequence is increased data redundancy. Denormalization is sometimes necessary to minimize performance problems. Contrast with normalization. dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See also parent row, parent table, parent table space.
dependent row. A row that contains a foreign key that matches the value of a primary key in the parent row. dependent table. A table that is a dependent in at least one referential constraint. DES-based authenticator. An authenticator that is generated using the DES algorithm. descendent. An object that is a dependent of an object or is the dependent of a descendent of an object. descendent row. A row that is dependent on another row, or a row that is a descendent of a dependent row. descendent table. A table that is a dependent of another table, or a table that is a descendent of a dependent table. deterministic function. A user-defined function whose result is dependent on the values of the input arguments. That is, successive invocations with the same input values produce the same answer. Sometimes referred to as a not-variant function. Contrast this with an nondeterministic function (sometimes called a variant function), which might not always produce the same result for the same inputs. DFP. Data Facility Product (in z/OS). DFSMS. Data Facility Storage Management Subsystem (in z/OS). Also called Storage Management Subsystem (SMS).
DFSMSdss. The data set services (dss) component of DFSMS (in z/OS). DFSMShsm. The hierarchical storage manager (hsm) component of DFSMS (in z/OS). dimension. A data category such as time, products, or markets. The elements of a dimension are referred to as members. Dimensions offer a very concise, intuitive way of organizing and selecting data for retrieval, exploration, and analysis. See also dimension table. dimension table. The representation of a dimension in a star schema. Each row in a dimension table represents all of the attributes for a particular member of the dimension. See also dimension, star schema, and star join. directory. The DB2 system database that contains internal objects such as database descriptors and skeleton cursor tables. distinct predicate. In SQL, a predicate that ensures that two row values are not equal, and that both row values are not null. distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes. distributed data. Data that resides on a DBMS other than the local system. distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another relational database management system. Distributed Relational Database Architecture (DRDA). A connection protocol for distributed relational database processing that is used by IBM's relational database products. DRDA includes protocols for communication between an application and a remote relational database management system, and for communication between relational database management systems. See also DRDA access. DL/I. Data Language/I. DNS. Domain name server. document access definition (DAD). Used to define the indexing scheme for an XML column or the mapping scheme of an XML collection. It can be used to enable an XML Extender column of an XML collection, which is XML formatted. domain. The set of valid values for an attribute. domain name. The name by which TCP/IP applications refer to a TCP/IP host within a TCP/IP network. domain name server (DNS). A special TCP/IP network server that manages a distributed directory that is used to map TCP/IP host names to IP addresses. double-byte character large object (DBCLOB). A sequence of bytes representing double-byte characters where the size of the values can be up to 2 GB. In general, DBCLOB values are used whenever a double-byte character string might exceed the limits of the VARGRAPHIC type. double-byte character set (DBCS). A set of characters, which are used by national languages such as Japanese and Chinese, that have more symbols than can be represented by a single byte. Each character is 2 bytes in length. Contrast with single-byte character set and multibyte character set. double-precision floating point number. A 64-bit approximate representation of a real number. downstream. The set of nodes in the syncpoint tree that is connected to the local DBMS as a participant in the execution of a two-phase commit.
DRDA. Distributed Relational Database Architecture. DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access. DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names. duration. A number that represents an interval of time. See also date duration, labeled duration, and time duration.
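To illustrate the DRDA access entry above with a sketch that is not part of the original glossary (the location name SANJOSE and the sample table DSN8810.EMP are assumed placeholders), an application can either connect explicitly or identify the server through a three-part name:

   CONNECT TO SANJOSE;
   SELECT LASTNAME FROM DSN8810.EMP WHERE EMPNO = '000010';

   SELECT LASTNAME FROM SANJOSE.DSN8810.EMP WHERE EMPNO = '000010';

The first pair of statements uses the SQL CONNECT statement; the final statement reaches the same server through the three-part name SANJOSE.DSN8810.EMP.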
dynamic cursor. A named control structure that an application program uses to change the size of the result table and the order of its rows after the cursor is opened. Contrast with static cursor. dynamic dump. A dump that is issued during the execution of a program, usually under the control of that program. dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program's execution. dynamic statement cache pool. A cache, located above the 2-GB storage line, that holds dynamic statements.

E

EA-enabled table space. A table space or index space that is enabled for extended addressability and that contains individual partitions (or pieces, for LOB table spaces) that are greater than 4 GB. EB. See exabyte. EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the z/OS, VM, VSE, and iSeries environments. Contrast with ASCII and Unicode. e-business. The transformation of key business processes through the use of Internet technologies. EDM pool. A pool of main storage that is used for database descriptors, application plans, authorization cache, and application packages. EID. Event identifier. embedded SQL. SQL statements that are coded within an application program. See static SQL. enclave. In Language Environment, an independent collection of routines, one of which is designated as the main routine. An enclave is similar to a program or run unit. encoding scheme. A set of rules to represent character data (ASCII, EBCDIC, or Unicode). entity. A significant object of interest to an organization. enumerated list. A set of DB2 objects that are defined with a LISTDEF utility control statement in which pattern-matching characters (*, %, _ or ?) are not used. environment. A collection of names of logical and physical resources that are used to support the performance of a function. environment handle. In DB2 ODBC, the data object that contains global information regarding the state of the application. An environment handle must be allocated before a connection handle can be allocated. Only one environment handle can be allocated per application. EOM. End of memory. EOT. End of task. equijoin. A join operation in which the join-condition has the form expression = expression. error page range. A range of pages that are considered to be physically damaged. DB2 does not allow users to access any pages that fall within this range. escape character. The symbol that is used to enclose an SQL delimited identifier. The escape character is the double quotation mark ("), except in COBOL applications, where the user assigns the symbol, which is either a double quotation mark or an apostrophe ('). ESDS. Entry sequenced data set. ESMT. External subsystem module table (in IMS). EUR. IBM European Standards. exabyte. For processor, real and virtual storage capacities and channel volume: 1 152 921 504 606 846 976 bytes, or 2 to the 60th power. exception table. A table that holds rows that violate referential constraints or check constraints that the CHECK DATA utility finds. exclusive lock. A lock that prevents concurrently executing application processes from reading or changing data. Contrast with share lock. executable statement. An SQL statement that can be embedded in an application program, dynamically prepared and executed, or issued interactively.
execution context. In SQLJ, a Java object that can be used to control the execution of SQL statements. exit routine. A user-written (or IBM-provided default) program that receives control from DB2 to perform specific functions. Exit routines run as extensions of DB2. expanding conversion. A process that occurs when the length of a converted string is greater than that of the source string. For example, this process occurs when an ASCII mixed-data string that contains DBCS characters is converted to an EBCDIC mixed-data string; the converted string is longer because of the addition of shift codes. explicit hierarchical locking. Locking that is used to make the parent-child relationship between resources known to IRLM. This kind of locking avoids global locking overhead when no inter-DB2 interest exists on a resource. exposed name. A correlation name or a table or view name for which a correlation name is not specified. Names that are specified in a FROM clause are exposed or non-exposed. expression. An operand or a collection of operators and operands that yields a single value. extended recovery facility (XRF). A facility that minimizes the effect of failures in z/OS, VTAM , the host processor, or high-availability applications during sessions between high-availability applications and designated terminals. This facility provides an alternative subsystem to take over sessions from the failing subsystem. Extensible Markup Language (XML). A standard metalanguage for defining markup languages that is a subset of Standardized General Markup Language (SGML). The less complex nature of XML makes it easier to write applications that handle document types, to author and manage structured information, and to transmit and share structured information across diverse computing environments. external function. A function for which the body is written in a programming language that takes scalar argument values and produces a scalar result for each invocation. Contrast with sourced function, built-in function, and SQL function. external procedure. A user-written application program that can be invoked with the SQL CALL statement, which is written in a programming language. Contrast with SQL procedure. external routine. A user-defined function or stored procedure that is based on code that is written in an external programming language.
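To illustrate the external function entry above, the following is a minimal sketch of a CREATE FUNCTION statement for an external scalar function; the function name CENTIGRADE, the load module name CENTIGMOD, and the exact option list are hypothetical and vary by installation:

   -- Register an external scalar function whose body is the
   -- load module CENTIGMOD, written in C
   CREATE FUNCTION CENTIGRADE(FLOAT)
     RETURNS FLOAT
     EXTERNAL NAME 'CENTIGMOD'
     LANGUAGE C
     PARAMETER STYLE DB2SQL
     NO SQL
     DETERMINISTIC;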
external subsystem module table (ESMT). In IMS, the table that specifies which attachment modules must be loaded.
F
failed member state. A state of a member of a data sharing group. When a member fails, the XCF permanently records the failed member state. This state usually means that the member's task, address space, or z/OS system terminated before the state changed from active to quiesced.
fallback. The process of returning to a previous release of DB2 after attempting or completing migration to a current release.
false global lock contention. A contention indication from the coupling facility when multiple lock names are hashed to the same indicator and when no real contention exists.
fan set. A direct physical access path to data, which is provided by an index, hash, or link; a fan set is the means by which the data manager supports the ordering of data.
federated database. The combination of a DB2 Universal Database server (in Linux, UNIX, and Windows environments) and multiple data sources to which the server sends queries. In a federated database system, a client application can use a single SQL statement to join data that is distributed across multiple database management systems and can view the data as if it were local.
fetch orientation. The specification of the desired placement of the cursor as part of a FETCH statement (for example, BEFORE, AFTER, NEXT, PRIOR, CURRENT, FIRST, LAST, ABSOLUTE, and RELATIVE).
field procedure. A user-written exit routine that is designed to receive a single value and transform (encode or decode) it in any way the user can specify.
filter factor. A number between zero and one that estimates the proportion of rows in a table for which a predicate is true.
fixed-length string. A character or graphic string whose length is specified and cannot be changed. Contrast with varying-length string.
FlashCopy. A function on the IBM Enterprise Storage Server that can create a point-in-time copy of data while an application is running.
foreign key. A column or set of columns in a dependent table of a constraint relationship. The key must have the same number of columns, with the same descriptions, as the primary key of the parent table. Each foreign key value must either match a parent key value in the related parent table or be null.
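For example, a foreign key might be defined as follows (a minimal sketch; the table and column names DEPT and EMP are hypothetical):

   CREATE TABLE DEPT
     (DEPTNO  CHAR(3) NOT NULL,
      PRIMARY KEY (DEPTNO));

   -- EMP is the dependent table; each WORKDEPT value must match
   -- a DEPTNO value in DEPT or be null
   CREATE TABLE EMP
     (EMPNO     CHAR(6) NOT NULL,
      WORKDEPT  CHAR(3),
      FOREIGN KEY (WORKDEPT) REFERENCES DEPT (DEPTNO));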
forest. An ordered set of subtrees of XML nodes.
forget. In a two-phase commit operation, (1) the vote that is sent to the prepare phase when the participant has not modified any data. The forget vote allows a participant to release locks and forget about the logical unit of work. This is also referred to as the read-only vote. (2) The response to the committed request in the second phase of the operation.
forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records.
free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space.
full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join.
fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.
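As an illustration of the fullselect entry above, a minimal sketch (the table names ACTIVE_EMP and RETIRED_EMP are hypothetical):

   -- A fullselect that combines two subselects with a set operator
   SELECT EMPNO FROM ACTIVE_EMP
   UNION
   SELECT EMPNO FROM RETIRED_EMP;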
fully escaped mapping. A mapping from an SQL identifier to an XML name when the SQL identifier is a column name.
function. A mapping, which is embodied as a program (the function body) that is invocable by means of zero or more input values (arguments) to a single value (the result). See also aggregate function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See also built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)
function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement.
function implementer. The authorization ID of the owner of the function program and function package.
function package. A package that results from binding the DBRM for a function program.
function package owner. The authorization ID of the user who binds the function program's DBRM into a function package.
function resolution. The process, internal to the DBMS, by which a function invocation is bound to a particular function instance. This process uses the function name, the data types of the arguments, and a list of the applicable schema names (called the SQL path) to make the selection. This process is sometimes called function selection.
function selection. See function resolution.
function signature. The logical concatenation of a fully qualified function name with the data types of all of its parameters.
G
GB. Gigabyte (1 073 741 824 bytes).
GBP. Group buffer pool.
GBP-dependent. The status of a page set or page set partition that is dependent on the group buffer pool. Either read/write interest is active among DB2 subsystems for this page set, or the page set has changed pages in the group buffer pool that have not yet been cast out to disk.
generalized trace facility (GTF). A z/OS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts.
generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment.
getpage. An operation in which DB2 accesses a data page.
global lock. A lock that provides concurrency control within and among DB2 subsystems. The scope of the lock is across all DB2 subsystems of a data sharing group.
global lock contention. Conflicts on locking requests between different DB2 members of a data sharing group when those members are trying to serialize shared resources.
governor. See resource limit facility.
graphic string. A sequence of DBCS characters.
gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space.
group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members.
group buffer pool duplexing. The ability to write data to two instances of a group buffer pool structure: a primary group buffer pool and a secondary group buffer pool. z/OS publications refer to these instances as the "old" (for primary) and "new" (for secondary) structures.
group level. The release level of a data sharing group, which is established when the first member migrates to a new release.
group name. The z/OS XCF identifier for a data sharing group.
group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area.
GTF. Generalized trace facility.
H
handle. In DB2 ODBC, a variable that refers to a data structure and associated resources. See also statement handle, connection handle, and environment handle.
help panel. A screen of information that presents tutorial text to assist a user at the workstation or terminal.
heuristic damage. The inconsistency in data between one or more participants that results when a heuristic decision to resolve an indoubt LUW at one or more participants differs from the decision that is recorded at the coordinator.
heuristic decision. A decision that forces indoubt resolution at a participant by means other than automatic resynchronization between coordinator and participant.
hole. A row of the result table that cannot be accessed because of a delete or an update that has been performed on the row. See also delete hole and update hole.
home address space. The area of storage that z/OS currently recognizes as dispatched.
host. The set of programs and resources that are available on a given TCP/IP instance.
host expression. A Java variable or expression that is referenced by SQL clauses in an SQLJ application program.
host identifier. A name that is declared in the host program.
host language. A programming language in which you can embed SQL statements.
host program. An application program that is written in a host language and that contains embedded SQL statements.
host structure. In an application program, a structure that is referenced by embedded SQL statements.
host variable. In an application program, an application variable that is referenced by embedded SQL statements.
host variable array. An array of elements, each of which corresponds to a value for a column. The dimension of the array determines the maximum number of rows for which the array can be used.
HSM. Hierarchical storage manager.
HTML. Hypertext Markup Language, a standard method for presenting Web data to users.
HTTP. Hypertext Transfer Protocol, a communication protocol that the Web uses.
I
ICF. Integrated catalog facility.
IDCAMS. An IBM program that is used to process access method services commands. It can be invoked as a job or jobstep, from a TSO terminal, or from within a user's application program.
IDCAMS LISTCAT. A facility for obtaining information that is contained in the access method services catalog.
identify. A request that an attachment service program in an address space that is separate from DB2 issues through the z/OS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2.
identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a unique index that contains only the identity column. A table can have no more than one identity column.
IFCID. Instrumentation facility component identifier.
IFI. Instrumentation facility interface.
IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions.
IFP. IMS Fast Path.
image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy).
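As an illustration of the identity column entry above, a minimal sketch (the table name ORDERS is hypothetical; GENERATED ALWAYS is one of the available options):

   CREATE TABLE ORDERS
     (ORDERNO  INTEGER GENERATED ALWAYS AS IDENTITY,
      ITEM     VARCHAR(40));

   -- Ensure uniqueness of the generated values with a unique index
   -- that contains only the identity column
   CREATE UNIQUE INDEX ORDIX ON ORDERS (ORDERNO);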
implied forget. In the presumed-abort protocol, an implied response of forget to the second-phase committed request from the coordinator. The response is implied when the participant responds to any subsequent request from the coordinator.
IMS. Information Management System.
IMS attachment facility. A DB2 subcomponent that uses z/OS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment.
IMS DB. Information Management System Database.
IMS TM. Information Management System Transaction Manager.
in-abort. A status of a unit of recovery. If DB2 fails after a unit of recovery begins to be rolled back, but before the process is completed, DB2 continues to back out the changes during restart.
in-commit. A status of a unit of recovery. If DB2 fails after beginning its phase 2 commit processing, it "knows," when restarted, that changes made to data are consistent. Such units of recovery are termed in-commit.
independent. An object (row, table, or table space) that is neither a parent nor a dependent of another object.
index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table.
index-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are specified on the CREATE INDEX statement. Partition limits are saved in the LIMITKEY column of the SYSIBM.SYSINDEXPART catalog table.
index key. The set of columns in a table that is used to determine the order of index entries.
index partition. A VSAM data set that is contained within a partitioning index space.
index space. A page set that is used to store the entries of one index.
indicator column. A 4-byte value that is stored in a base table in place of a LOB column.
indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable.
indoubt. A status of a unit of recovery. If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this decision, the status of the unit of recovery is indoubt until DB2 obtains this information from the coordinator. More than one unit of recovery can be indoubt at restart.
indoubt resolution. The process of resolving the status of an indoubt logical unit of work to either the committed or the rollback state.
inflight. A status of a unit of recovery. If DB2 fails before its unit of recovery completes phase 1 of the commit process, it merely backs out the updates of its unit of recovery at restart. These units of recovery are termed inflight.
inheritance. The passing downstream of class resources or attributes from a parent class in the class hierarchy to a child class.
initialization file. For DB2 ODBC applications, a file containing values that can be set to adjust the performance of the database manager.
inline copy. A copy that is produced by the LOAD or REORG utility. The data set that the inline copy produces is logically equivalent to a full image copy that is produced by running the COPY utility with read-only access (SHRLEVEL REFERENCE).
inner join. The result of a join operation that includes only the matched rows of both tables that are being joined. See also join.
inoperative package. A package that cannot be used because one or more user-defined functions or procedures that the package depends on were dropped. Such a package must be explicitly rebound. Contrast with invalid package.
insensitive cursor. A cursor that is not sensitive to inserts, updates, or deletes that are made to the underlying rows of a result table after the result table has been materialized.
insert trigger. A trigger that is defined with the triggering SQL operation INSERT.
install. The process of preparing a DB2 subsystem to operate as a z/OS subsystem.
installation verification scenario. A sequence of operations that exercises the main DB2 functions and tests whether DB2 was correctly installed.
instrumentation facility component identifier (IFCID). A value that names and identifies a trace record of an event that can be traced. As a parameter on the START TRACE and MODIFY TRACE commands, it specifies that the corresponding event is to be traced.
instrumentation facility interface (IFI). A programming interface that enables programs to obtain online trace data about DB2, to submit DB2 commands, and to pass data to DB2.
Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services in a z/OS environment.
inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members.
intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location.
internationalization. The support for an encoding scheme that is able to represent the code points of characters from many different geographies and languages. To support all geographies, the Unicode standard requires more than 1 byte to represent a single character. See also Unicode.
internal resource lock manager (IRLM). A z/OS subsystem that DB2 uses to control communication and database locking.
International Organization for Standardization. An international body charged with creating standards to facilitate the exchange of goods and services as well as cooperation in intellectual, scientific, technological, and economic activity.
invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package.
invariant character set. (1) A character set, such as the syntactic character set, whose code point assignments do not change from code page to code page. (2) A minimum set of characters that is available as part of all character sets.
IP address. A 4-byte value that uniquely identifies a TCP/IP host.
IRLM. Internal resource lock manager.
ISO. International Organization for Standardization.
isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.
ISPF. Interactive System Productivity Facility.
ISPF/PDF. Interactive System Productivity Facility/Program Development Facility.
iterator. In SQLJ, an object that contains the result set of a query. An iterator is equivalent to a cursor in other host languages.
iterator declaration clause. In SQLJ, a statement that generates an iterator declaration class. An iterator is an object of an iterator declaration class.
J
Japanese Industrial Standard. An encoding scheme that is used to process Japanese characters.
JAR. Java Archive.
Java Archive (JAR). A file format that is used for aggregating many files into a single file.
JCL. Job control language.
JDBC. A Sun Microsystems database application programming interface (API) for Java that allows programs to access database management systems by using callable SQL. JDBC does not require the use of an SQL preprocessor. In addition, JDBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time.
JES. Job Entry Subsystem.
JIS. Japanese Industrial Standard.
job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job's requirements.
Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs.
join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.
K
KB. Kilobyte (1024 bytes).
Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography.
Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal's identity, a session key, a timestamp, and other information, which is sealed using the target's secret key.
key. A column or an ordered collection of columns that is identified in the description of a table, index, or referential constraint. The same column can be part of more than one key.
key-sequenced data set (KSDS). A VSAM file or data set whose records are loaded in key sequence and controlled by an index.
keyword. In SQL, a name that identifies an option that is used in an SQL statement.
KSDS. Key-sequenced data set.
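As an illustration of the join and equijoin entries above, a minimal sketch (the table names EMP and DEPT are hypothetical):

   -- An equijoin: the join condition has the form expression = expression
   SELECT E.EMPNO, D.DEPTNAME
     FROM EMP E, DEPT D
    WHERE E.WORKDEPT = D.DEPTNO;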
L
labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds.
large object (LOB). A sequence of bytes representing bit data, single-byte characters, double-byte characters, or a mixture of single- and double-byte characters. A LOB can be up to 2 GB minus 1 byte in length. See also BLOB, CLOB, and DBCLOB.
last agent optimization. An optimized commit flow for either presumed-nothing or presumed-abort protocols in which the last agent, or final participant, becomes the commit coordinator. This flow saves at least one message.
latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources.
LCID. Log control interval definition.
LDS. Linear data set.
leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page.
left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join.
limit key. The highest value of the index key for a partition.
linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage.
linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses.
link-edit. The action of creating a loadable computer program using a linkage editor.
list. A type of object, which DB2 utilities can process, that identifies multiple table spaces, multiple index spaces, or both. A list is defined with the LISTDEF utility control statement.
list structure. A coupling facility structure that lets data be shared and manipulated as elements of a queue.
LLE. Load list element.
L-lock. Logical lock.
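As an illustration of the labeled duration entry above, a minimal sketch (the table name ORDERS and column DUEDATE are hypothetical):

   -- 30 DAYS and 2 MONTHS are labeled durations
   SELECT ORDERNO
     FROM ORDERS
    WHERE DUEDATE < CURRENT DATE + 30 DAYS;

   UPDATE ORDERS
      SET DUEDATE = DUEDATE + 2 MONTHS;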
load list element. A z/OS control block that controls the loading and deleting of a particular load module based on entry point names.
load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor.
LOB. Large object.
LOB locator. A mechanism that allows an application program to manipulate a large object value in the database system. A LOB locator is a fullword integer value that represents a single LOB value. An application program retrieves a LOB locator into a host variable and can then apply SQL operations to the associated LOB value using the locator.
LOB lock. A lock on a LOB value.
LOB table space. A table space in an auxiliary table that contains all the data for a particular LOB column in the related base table.
local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote.
locale. The definition of a subset of a user's environment that combines a CCSID and characters that are defined for a specific language and country.
local lock. A lock that provides intra-DB2 concurrency control, but not inter-DB2 concurrency control; that is, its scope is a single DB2.
local subsystem. The unique relational DBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).
location. The unique name of a database server. An application uses the location name to access a DB2 database server. A database alias can be used to override the location name when accessing a remote server.
location alias. Another name by which a database server identifies itself in the network. Applications can use this name to access a DB2 database server.
lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM.
lock duration. The interval over which a DB2 lock is held.
lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit.
locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.
lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding.
lock object. The resource that is controlled by a DB2 lock.
lock promotion. The process of changing the size or mode of a DB2 lock to a higher, more restrictive level.
lock size. The amount of data that is controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space.
lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources.
log. A collection of records that describe the events that occur during DB2 execution and that indicate their sequence. The information thus recorded is used for recovery in the event of a failure during DB2 execution.
log control interval definition. A suffix of the physical log record that tells how record segments are placed in the physical control interval.
logical claim. A claim on a logical partition of a nonpartitioning index.
logical data modeling. The process of documenting the comprehensive business information requirements in an accurate and consistent format. Data modeling is the first task of designing a database.
logical drain. A drain on a logical partition of a nonpartitioning index.
logical index partition. The set of all keys that reference the same data partition.
logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock).
logically complete. A state in which the concurrent copy process is finished with the initialization of the target objects that are being copied. The target objects are available for update.
logical page list (LPL). A list of pages that are in error and that cannot be referenced by applications until the pages are recovered. The page is in logical error because the actual media (coupling facility or disk) might not contain any errors. Usually a connection to the media has been lost.
logical partition. A set of key or RID pairs in a nonpartitioning index that are associated with a particular partition.
logical recovery pending (LRECP). The state in which the data and the index keys that reference the data are inconsistent.
logical unit (LU). An access point through which an application program accesses the SNA network in order to communicate with another application program.
logical unit of work (LUW). The processing that a program performs between synchronization points.
logical unit of work identifier (LUWID). A name that uniquely identifies a thread within a network. This name consists of a fully-qualified LU network name, an LUW instance number, and an LUW sequence number.
log initialization. The first phase of restart processing during which DB2 attempts to locate the current end of the log.
log record header (LRH). A prefix, in every logical record, that contains control information.
log record sequence number (LRSN). A unique identifier for a log record that is associated with a data sharing member. DB2 uses the LRSN for recovery in the data sharing environment.
log truncation. A process by which an explicit starting RBA is established. This RBA is the point at which the next byte of log data is to be written.
LPL. Logical page list.
LRECP. Logical recovery pending.
LRH. Log record header.
LRSN. Log record sequence number.
LU. Logical unit.
LU name. Logical unit name, which is the name by which VTAM refers to a node in a network. Contrast with location name.
LUW. Logical unit of work.
LUWID. Logical unit of work identifier.
M
mapping table. A table that the REORG utility uses to map the associations of the RIDs of data records in the original copy and in the shadow copy. This table is created by the user.
mass delete. The deletion of all rows of a table.
master terminal. The IMS logical terminal that has complete control of IMS resources during online operations.
master terminal operator (MTO). See master terminal.
materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary.
materialized query table. A table that is used to contain information that is derived and can be summarized from one or more source tables.
MB. Megabyte (1 048 576 bytes).
MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2.
member name. The z/OS XCF identifier for a particular DB2 subsystem in a data sharing group.
menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.
MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table.
modeling database. A DB2 database that you create on your workstation that you use to model a DB2 UDB for z/OS subsystem, which can then be evaluated by the Index Advisor.
mode name. A VTAM name for the collection of physical and logical characteristics and attributes of a session.
modify locks. An L-lock or P-lock with a MODIFY attribute. A list of these active locks is kept at all times in the coupling facility lock structure. If the requesting DB2 subsystem fails, that DB2 subsystem's modify locks are converted to retained locks.
MPP. Message processing program (in IMS).
MTO. Master terminal operator.
multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode.
multidimensional analysis. The process of assessing and evaluating an enterprise on more than one level.
Multiple Virtual Storage. An element of the z/OS operating system. This element is also called the Base Control Program (BCP).
multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work.
multithreading. Multiple TCBs that are executing one copy of DB2 ODBC code concurrently (sharing a processor) or in parallel (on separate central processors).
must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity.
mutex. Pthread mutual exclusion; a lock. A Pthread mutex variable is used as a locking mechanism to allow serialization of critical sections of code by temporarily blocking the execution of all but one thread.
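As an illustration of the materialized query table entry above, a minimal sketch (the names SALESMQT and SALES are hypothetical, and the option list shown is one plausible combination for this DB2 level):

   CREATE TABLE SALESMQT (REGION, TOTAL) AS
     (SELECT REGION, SUM(AMOUNT)
        FROM SALES
       GROUP BY REGION)
     DATA INITIALLY DEFERRED
     REFRESH DEFERRED;

   -- Populate (refresh) the materialized query table
   REFRESH TABLE SALESMQT;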
N
negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.
nested table expression. A fullselect in a FROM clause (surrounded by parentheses).
network identifier (NID). The network ID that is assigned by IMS or CICS, or if the connection type is RRSAF, the RRS unit of recovery ID (URID).
NID. Network identifier.
nonleaf page. A page that contains keys and page numbers of other pages in the index (either leaf or nonleaf pages). Nonleaf pages never point to actual data.
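As an illustration of the nested table expression entry above, a minimal sketch (the table name EMP is hypothetical):

   -- The parenthesized fullselect in the FROM clause is a
   -- nested table expression
   SELECT AVGSAL
     FROM (SELECT AVG(SALARY) AS AVGSAL
             FROM EMP
            GROUP BY WORKDEPT) AS DEPTAVG;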
nonpartitioned index. An index that is not physically partitioned. Both partitioning indexes and secondary indexes can be nonpartitioned.
nonscrollable cursor. A cursor that can be moved only in a forward direction. Nonscrollable cursors are sometimes called forward-only cursors or serial cursors.
normalization. A key step in the task of building a logical relational database design. Normalization helps you avoid redundancies and inconsistencies in your data. An entity is normalized if it meets a set of constraints for a particular normal form (first normal form, second normal form, and so on). Contrast with denormalization.
nondeterministic function. A user-defined function whose result is not solely dependent on the values of the input arguments. That is, successive invocations with the same argument values can produce a different answer. This type of function is sometimes called a variant function. Contrast this with a deterministic function (sometimes called a not-variant function), which always produces the same result for the same inputs.
not-variant function. See deterministic function.
null terminator. For Unicode UCS-2 (wide) strings, the null terminator is a double-byte value (X'0000').
O
OASN (origin application schedule number). In IMS, a 4-byte number that is assigned sequentially to each IMS schedule since the last cold start of IMS. The OASN is used as an identifier for a unit of work. In an 8-byte format, the first 4 bytes contain the schedule number and the last 4 bytes contain the number of IMS sync points (commit points) during the current schedule. The OASN is part of the NID for an IMS connection.
OBID. Data object identifier.
ODBC. Open Database Connectivity.
ODBC driver. A dynamically-linked library (DLL) that implements ODBC function calls and interacts with a data source.
Open Database Connectivity (ODBC). A Microsoft database application programming interface (API) for C that allows access to database management systems by using callable SQL. ODBC does not require the use of an SQL preprocessor. In addition, ODBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time. This means that applications no longer need to be directly linked to the modules of all the database management systems that are supported.
ordinary identifier. An uppercase letter followed by zero or more characters, each of which is an uppercase letter, a digit, or the underscore character. An ordinary identifier must not be a reserved word.
ordinary token. A numeric constant, an ordinary identifier, a host identifier, or a keyword.
originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel.
OS/390. Operating System/390.
outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join.
overloaded function. A function name for which multiple function instances exist.
P
package. An object containing a set of SQL statements that have been statically bound and that is available for processing. A package is sometimes also called an application package.
package list. An ordered list of package names that may be used to extend an application plan.
package name. The name of an object that is created by a BIND PACKAGE or REBIND PACKAGE command. The object is a bound version of a database request module (DBRM). The name consists of a location name, a collection ID, a package ID, and a version ID.
page. A unit of storage within a table space (4 KB, 8 KB, 16 KB, or 32 KB) or index space (4 KB). In a table space, a page contains one or more rows of a table. In a LOB table space, a LOB value can span more than one page, but no more than one LOB value is stored on a page.
page set. Another way to refer to a table space or index space. Each page set consists of a collection of VSAM data sets.
page set recovery pending (PSRCP). A restrictive state of an index space. In this case, the entire page set must be recovered. Recovery of a logical part is prohibited.
panel. A predefined display image that defines the locations and characteristics of display fields on a display surface (for example, a menu panel).
parallel complex. A cluster of machines that work together to handle multiple transactions and applications.
parallel group. A set of consecutive operations that execute in parallel and that have the same number of parallel tasks.
parallel I/O processing. A form of I/O processing in which DB2 initiates multiple concurrent requests for a single user query and performs I/O processing concurrently (in parallel) on multiple data partitions.
parallelism assistant. In Sysplex query parallelism, a DB2 subsystem that helps to process parts of a parallel query that originates on another DB2 subsystem in the data sharing group.
parallelism coordinator. In Sysplex query parallelism, the DB2 subsystem from which the parallel query originates.
Parallel Sysplex. A set of z/OS systems that communicate and cooperate with each other through certain multisystem hardware components and software services to process customer workloads.
parallel task. The execution unit that is dynamically created to process a query in parallel. A parallel task is implemented by a z/OS service request block.
parameter marker. A question mark (?) that appears in a statement string of a dynamic SQL statement. The question mark can appear where a host variable could appear if the statement string were a static SQL statement.
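As an illustration of the parameter marker entry above, a minimal sketch of dynamic SQL (the statement name STMT and the host variables :STMTTXT and :EMPNO are hypothetical):

   -- :STMTTXT is a host variable containing the statement text
   -- 'UPDATE EMP SET SALARY = SALARY * 1.1 WHERE EMPNO = ?'
   EXEC SQL PREPARE STMT FROM :STMTTXT;

   -- The ? parameter marker receives the value of :EMPNO at
   -- execution time
   EXEC SQL EXECUTE STMT USING :EMPNO;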
parameter-name. An SQL identifier that designates a parameter in an SQL procedure or an SQL function.
parent key. A primary key or unique key in the parent table of a referential constraint. The values of a parent key determine the valid values of the foreign key in the referential constraint.
parent lock. For explicit hierarchical locking, a lock that is held on a resource that might have child locks that are lower in the hierarchy. A parent lock is usually the table space lock or the partition intent lock. See also child lock.
parent row. A row whose primary key value is the foreign key value of a dependent row.
parent table. A table whose primary key is referenced by the foreign key of a dependent table.
parent table space. A table space that contains a parent table. A table space containing a dependent of that table is a dependent table space.
participant. An entity other than the commit coordinator that takes part in the commit process. The term participant is synonymous with agent in SNA.
partition. A portion of a page set. Each partition corresponds to a single, independently extendable data set. Partitions can be extended to a maximum size of 1, 2, or 4 GB, depending on the number of partitions in the partitioned page set. All partitions of a given page set have the same maximum size.
partitioned data set (PDS). A data set in disk storage that is divided into partitions, which are called members. Each partition can contain a program, part of a program, or data. The term partitioned data set is synonymous with program library.
partitioned index. An index that is physically partitioned. Both partitioning indexes and secondary indexes can be partitioned.
partitioned page set. A partitioned table space or an index space. Header pages, space map pages, data pages, and index pages reference data only within the scope of the partition.
partitioned table space. A table space that is subdivided into parts (based on index key range), each of which can be processed independently by utilities.
partitioning index. An index in which the leftmost columns are the partitioning columns of the table. The index can be partitioned or nonpartitioned.
partition pruning. The removal from consideration of inapplicable partitions through setting up predicates in a query on a partitioned table to access only certain partitions to satisfy the query.
partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation.
path. See SQL path.
PCT. Program control table (in CICS).
PDS. Partitioned data set.
piece. A data set of a nonpartitioned page set.
physical claim. A claim on an entire nonpartitioning index.
physical consistency. The state of a page that is not in a partially changed state.
physical drain. A drain on an entire nonpartitioning index.
physical lock (P-lock). A type of lock that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock).
physical lock contention. Conflicting states of the requesters for a physical lock. See also negotiable lock.
physically complete. The state in which the concurrent copy process is completed and the output data set has been created.
plan. See application plan.
plan allocation. The process of allocating DB2 resources to a plan in preparation for execution.
plan member. The bound copy of a DBRM that is identified in the member clause.
plan name. The name of an application plan.
plan segmentation. The dividing of each plan into sections. When a section is needed, it is independently brought into the EDM pool.
P-lock. Physical lock.
PLT. Program list table (in CICS).
point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.
policy. See CFRM policy.
Portable Operating System Interface (POSIX). The IEEE operating system interface standard, which defines the Pthread standard of threading. See also Pthread.
POSIX. Portable Operating System Interface.
postponed abort UR. A unit of recovery that was inflight or in-abort, was interrupted by system failure or cancellation, and did not complete backout during restart.
PPT. (1) Processing program table (in CICS). (2) Program properties table (in z/OS).
precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL terms.
precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process.
predicate. An element of a search condition that expresses or implies a comparison operation.
prefix. A code at the beginning of a message or record.
preformat. The process of preparing a VSAM ESDS for DB2 use, by writing specific data patterns.
prepare. The first phase of a two-phase commit process in which all participants are requested to prepare for commit.
prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement.
presumed-abort. An optimization of the presumed-nothing two-phase commit protocol that reduces the number of recovery log records, the duration of state maintenance, and the number of messages between coordinator and participant. The optimization also modifies the indoubt resolution responsibility.
presumed-nothing. The standard two-phase commit protocol that defines coordinator and participant responsibilities, relative to logical unit of work states, recovery logging, and indoubt resolution.
primary authorization ID. The authorization ID that is used to identify the application process to DB2.
primary group buffer pool. For a duplexed group buffer pool, the structure that is used to maintain the coherency of cached data. This structure is used for page registration and cross-invalidation. The z/OS equivalent is old structure. Compare with secondary group buffer pool.
primary index. An index that enforces the uniqueness of a primary key.
primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.
principal. An entity that can communicate securely with another entity. In Kerberos, principals are represented as entries in the Kerberos registry database and include users, servers, computers, and others.
principal name. The name by which a principal is known to the DCE security services.
private connection. A communications connection that is specific to DB2.
private protocol access. A method of accessing distributed data by which you can direct a query to another DB2 system. Contrast with DRDA access.
private protocol connection. A DB2 private connection of the application process. See also private connection.
privilege. The capability of performing a specific function, sometimes on a specific object. The types of privileges are:
v explicit privileges, which have names and are held as the result of SQL GRANT and REVOKE statements. For example, the SELECT privilege.
v implicit privileges, which accompany the ownership of an object, such as the privilege to drop a synonym that one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job.
privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges that are recorded for that ID in the DB2 catalog.
process. In DB2, the unit to which DB2 allocates resources and locks. Sometimes called an application process, a process involves the execution of one or more programs. The execution of an SQL statement is always associated with some process. The means of initiating and terminating a process are dependent on the environment.
program. A single, compilable collection of executable statements in a programming language.
program temporary fix (PTF). A solution or bypass of a problem that is diagnosed as a result of a defect in a current unaltered release of a licensed program. An authorized program analysis report (APAR) fix is corrective service for an existing problem. A PTF is preventive service for problems that might be encountered by other users of the product. A PTF is temporary, because a permanent fix is usually not incorporated into the product until its next release.
protected conversation. A VTAM conversation that supports two-phase commit flows.
PSRCP. Page set recovery pending.
PTF. Program temporary fix.
Pthread. The POSIX threading standard model for splitting an application into subtasks. The Pthread standard includes functions for creating threads, terminating threads, synchronizing threads through locking, and other thread control facilities.
Q
QMF. Query Management Facility.
QSAM. Queued sequential access method.
query. A component of certain SQL statements that specifies a result table.
query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query.
query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism.
query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query.
queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing, and output data blocks await transfer to auxiliary storage or to an output device.
quiesce point. A point at which data is consistent as a result of running the DB2 QUIESCE utility.
quiesced member state. A state of a member of a data sharing group. An active member becomes quiesced when a STOP DB2 command takes effect without a failure. If the member's task, address space, or z/OS system fails before the command takes effect, the member state is failed.
R
RACF. Resource Access Control Facility, which is a component of the z/OS Security Server.
RAMAC. IBM family of enterprise disk storage system products. RBA. Relative byte address. RCT. Resource control table (in CICS attachment facility). RDB. Relational database. RDBMS. Relational database management system. RDBNAM. Relational database name. RDF. Record definition field. read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process. rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index. rebuild. The process of reallocating a coupling facility structure. For the shared communications area (SCA) and lock structure, the structure is repopulated; for the group buffer pool, changed pages are usually cast out to disk, and the new structure is populated only with changed pages that were not successfully cast out. RECFM. Record format. record. The storage representation of a row or other data. record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table. Compare with row ID.
columns, the record is a fixed-length record. If one or more columns are varying-length columns, the record is a varying-length column. Recoverable Resource Manager Services attachment facility (RRSAF). A DB2 subcomponent that uses Resource Recovery Services to coordinate resource commitment between DB2 and all other resource managers that also use RRS in a z/OS system. recovery. The process of rebuilding databases after a system failure. recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution. recovery manager. (1) A subcomponent that supplies coordination services that control the interaction of DB2 resource managers during commit, abort, checkpoint, and restart processes. The recovery manager also supports the recovery mechanisms of other subsystems (for example, IMS) by acting as a participant in the other subsystems process for protecting data that has reached a point of consistency. (2) A coordinator or a participant (or both), in the execution of a two-phase commit, that can access a recovery log that maintains the state of the logical unit of work and names the immediate upstream coordinator and downstream participants. recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered. recovery token. An identifier for an element that is used in recovery (for example, NID or URID). RECP. Recovery pending. redo. A state of a unit of recovery that indicates that changes are to be reapplied to the disk media to ensure data integrity. reentrant. Executable code that can reside in storage as one shared copy for all threads. Reentrant code is not self-modifying and provides separate storage areas for each thread. Reentrancy is a compiler and operating system concept, and reentrancy alone is not enough to guarantee logically consistent results when multithreading. See also threadsafe. referential constraint. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the primary key of a designated table. referential integrity. The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of referential constraints on all operations that change the data in a table on which the referential constraints are defined.
| record identifier (RID) pool. An area of main storage | that is used for sorting record identifiers during | list-prefetch processing.
record length. The sum of the length of all the columns in a table, which is the length of the data as it is physically stored in the database. Records can be fixed length or varying length, depending on how the columns are defined. If all columns are fixed-length
1464
Administration Guide
referential structure. A set of tables and relationships that includes at least one table and, for every table in the set, all the relationships in which that table participates and all the tables to which it is related.
reoptimization, DB2 uses the values of host variables, parameter markers, or special registers. REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized. REORP. REORG pending. repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point. repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization. replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal. request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back. requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility. resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition. resource allocation. The part of plan allocation that deals specifically with the database resources. resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups. resource definition online. A CICS feature that you use to define CICS resources online without assembling tables. resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor. resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized. REORP. REORG pending. repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point. repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization. replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal. request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back. requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility. resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition. resource allocation. The part of plan allocation that deals specifically with the database resources. resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups. resource definition online. A CICS feature that you use to define CICS resources online without assembling tables. resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor. resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
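For example, the following query (a minimal sketch; the DEPT table is hypothetical) requests repeatable read for a single statement, so the rows it reads cannot be changed by other programs until the application reaches a commit point:

   SELECT DEPTNO, DEPTNAME
     FROM DEPT
     WITH RR;

The same isolation level can be requested for an entire plan or package with the ISOLATION(RR) bind option.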
resource manager. (1) A function that is responsible for managing a particular resource and that guarantees the consistency of all updates made to recoverable resources within a logical unit of work. The resource that is being managed can be physical (for example, disk or main storage) or logical (for example, a particular type of system service). (2) A participant, in the execution of a two-phase commit, that has recoverable resources that could have been modified. The resource manager has access to a recovery log so that it can commit or roll back the effects of the logical unit of work to the recoverable resources. restart pending (RESTP). A restrictive state of a page set or partition that indicates that restart (backout) work needs to be performed on the object. All access to the page set or partition is denied except for access by the: v RECOVER POSTPONED command v Automatic online backout (which DB2 invokes after restart if the system parameter LBACKOUT=AUTO) RESTP. Restart pending. result set. The set of rows that a stored procedure returns to a client application. result set locator. A 4-byte value that DB2 uses to uniquely identify a query result set that a stored procedure returns. result table. The set of rows that are specified by a SELECT statement. retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure. RID. Record identifier. RID pool. Record identifier pool. right outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of the second join operand. See also join. RLF. Resource limit facility. RLST. Resource limit specification table. RMID. Resource manager identifier. RO. Read-only access. rollback. The process of restoring data that was changed by SQL statements to the state at its last commit point. All locks are freed. Contrast with commit. root page. The index page that is at the highest level (or the beginning point) in an index.
routine. A term that refers to either a user-defined function or a stored procedure. row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table. ROWID. Row identifier. row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes. row lock. A lock on a single row of data.
rowset. A set of rows for which a cursor position is established. rowset cursor. A cursor that is defined so that one or more rows can be returned as a rowset for a single FETCH statement, and the cursor is positioned on the set of rows that is fetched. rowset-positioned access. The ability to retrieve multiple rows from a single FETCH statement. row-positioned access. The ability to retrieve a single row from a single FETCH statement. row trigger. A trigger that is defined with the trigger granularity FOR EACH ROW. RRE. Residual recovery entry (in IMS). RRSAF. Recoverable Resource Manager Services attachment facility. RS. Read stability. RTT. Resource translation table. RURE. Restart URE.
S
savepoint. A named entity that represents the state of data and schemas at a particular point in time within a unit of work. SQL statements exist to set a savepoint, release a savepoint, and restore data and schemas to the state that the savepoint represents. The restoration of data and schemas to a savepoint is usually referred to as rolling back to a savepoint. SBCS. Single-byte character set. SCA. Shared communications area.
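For example, this sketch (the EMP table and the savepoint name are hypothetical) sets a savepoint, makes a change, and then rolls back only to the savepoint rather than to the last commit point:

   SAVEPOINT BEFORE_RAISE ON ROLLBACK RETAIN CURSORS;
   UPDATE EMP SET SALARY = SALARY * 1.10 WHERE EMPNO = '000010';
   ROLLBACK TO SAVEPOINT BEFORE_RAISE;   -- undoes only the UPDATE
   RELEASE SAVEPOINT BEFORE_RAISE;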
scalar function. An SQL operation that produces a single value from another value and is expressed as a function name, followed by a list of arguments that are enclosed in parentheses. Contrast with aggregate function.
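For example, assuming a hypothetical EMP table, SUBSTR in the first query is a scalar function (one result value per input row), while AVG in the second is an aggregate function (one result value for the whole set of rows):

   SELECT LASTNAME, SUBSTR(FIRSTNME, 1, 1) AS INITIAL
     FROM EMP;
   SELECT AVG(SALARY)
     FROM EMP;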
scale. In SQL, the number of digits to the right of the decimal point (called the precision in the C language). The DB2 library uses the SQL definition.
schema. (1) The organization or structure of a database. (2) A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C: CREATE DISTINCT TYPE C.T ... scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation. scrollable cursor. A cursor that can be moved in both a forward and a backward direction. SDWA. System diagnostic work area. search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates. secondary authorization ID. An authorization ID that has been associated with a primary authorization ID by an authorization exit routine. secondary group buffer pool. For a duplexed group buffer pool, the structure that is used to back up changed pages that are written to the primary group buffer pool. No page registration or cross-invalidation occurs using the secondary group buffer pool. The z/OS equivalent is new structure.
secondary index. A nonpartitioning index on a partitioned table. section. The segment of a plan or package that contains the executable structures for a single SQL statement. For most SQL statements, one section in the plan exists for each SQL statement in the source program. However, for cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE statements reference the same section because they each refer to the SELECT statement that is named in the DECLARE CURSOR statement. SQL statements such as COMMIT, ROLLBACK, and some SET statements do not use a section. segment. A group of pages that holds rows of a single table. See also segmented table space. segmented table space. A table space that is divided into equal-sized groups of pages called segments. Segments are assigned to tables so that rows of different tables are never stored in the same segment.
self-referencing constraint. A referential constraint that defines a relationship in which a table is a dependent of itself. self-referencing table. A table with a self-referencing constraint.
sensitive cursor. A cursor that is sensitive to changes that are made to the database after the result table has been materialized. sequence. A user-defined object that generates a sequence of numeric values according to user specifications.
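For example, this embedded SQL sketch (the table, column, and host-variable names are hypothetical) declares a sensitive scrollable cursor and fetches in both directions:

   EXEC SQL DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
     SELECT EMPNO, SALARY FROM EMP;
   EXEC SQL OPEN C1;
   EXEC SQL FETCH LAST  FROM C1 INTO :EMPNO, :SALARY;
   EXEC SQL FETCH PRIOR FROM C1 INTO :EMPNO, :SALARY;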
sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets. sequential prefetch. A mechanism that triggers consecutive asynchronous I/O operations. Pages are fetched before they are required, and several pages are read with a single I/O operation. serial cursor. A cursor that can be moved only in a forward direction. serialized profile. A Java object that contains SQL statements and descriptions of host variables. The SQLJ translator produces a serialized profile for each connection context. server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications. server-side programming. A method for adding DB2 data into dynamic Web pages. service class. An eight-character identifier that is used by the z/OS Workload Manager to associate user performance goals with a particular DDF thread or stored procedure. A service class is also used to classify work on parallelism assistants. service request block. A unit of work that is scheduled to execute in another address space. session. A link between two nodes in a VTAM network. session protocols. The available set of SNA communication requests and responses. shared communications area (SCA). A coupling facility list structure that a DB2 data sharing group uses for inter-DB2 communication. share lock. A lock that prevents concurrently executing application processes from changing data, but not from reading data. Contrast with exclusive lock.
shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character. shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character. sign-on. A request that is made on behalf of an individual CICS or IMS application process by an attachment facility to enable DB2 to verify that it is authorized to use DB2 resources. simple page set. A nonpartitioned page set. A simple page set initially consists of a single data set (page set piece). If and when that data set is extended to 2 GB, another data set is created, and so on, up to a total of 32 data sets. DB2 considers the data sets to be a single contiguous linear address space containing a maximum of 64 GB. Data is stored in the next available location within this address space without regard to any partitioning scheme. simple table space. A table space that is neither partitioned nor segmented. single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set. single-precision floating point number. A 32-bit approximate representation of a real number. size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL term. SMF. System Management Facilities. SMP/E. System Modification Program/Extended. SMS. Storage Management Subsystem. SNA. Systems Network Architecture. SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA). socket. A callable TCP/IP programming interface that TCP/IP network applications use to communicate with remote TCP/IP partners. sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.
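For example, the following sketch (the distinct type and function are hypothetical) defines a sourced function that lets the logic of the built-in MAX function operate on a distinct type:

   CREATE DISTINCT TYPE CAN_DOLLAR AS DECIMAL(9,2);
   CREATE FUNCTION MAX(CAN_DOLLAR) RETURNS CAN_DOLLAR
     SOURCE SYSIBM.MAX(DECIMAL(9,2));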
source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.
source table. A table that can be a base table, a view, a table expression, or a user-defined table function. source type. An existing type that DB2 uses to internally represent a distinct type. space. A sequence of one or more blank characters. special register. A storage area that DB2 defines for an application process to use for storing information that can be referenced in SQL statements. Examples of special registers are USER and CURRENT DATE. specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used. SPUFI. SQL Processor Using File Input. SQL. Structured Query Language. SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations. SQLCA. SQL communication area. SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements. SQL connection. An association between an application process and a local or remote application server or database server. SQLDA. SQL descriptor area. SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table. SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character. SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter. SQL ID. SQL authorization ID. SQLJ. Structured Query Language (SQL) that is embedded in the Java programming language.
SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option. SQL procedure. A user-written program that can be invoked with the SQL CALL statement. Contrast with external procedure. SQL processing conversation. Any conversation that requires access of DB2 data, either through an application or by dynamic query requests. SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program. SQL return code. Either SQLCODE or SQLSTATE. SQL routine. A user-defined function or stored procedure that is based on code that is written in SQL. SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option. SQL string delimiter. A symbol that is used to enclose an SQL string constant. The SQL string delimiter is the apostrophe ('), except in COBOL applications, where the user assigns the symbol, which is either an apostrophe or a double quotation mark ("). SRB. Service request block. SSI. Subsystem interface (in z/OS). SSM. Subsystem member (in IMS). stand-alone. An attribute of a program that means that it is capable of executing separately from DB2, without using DB2 services. star join. A method of joining a dimension column of a fact table to the key column of the corresponding dimension table. See also join, dimension, and star schema. star schema. The combination of a fact table (which contains most of the data) and a number of dimension tables. See also star join, dimension, and dimension table. statement handle. In DB2 ODBC, the data object that contains information about an SQL statement that is managed by DB2 ODBC. This includes information such as dynamic arguments, bindings for dynamic arguments and columns, cursor information, result values, and status information. Each statement handle is associated with the connection handle.
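For example, relating to the SQL path entry above, the following statement (the schema name PAYROLL is hypothetical) sets the path for dynamic SQL so that unqualified references to user-defined functions, distinct types, and stored procedures are resolved first in SYSIBM, then SYSPROC, then PAYROLL:

   SET CURRENT PATH = SYSIBM, SYSPROC, PAYROLL;

For static SQL, the equivalent is the PATH bind option, for example PATH(SYSIBM, SYSPROC, PAYROLL).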
statement string. For a dynamic SQL statement, the character string form of the statement. statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.
static cursor. A named control structure that does not change the size of the result table or the order of its rows after an application opens the cursor. Contrast with dynamic cursor. static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change). storage group. A named set of disks on which DB2 data can be stored. stored procedure. A user-written application program that can be invoked through the use of the SQL CALL statement. string. See character string or graphic string. strong typing. A process that guarantees that only user-defined functions and operations that are defined on a distinct type can be applied to that type. For example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison. structure. (1) A name that refers collectively to different types of DB2 objects, such as tables, databases, views, indexes, and table spaces. (2) A construct that uses z/OS to map and manage storage on a coupling facility. See also cache structure, list structure, or lock structure. Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database. structure owner. In relation to group buffer pools, the DB2 member that is responsible for the following activities: v Coordinating rebuild, checkpoint, and damage assessment processing v Monitoring the group buffer pool threshold and notifying castout owners when the threshold has been reached subcomponent. A group of closely related DB2 modules that work together to provide a general function. subject table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.
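For example, in the following sketch of the strong-typing rules described above (the distinct types, table, column names, and exchange rate are hypothetical), a direct comparison of the two currency types is invalid, so one value is cast through the base type first:

   CREATE DISTINCT TYPE US_DOLLAR  AS DECIMAL(9,2);
   CREATE DISTINCT TYPE CAN_DOLLAR AS DECIMAL(9,2);
   -- Invalid: WHERE US_AMT > CAN_AMT (different distinct types)
   SELECT *
     FROM SALES
     WHERE US_AMT > US_DOLLAR(DECIMAL(CAN_AMT) * 0.75);
   -- DECIMAL() and US_DOLLAR() are the cast functions that DB2
   -- generates for the distinct types; 0.75 is a hypothetical rate.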
subpage. The unit into which a physical index page can be divided. subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement. subselect. That form of a query that does not include an ORDER BY clause, an UPDATE clause, or UNION operators. substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation. subsystem. A distinct instance of a relational database management system (RDBMS). surrogate pair. A coded representation for a single character that consists of a sequence of two 16-bit code units, in which the first value of the pair is a high-surrogate code unit in the range U+D800 through U+DBFF, and the second value is a low-surrogate code unit in the range U+DC00 through U+DFFF. Surrogate pairs provide an extension mechanism for encoding 917 476 characters without requiring the use of 32-bit characters. SVC dump. A dump that is issued when a z/OS or a DB2 functional recovery routine detects an error. sync point. See commit point. syncpoint tree. The tree of recovery managers and resource managers that are involved in a logical unit of work, starting with the recovery manager, that make the final commit decision. synonym. In SQL, an alternative name for a table or view. Synonyms can be used to refer only to objects at the subsystem in which the synonym is defined. syntactic character set. A set of 81 graphic characters that are registered in the IBM registry as character set 00640. This set was originally recommended to the programming language community to be used for syntactic purposes toward maximizing portability and interchangeability across systems and country boundaries. It is contained in most of the primary registered character sets, with a few exceptions. See also invariant character set. Sysplex. See Parallel Sysplex. Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism. system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.
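For example, assuming a hypothetical EMP table, the inner SELECT below is a subquery that supplies the comparison value for the outer WHERE clause:

   SELECT EMPNO, LASTNAME
     FROM EMP
     WHERE SALARY > (SELECT AVG(SALARY) FROM EMP);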
system agent. A work request that DB2 creates internally such as prefetch processing, deferred writes, and service tasks. system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin. system diagnostic work area (SDWA). The data that is recorded in a SYS1.LOGREC entry that describes a program or hardware error. system-directed connection. A connection that a relational DBMS manages by processing SQL statements with three-part names. System Modification Program/Extended (SMP/E). A z/OS tool for making software changes in programming systems (such as DB2) and for controlling those changes. Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information through and controlling the configuration and operation of networks. SYS1.DUMPxx data set. A data set that contains a system dump (in z/OS). SYS1.LOGREC. A service aid that contains important information about program and hardware errors (in z/OS).
T
table. A named data object consisting of a specific number of columns and some number of unordered rows. See also base table or temporary table.
table-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are defined in the CREATE TABLE statement. Partition limits are saved in the LIMITKEY_INTERNAL column of the SYSIBM.SYSTABLEPART catalog table. table function. A function that receives a set of arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect. table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, in the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table. table space. A page set that is used to store the records in one or more tables.
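For example, relating to the table-controlled partitioning entry above, the following sketch (the database, table space, table, and boundary values are all hypothetical) defines the partition boundaries in the CREATE TABLE statement rather than in a partitioning index:

   CREATE TABLE TRANS
     (ACCTID INTEGER,
      POSTED DATE)
     IN DBTR.TSTR
     PARTITION BY (POSTED)
       (PARTITION 1 ENDING AT ('2004-03-31'),
        PARTITION 2 ENDING AT ('2004-06-30'),
        PARTITION 3 ENDING AT ('2004-09-30'),
        PARTITION 4 ENDING AT ('2004-12-31'));

The table space DBTR.TSTR is assumed to have been created as a partitioned table space with NUMPARTS 4.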
table space set. A set of table spaces and partitions that should be recovered together for one of these reasons: v Each of them contains a table that is a parent or descendent of a table in one of the others. v The set contains a base table and associated auxiliary tables. A table space set can contain both types of relationships. task control block (TCB). A z/OS control block that is used to communicate information about tasks within an address space that are connected to DB2. See also address space connection. TB. Terabyte (1 099 511 627 776 bytes). TCB. Task control block (in z/OS). TCP/IP. A network communication protocol that computer systems use to exchange information across telecommunication links. TCP/IP port. A 2-byte value that identifies an end user or a TCP/IP network application within a TCP/IP host. template. A DB2 utilities output data set descriptor that is used for dynamic allocation. A template is defined by the TEMPLATE utility control statement. temporary table. A table that holds temporary data. Temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two types of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table. Terminal Monitor Program (TMP). A program that provides an interface between terminal users and command processors and has access to many system services (in z/OS). thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread. threadsafe. A characteristic of code that allows multithreading both by providing private storage areas for each thread, and by properly serializing shared (global) storage areas. three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period. time. A three-part value that designates a time of day in hours, minutes, and seconds.
time duration. A decimal integer that represents a number of hours, minutes, and seconds. timeout. Abnormal termination of either the DB2 subsystem or of an application because of the unavailability of resources. Installation specifications are set to determine both the amount of time DB2 is to wait for IRLM services after starting, and the amount of time IRLM is to wait if a resource that an application requests is unavailable. If either of these time specifications is exceeded, a timeout is declared. Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals. timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds. TMP. Terminal Monitor Program. to-do. A state of a unit of recovery that indicates that the unit of recovery's changes to recoverable DB2 resources are indoubt and must either be applied to the disk media or backed out, as determined by the commit coordinator. trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data. transaction lock. A lock that is used to control concurrent execution of SQL statements. transaction program name. In SNA LU 6.2 conversations, the name of the program at the remote logical unit that is to be the other half of the conversation.
transient XML data type. A data type for XML values that exists only during query processing.
transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state. transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.
tree structure. A data structure that represents entities in nodes, with at most one parent node for each node, and with only one root node.
trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table. trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements. trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event. trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. A trigger body is also called triggered SQL statements. trigger cascading. The process that occurs when the triggered action of a trigger causes the activation of another trigger. triggered action. The SQL logic that is performed when a trigger is activated. The triggered action consists of an optional triggered action condition and a set of triggered SQL statements that are executed only if the condition evaluates to true. triggered action condition. An optional part of the triggered action. This Boolean condition appears as a WHEN clause and specifies a condition that DB2 evaluates to determine if the triggered SQL statements should be executed. triggered SQL statements. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. Triggered SQL statements are also called the trigger body. trigger granularity. A characteristic of a trigger, which determines whether the trigger is activated: v Only once for the triggering SQL statement v Once for each row that the SQL statement modifies triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event is comprised of a triggering operation (INSERT, UPDATE, or DELETE) and a subject table on which the operation is performed. triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table. trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated. TSO. Time-Sharing Option. TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications
that are not written for the CICS or IMS environments can run under the TSO attachment facility. typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form: CAST(? AS data-type) type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 8, type 1 indexes are no longer supported. type 2 indexes. Indexes that are created on a release of DB2 after Version 7 or that are specified as type 2 indexes in Version 4 or later.
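For example, the following sketch (the EMP and AUDITTAB tables are hypothetical) shows the trigger concepts defined above working together: the triggering event is an UPDATE of EMP.SALARY on the subject table EMP, the trigger granularity is FOR EACH ROW, the OLD and NEW transition variables appear in the WHEN clause (the triggered action condition), and the INSERT statement is the trigger body:

   CREATE TRIGGER BIGRAISE
     AFTER UPDATE OF SALARY ON EMP
     REFERENCING OLD AS O NEW AS N
     FOR EACH ROW MODE DB2SQL
     WHEN (N.SALARY > 1.5 * O.SALARY)
     INSERT INTO AUDITTAB
       VALUES (N.EMPNO, O.SALARY, N.SALARY, CURRENT TIMESTAMP);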
U
UCS-2. Universal Character Set, coded in 2 octets, which means that characters are represented in 16 bits per character. UDF. User-defined function. UDT. User-defined data type. In DB2 UDB for z/OS, the term distinct type is used instead of user-defined data type. See distinct type. uncommitted read (UR). The isolation level that allows an application to read uncommitted data. underlying view. The view on which another view is directly or indirectly defined. undo. A state of a unit of recovery that indicates that the changes that the unit of recovery made to recoverable DB2 resources must be backed out. Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters that are contained in the many scripts that are used throughout the world. uniform resource locator (URL). A Web address, which offers a way of naming and locating specific items on the Web. union. An SQL operation that combines the results of two SELECT statements. Unions are often used to merge lists of values that are obtained from several tables. unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same. unique index. An index that ensures that no identical key values are stored in a column or a set of columns in a table.
unit of recovery. A recoverable sequence of operations within a single resource manager, such as an instance of DB2. Contrast with unit of work. unit of recovery identifier (URID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery. unit of work. A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multisite update operation, a single unit of work can include several units of recovery. Contrast with unit of recovery. Universal Unique Identifier (UUID). An identifier that is immutable and unique across time and space (in z/OS). unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2. untyped parameter marker. A parameter marker that is specified without its target data type. It has the form of a single question mark (?). updatability. The ability of a cursor to perform positioned updates and deletes. The updatability of a cursor can be influenced by the SELECT statement and the cursor sensitivity option that is specified on the DECLARE CURSOR statement. update hole. The location on which a cursor is positioned when a row in a result table is fetched again and the new values no longer satisfy the search condition. DB2 marks a row in the result table as an update hole when an update to the corresponding row in the database causes that row to no longer qualify for the result table. update trigger. A trigger that is defined with the triggering SQL operation UPDATE. upstream. The node in the syncpoint tree that is responsible, in addition to other recovery or resource managers, for coordinating the execution of a two-phase commit. UR. Uncommitted read. URE. Unit of recovery element. URID. Unit of recovery identifier. URL. Uniform resource locator. user-defined data type (UDT). See distinct type. user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION
statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function. user view. In logical data modeling, a model or representation of critical information that the business requires. UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 UDB for z/OS supports UTF-8 in mixed data fields. UTF-16. Unicode Transformation Format, 16-bit encoding form, which is designed to provide code values for over a million characters and a superset of UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 UDB for z/OS supports UTF-16 in graphic data fields. UUID. Universal Unique Identifier.
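For example, relating to the updatability entry above, this embedded SQL sketch (the table, column, and host-variable names are hypothetical) declares an updatable cursor and performs a positioned update on the fetched row:

   EXEC SQL DECLARE C2 CURSOR FOR
     SELECT SALARY FROM EMP WHERE WORKDEPT = 'D11'
     FOR UPDATE OF SALARY;
   EXEC SQL OPEN C2;
   EXEC SQL FETCH C2 INTO :SALARY;
   EXEC SQL UPDATE EMP SET SALARY = :SALARY * 1.05
     WHERE CURRENT OF C2;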
V
value. The smallest unit of data that is manipulated in SQL. variable. A data element that specifies a value that can be changed. A COBOL elementary data item is an example of a variable. Contrast with constant. variant function. See nondeterministic function. varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string. version. A member of a set of similar programs, DBRMs, packages, or LOBs. A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token). A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version. A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM. A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB. view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.
view check option. An option that specifies whether every row that is inserted or updated through a view must conform to the definition of that view. A view check option can be specified with the WITH CASCADED CHECK OPTION, WITH CHECK OPTION, or WITH LOCAL CHECK OPTION clauses of the CREATE VIEW statement. Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on disk devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number (in z/OS). Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network (in z/OS).
volatile table. A table for which SQL operations choose index access whenever possible.
VSAM. Virtual Storage Access Method.
VTAM. Virtual Telecommunications Access Method (in z/OS).
W
warm start. The normal DB2 restart process, which involves reading and processing log records so that data that is under the control of DB2 is consistent. Contrast with cold start. WLM application environment. A z/OS Workload Manager attribute that is associated with one or more stored procedures. The WLM application environment determines the address space in which a given DB2 stored procedure runs. write to operator (WTO). An optional user-coded service that allows a message to be written to the system console operator informing the operator of errors and unusual system conditions that might need to be corrected (in z/OS). WTO. Write to operator. WTOR. Write to operator (WTO) with reply.
X
XCF. See cross-system coupling facility.
XES. See cross-system extended services.
XML attribute. A name-value pair within a tagged XML element that modifies certain features of the element.
XML element. A logical structure in an XML document that is delimited by a start and an end tag. Anything between the start tag and the end tag is the content of the element.
XML node. The smallest unit of valid, complete structure in a document. For example, a node can represent an element, an attribute, or a text string.
XML publishing functions. Functions that return XML values from SQL values.
X/Open. An independent, worldwide open systems organization that is supported by most of the world's largest information systems suppliers, user organizations, and software companies. X/Open's goal is to increase the portability of applications by combining existing and emerging standards.
XRF. Extended recovery facility.
Z
z/OS. An operating system for the eServer product line that supports 64-bit real and virtual storage.
z/OS Distributed Computing Environment (z/OS DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.
Bibliography
DB2 Universal Database for z/OS Version 8 product information: v DB2 Administration Guide, SC18-7413 v DB2 Application Programming and SQL Guide, SC18-7415 v DB2 Application Programming Guide and Reference for Java, SC18-7414 v DB2 Codes, GC18-9603 v DB2 Command Reference, SC18-7416 v DB2 Common Criteria Guide, SC18-9672 v DB2 Data Sharing: Planning and Administration, SC18-7417 v DB2 Diagnosis Guide and Reference, LY37-3201 v DB2 Diagnostic Quick Reference Card, LY37-3202 v DB2 Image, Audio, and Video Extenders Administration and Programming, SC26-9947 v DB2 Installation Guide, GC18-7418 v DB2 Licensed Program Specifications, GC18-7420 v DB2 Management Clients Package Program Directory, GI10-8567 v DB2 Messages, GC18-9602 v DB2 ODBC Guide and Reference, SC18-7423 v The Official Introduction to DB2 UDB for z/OS v DB2 Program Directory, GI10-8566 v DB2 RACF Access Control Module Guide, SC18-7433 v DB2 Reference for Remote DRDA Requesters and Servers, SC18-7424 v DB2 Reference Summary, SX26-3853 v DB2 Release Planning Guide, SC18-7425 v DB2 SQL Reference, SC18-7426 v DB2 Text Extender Administration and Programming, SC26-9948 v DB2 Utility Guide and Reference, SC18-7427 v DB2 What's New?, GC18-7428 v DB2 XML Extender for z/OS Administration and Programming, SC18-7431 Books and resources about related products: APL2 v APL2 Programming Guide, SH21-1072 v APL2 Programming: Language Reference, SH21-1061
v APL2 Programming: Using Structured Query Language (SQL), SH21-1057 BookManager READ/MVS v BookManager READ/MVS V1R3: Installation Planning & Customization, SC38-2035 C language: IBM C/C++ for z/OS v z/OS C/C++ Programming Guide, SC09-4765 v z/OS C/C++ Run-Time Library Reference, SA22-7821 Character Data Representation Architecture v Character Data Representation Architecture Overview, GC09-2207 v Character Data Representation Architecture Reference and Registry, SC09-2190 CICS Transaction Server for z/OS The publication order numbers below are for Version 2 Release 2 and Version 2 Release 3 (with the release 2 number listed first). v CICS Transaction Server for z/OS Information Center, SK3T-6903 or SK3T-6957. v CICS Transaction Server for z/OS Application Programming Guide, SC34-5993 or SC34-6231 v CICS Transaction Server for z/OS Application Programming Reference, SC34-5994 or SC34-6232 v CICS Transaction Server for z/OS CICS-RACF Security Guide, SC34-6011 or SC34-6249 v CICS Transaction Server for z/OS CICS Supplied Transactions, SC34-5992 or SC34-6230 v CICS Transaction Server for z/OS Customization Guide, SC34-5989 or SC34-6227 v CICS Transaction Server for z/OS Data Areas, LY33-6100 or LY33-6103 v CICS Transaction Server for z/OS DB2 Guide, SC34-6014 or SC34-6252 v CICS Transaction Server for z/OS External Interfaces Guide, SC34-6006 or SC34-6244 v CICS Transaction Server for z/OS Installation Guide, GC34-5985 or GC34-6224 v CICS Transaction Server for z/OS Intercommunication Guide, SC34-6005 or SC34-6243 v CICS Transaction Server for z/OS Messages and Codes, GC34-6003 or GC34-6241 v CICS Transaction Server for z/OS Operations and Utilities Guide, SC34-5991 or SC34-6229
v CICS Transaction Server for z/OS Performance Guide, SC34-6009 or SC34-6247 v CICS Transaction Server for z/OS Problem Determination Guide, SC34-6002 or SC34-6239 v CICS Transaction Server for z/OS Release Guide, GC34-5983 or GC34-6218 v CICS Transaction Server for z/OS Resource Definition Guide, SC34-5990 or SC34-6228 v CICS Transaction Server for z/OS System Definition Guide, SC34-5988 or SC34-6226 v CICS Transaction Server for z/OS System Programming Reference, SC34-5595 or SC34-6233 CICS Transaction Server for OS/390 v CICS Transaction Server for OS/390 Application Programming Guide, SC33-1687 v CICS Transaction Server for OS/390 DB2 Guide, SC33-1939 v CICS Transaction Server for OS/390 External Interfaces Guide, SC33-1944 v CICS Transaction Server for OS/390 Resource Definition Guide, SC33-1684 COBOL: v IBM COBOL Language Reference, SC27-1408 v Enterprise COBOL for z/OS Programming Guide, SC27-1412 Database Design v DB2 for z/OS and OS/390 Development for Performance Volume I by Gabrielle Wiorkowski, Gabrielle & Associates, ISBN 0-96684-605-2 v DB2 for z/OS and OS/390 Development for Performance Volume II by Gabrielle Wiorkowski, Gabrielle & Associates, ISBN 0-96684-606-0 v Handbook of Relational Database Design by C. Fleming and B. Von Halle, Addison Wesley, ISBN 0-20111-434-8 DB2 Administration Tool v DB2 Administration Tool for z/OS User's Guide and Reference, available on the Web at www.ibm.com/software/data/db2imstools/library.html DB2 Buffer Pool Analyzer for z/OS v DB2 Buffer Pool Tool for z/OS User's Guide and Reference, available on the Web at www.ibm.com/software/data/db2imstools/library.html DB2 Connect v IBM DB2 Connect Quick Beginnings for DB2 Connect Enterprise Edition, GC09-4833
v IBM DB2 Connect Quick Beginnings for DB2 Connect Personal Edition, GC09-4834 v IBM DB2 Connect User's Guide, SC09-4835 DB2 DataPropagator v DB2 Universal Database Replication Guide and Reference, SC27-1121 DB2 Performance Expert for z/OS, Version 1 The following books are part of the DB2 Performance Expert library. Some of these books include information about the following tools: IBM DB2 Performance Expert for z/OS; IBM DB2 Performance Monitor for z/OS; and DB2 Buffer Pool Analyzer for z/OS. v OMEGAMON Buffer Pool Analyzer User's Guide, SC18-7972 v OMEGAMON Configuration and Customization, SC18-7973 v OMEGAMON Messages, SC18-7974 v OMEGAMON Monitoring Performance from ISPF, SC18-7975 v OMEGAMON Monitoring Performance from Performance Expert Client, SC18-7976 v OMEGAMON Program Directory, GI10-8549 v OMEGAMON Report Command Reference, SC18-7977 v OMEGAMON Report Reference, SC18-7978 v Using IBM Tivoli OMEGAMON XE on z/OS, SC18-7979 DB2 Query Management Facility (QMF) Version 8.1 v DB2 Query Management Facility: DB2 QMF High Performance Option User's Guide for TSO/CICS, SC18-7450 v DB2 Query Management Facility: DB2 QMF Messages and Codes, GC18-7447 v DB2 Query Management Facility: DB2 QMF Reference, SC18-7446 v DB2 Query Management Facility: Developing DB2 QMF Applications, SC18-7651 v DB2 Query Management Facility: Getting Started with DB2 QMF for Windows and DB2 QMF for WebSphere, SC18-7449 v DB2 Query Management Facility: Getting Started with DB2 QMF Query Miner, GC18-7451 v DB2 Query Management Facility: Installing and Managing DB2 QMF for TSO/CICS, GC18-7444 v DB2 Query Management Facility: Installing and Managing DB2 QMF for Windows and DB2 QMF for WebSphere, GC18-7448
v DB2 Query Management Facility: Introducing DB2 QMF, GC18-7443 v DB2 Query Management Facility: Using DB2 QMF, SC18-7445 v DB2 Query Management Facility: DB2 QMF Visionary Developer's Guide, SC18-9093 v DB2 Query Management Facility: DB2 QMF Visionary Getting Started Guide, GC18-9092 DB2 Redbooks For access to all IBM Redbooks about DB2, see the IBM Redbooks Web page at www.ibm.com/redbooks DB2 Server for VSE & VM v DB2 Server for VM: DBS Utility, SC09-2983 DB2 Universal Database Cross-Platform information v IBM DB2 Universal Database SQL Reference for Cross-Platform Development, available at www.ibm.com/software/data/developer/cpsqlref/ DB2 Universal Database for iSeries The following books are available at www.ibm.com/iseries/infocenter v DB2 Universal Database for iSeries Performance and Query Optimization v DB2 Universal Database for iSeries Database Programming v DB2 Universal Database for iSeries SQL Programming Concepts v DB2 Universal Database for iSeries SQL Programming with Host Languages v DB2 Universal Database for iSeries SQL Reference v DB2 Universal Database for iSeries Distributed Data Management v DB2 Universal Database for iSeries Distributed Database Programming DB2 Universal Database for Linux, UNIX, and Windows: v DB2 Universal Database Administration Guide: Planning, SC09-4822 v DB2 Universal Database Administration Guide: Implementation, SC09-4820 v DB2 Universal Database Administration Guide: Performance, SC09-4821 v DB2 Universal Database Administrative API Reference, SC09-4824 v DB2 Universal Database Application Development Guide: Building and Running Applications, SC09-4825
v DB2 Universal Database Call Level Interface Guide and Reference, Volumes 1 and 2, SC09-4849 and SC09-4850 v DB2 Universal Database Command Reference, SC09-4828 v DB2 Universal Database SQL Reference Volume 1, SC09-4844 v DB2 Universal Database SQL Reference Volume 2, SC09-4845 Device Support Facilities v Device Support Facilities User's Guide and Reference, GC35-0033 DFSMS These books provide information about a variety of components of DFSMS, including z/OS DFSMS, z/OS DFSMSdfp, z/OS DFSMSdss, z/OS DFSMShsm, and z/OS DFP. v z/OS DFSMS Access Method Services for Catalogs, SC26-7394 v z/OS DFSMSdss Storage Administration Guide, SC35-0423 v z/OS DFSMSdss Storage Administration Reference, SC35-0424 v z/OS DFSMShsm Managing Your Own Data, SC35-0420 v z/OS DFSMSdfp: Using DFSMSdfp in the z/OS Environment, SC26-7473 v z/OS DFSMSdfp Diagnosis Reference, GY27-7618 v z/OS DFSMS: Implementing System-Managed Storage, SC26-7407 v z/OS DFSMS: Macro Instructions for Data Sets, SC26-7408 v z/OS DFSMS: Managing Catalogs, SC26-7409 v z/OS MVS: Program Management User's Guide and Reference, SA22-7643 v z/OS MVS Program Management: Advanced Facilities, SA22-7644 v z/OS DFSMSdfp Storage Administration Reference, SC26-7402 v z/OS DFSMS: Using Data Sets, SC26-7410 v DFSMSdfp Advanced Services, SC26-7400 v DFSMS/MVS: Utilities, SC26-7414 DFSORT v DFSORT Application Programming: Guide, SC33-4035 v DFSORT Installation and Customization, SC33-4034 Distributed Relational Database Architecture
v Open Group Technical Standard; the Open Group presently makes the following DRDA books available through its Web site at www.opengroup.org Open Group Technical Standard, DRDA Version 3 Vol. 1: Distributed Relational Database Architecture Open Group Technical Standard, DRDA Version 3 Vol. 2: Formatted Data Object Content Architecture Open Group Technical Standard, DRDA Version 3 Vol. 3: Distributed Data Management Architecture Domain Name System v DNS and BIND, Third Edition, Paul Albitz and Cricket Liu, O'Reilly, ISBN 0-59600-158-4 Education v Information about IBM educational offerings is available on the Web at http://www.ibm.com/software/sw-training/ v A collection of glossaries of IBM terms is available on the IBM Terminology Web site at www.ibm.com/ibm/terminology/index.html eServer zSeries v IBM eServer zSeries Processor Resource/System Manager Planning Guide, SB10-7033 Fortran: VS Fortran v VS Fortran Version 2: Language and Library Reference, SC26-4221 v VS Fortran Version 2: Programming Guide for CMS and MVS, SC26-4222 High Level Assembler v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940 v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941 ICSF v z/OS ICSF Overview, SA22-7519 v Integrated Cryptographic Service Facility Administrator's Guide, SA22-7521 IMS Version 8 IMS product information is available on the IMS Library Web page, which you can find at www.ibm.com/ims v IMS Administration Guide: System, SC27-1284 v IMS Administration Guide: Transaction Manager, SC27-1285
v IMS Application Programming: Database Manager, SC27-1286 v IMS Application Programming: Design Guide, SC27-1287 v IMS Application Programming: Transaction Manager, SC27-1289 v IMS Command Reference, SC27-1291 v IMS Customization Guide, SC27-1294 v IMS Install Volume 1: Installation Verification, GC27-1297 v IMS Install Volume 2: System Definition and Tailoring, GC27-1298 v IMS Messages and Codes Volumes 1 and 2, GC27-1301 and GC27-1302 v IMS Open Transaction Manager Access Guide and Reference, SC18-7829 v IMS Utilities Reference: System, SC27-1309 General information about IMS Batch Terminal Simulator for z/OS is available on the Web at www.ibm.com/software/data/db2imstools/library.html IMS DataPropagator v IMS DataPropagator for z/OS Administrator's Guide for Log, SC27-1216 v IMS DataPropagator: An Introduction, GC27-1211 v IMS DataPropagator for z/OS Reference, SC27-1210 ISPF v z/OS ISPF Dialog Developer's Guide, SC23-4821 v z/OS ISPF Messages and Codes, SC34-4815 v z/OS ISPF Planning and Customizing, GC34-4814 v z/OS ISPF User's Guide Volumes 1 and 2, SC34-4822 and SC34-4823 Language Environment v Debug Tool User's Guide and Reference, SC18-7171 v Debug Tool for z/OS and OS/390 Reference and Messages, SC18-7172 v z/OS Language Environment Concepts Guide, SA22-7567 v z/OS Language Environment Customization, SA22-7564 v z/OS Language Environment Debugging Guide, GA22-7560 v z/OS Language Environment Programming Guide, SA22-7561 v z/OS Language Environment Programming Reference, SA22-7562 MQSeries v MQSeries Application Messaging Interface, SC34-5604
v MQSeries for OS/390 Concepts and Planning Guide, GC34-5650 v MQSeries for OS/390 System Setup Guide, SC34-5651 National Language Support v National Language Design Guide Volume 1, SE09-8001 v IBM National Language Support Reference Manual Volume 2, SE09-8002 NetView v Tivoli NetView for z/OS Installation: Getting Started, SC31-8872 v Tivoli NetView for z/OS User's Guide, GC31-8849 Microsoft ODBC Information about Microsoft ODBC is available at http://msdn.microsoft.com/library/ Parallel Sysplex Library v System/390 9672 Parallel Transaction Server, 9672 Parallel Enterprise Server, 9674 Coupling Facility System Overview For R1/R2/R3 Based Models, SB10-7033 v z/OS Parallel Sysplex Application Migration, SA22-7662 v z/OS Parallel Sysplex Overview: An Introduction to Data Sharing and Parallelism, SA22-7661 v z/OS Parallel Sysplex Test Report, SA22-7663 The Parallel Sysplex Configuration Assistant is available at www.ibm.com/s390/pso/psotool PL/I: Enterprise PL/I for z/OS v IBM Enterprise PL/I for z/OS Language Reference, SC27-1460 v IBM Enterprise PL/I for z/OS Programming Guide, SC27-1457 PL/I: PL/I for MVS & VM v PL/I for MVS & VM Programming Guide, SC26-3113 SMP/E v SMP/E for z/OS and OS/390 Reference, SA22-7772 v SMP/E for z/OS and OS/390 User's Guide, SA22-7773 Storage Management v z/OS DFSMS: Implementing System-Managed Storage, SC26-7407 v MVS/ESA Storage Management Library: Managing Data, SC26-7397
v MVS/ESA Storage Management Library: Managing Storage Groups, SC35-0421 v MVS Storage Management Library: Storage Management Subsystem Migration Planning Guide, GC26-7398 System Network Architecture (SNA) v SNA Formats, GA27-3136 v SNA LU 6.2 Peer Protocols Reference, SC31-6808 v SNA Transaction Programmer's Reference Manual for LU Type 6.2, GC30-3084 v SNA/Management Services Alert Implementation Guide, GC31-6809 TCP/IP v IBM TCP/IP for MVS: Customization & Administration Guide, SC31-7134 v IBM TCP/IP for MVS: Diagnosis Guide, LY43-0105 v IBM TCP/IP for MVS: Messages and Codes, SC31-7132 v IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189 TotalStorage Enterprise Storage Server v RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5680 v Enterprise Storage Server Introduction and Planning, GC26-7444 v IBM RAMAC Virtual Array, SG24-6424 Unicode v z/OS Support for Unicode: Using Conversion Services, SA22-7649 Information about Unicode, the Unicode consortium, the Unicode standard, and standards conformance requirements is available at www.unicode.org VTAM v Planning for NetView, NCP, and VTAM, SC31-8063 v VTAM for MVS/ESA Diagnosis, LY43-0078 v VTAM for MVS/ESA Messages and Codes, GC31-8369 v VTAM for MVS/ESA Network Implementation Guide, SC31-8370 v VTAM for MVS/ESA Operation, SC31-8372 v z/OS Communications Server SNA Programming, SC31-8829 v z/OS Communications Server SNA Programmer's LU 6.2 Reference, SC31-8810 v VTAM for MVS/ESA Resource Definition Reference, SC31-8377
WebSphere family v WebSphere MQ Integrator Broker: Administration Guide, SC34-6171 v WebSphere MQ Integrator Broker for z/OS: Customization and Administration Guide, SC34-6175 v WebSphere MQ Integrator Broker: Introduction and Planning, GC34-5599 v WebSphere MQ Integrator Broker: Using the Control Center, SC34-6168 z/Architecture v z/Architecture Principles of Operation, SA22-7832 z/OS v z/OS C/C++ Programming Guide, SC09-4765 v z/OS C/C++ Run-Time Library Reference, SA22-7821 v z/OS C/C++ User's Guide, SC09-4767 v z/OS Communications Server: IP Configuration Guide, SC31-8875 v z/OS Communications Server: IP and SNA Codes, SC31-8791 v z/OS DCE Administration Guide, SC24-5904 v z/OS DCE Introduction, GC24-5911 v z/OS DCE Messages and Codes, SC24-5912 v z/OS Information Roadmap, SA22-7500 v z/OS Introduction and Release Guide, GA22-7502 v z/OS JES2 Initialization and Tuning Guide, SA22-7532 v z/OS JES3 Initialization and Tuning Guide, SA22-7549 v z/OS Language Environment Concepts Guide, SA22-7567 v z/OS Language Environment Customization, SA22-7564 v z/OS Language Environment Debugging Guide, GA22-7560 v z/OS Language Environment Programming Guide, SA22-7561 v z/OS Language Environment Programming Reference, SA22-7562 v z/OS Managed System Infrastructure for Setup User's Guide, SC33-7985 v z/OS MVS Diagnosis: Procedures, GA22-7587 v z/OS MVS Diagnosis: Reference, GA22-7588 v z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589 v z/OS MVS Initialization and Tuning Guide, SA22-7591 v z/OS MVS Initialization and Tuning Reference, SA22-7592 v z/OS MVS Installation Exits, SA22-7593 v z/OS MVS JCL Reference, SA22-7597 v z/OS MVS JCL User's Guide, SA22-7598
v z/OS MVS Planning: Global Resource Serialization, SA22-7600
v z/OS MVS Planning: Operations, SA22-7601
v z/OS MVS Planning: Workload Management, SA22-7602
v z/OS MVS Programming: Assembler Services Guide, SA22-7605
v z/OS MVS Programming: Assembler Services Reference, Volumes 1 and 2, SA22-7606 and SA22-7607
v z/OS MVS Programming: Authorized Assembler Services Guide, SA22-7608
v z/OS MVS Programming: Authorized Assembler Services Reference Volumes 1-4, SA22-7609, SA22-7610, SA22-7611, and SA22-7612
v z/OS MVS Programming: Callable Services for High-Level Languages, SA22-7613
v z/OS MVS Programming: Extended Addressability Guide, SA22-7614
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
v z/OS MVS Programming: Workload Management Services, SA22-7619
v z/OS MVS Recovery and Reconfiguration Guide, SA22-7623
v z/OS MVS Routing and Descriptor Codes, SA22-7624
v z/OS MVS Setting Up a Sysplex, SA22-7625
v z/OS MVS System Codes, SA22-7626
v z/OS MVS System Commands, SA22-7627
v z/OS MVS System Messages Volumes 1-10, SA22-7631, SA22-7632, SA22-7633, SA22-7634, SA22-7635, SA22-7636, SA22-7637, SA22-7638, SA22-7639, and SA22-7640
v z/OS MVS Using the Subsystem Interface, SA22-7642
v z/OS Planning for Multilevel Security and the Common Criteria, SA22-7509
v z/OS RMF User's Guide, SC33-7990
v z/OS Security Server Network Authentication Server Administration, SC24-5926
v z/OS Security Server RACF Auditor's Guide, SA22-7684
v z/OS Security Server RACF Command Language Reference, SA22-7687
v z/OS Security Server RACF Macros and Interfaces, SA22-7682
v z/OS Security Server RACF Security Administrator's Guide, SA22-7683
v z/OS Security Server RACF System Programmer's Guide, SA22-7681
v z/OS Security Server RACROUTE Macro Reference, SA22-7692
v z/OS Support for Unicode: Using Conversion Services, SA22-7649
v z/OS TSO/E CLISTs, SA22-7781
v z/OS TSO/E Command Reference, SA22-7782
v z/OS TSO/E Customization, SA22-7783
v z/OS TSO/E Messages, SA22-7786
v z/OS TSO/E Programming Guide, SA22-7788
v z/OS TSO/E Programming Services, SA22-7789
v z/OS TSO/E REXX Reference, SA22-7790
v z/OS TSO/E User's Guide, SA22-7794
v z/OS UNIX System Services Command Reference, SA22-7802
v z/OS UNIX System Services Messages and Codes, SA22-7807
v z/OS UNIX System Services Planning, GA22-7800
v z/OS UNIX System Services Programming: Assembler Callable Services Reference, SA22-7803
v z/OS UNIX System Services User's Guide, SA22-7801
Numerics
16-KB page size 113 32-KB page size 113 8-KB page size 113
A
abend AEY9 529 after SQLCODE -923 534 ASP7 529 backward log recovery 613 CICS abnormal termination of application 529 scenario 534 transaction abends when disconnecting from DB2 391 waits 529 current status rebuild 601 disconnects DB2 400 DXR122E 521 effects of 448 forward log recovery 608 IMS U3047 528 U3051 528 IMS, scenario 526, 528 IRLM scenario 521 stop command 382 stop DB2 381 log damage 597 initialization 600 lost information 619 page problem 618 restart 599 starting DB2 after 326 VVDS (VSAM volume data set) destroyed 553 out of space 553 acceptance option 243 access control authorization exit routine 1065 closed application 215, 227 DB2 subsystem local 131, 232 process overview 231 RACF 131 remote 132, 238 external DB2 data sets 132 field level 143 internal DB2 data 129
active log (continued) size determining 704 tuning considerations 704 truncation 431 writing 430 activity sample table 1035 ADD VOLUMES clause of ALTER STOGROUP statement 68 address space DB2 13 started-task 267 stored procedures 268 ADMIN_COMMAND_DB2 stored procedure 1309 ADMIN_COMMAND_DSN stored procedure 1321 ADMIN_COMMAND_UNIX stored procedure 1324 ADMIN_DB_BROWSE stored procedure 1327 ADMIN_DB_DELETE stored procedure 1331 ADMIN_DS_LIST stored procedure 1333 ADMIN_DS_RENAME stored procedure 1339 ADMIN_DS_SEARCH stored procedure 1342 ADMIN_DS_WRITE stored procedure 1344 ADMIN_INFO_HOST stored procedure 1349 ADMIN_INFO_SSID stored procedure 1352 ADMIN_INFO_SYSPARM stored procedure 1354 ADMIN_JOB_CANCEL stored procedure 1357 ADMIN_JOB_FETCH stored procedure 1360 ADMIN_JOB_QUERY stored procedure 1363 ADMIN_JOB_SUBMIT stored procedure 1366 ADMIN_TASK_ADD 339 ADMIN_TASK_REMOVE 349 ADMIN_UTL_SCHEDULE stored procedure 1369 ADMIN_UTL_SORT stored procedure 1379 administrative authority 138 administrative task scheduler adding a task 335 ADMIN_TASK_ADD 339 ADMIN_TASK_REMOVE 349 architecture 355 data sharing environment 358 data sharing specifying a scheduler 339 synchronization 351 disabling tracing 352 enabling tracing 352 interacting 335 JCL jobs 364 lifecycle 356 listing scheduled tasks 347 listing task status 347 multi-threaded execution of tasks 362 overview 335 protecting resources 360 protecting the interface 360 recovering the task list 353 removing a task 348 sample schedule definitions 337 scheduling capabilities 335 secure execution of tasks 361 security 359 SQL code returned by stored procedure 354 SQL code returned by user-defined table function 354 starting 351 stopping 351 stored procedures execution 364 task did not execute 353 task execution 362
administrative task scheduler (continued) task execution in a data sharing environment 365 task lists 357 troubleshooting 352 Unicode restrictions 364 user roles 359 alias ownership 146 qualified name 146 ALL clause of GRANT statement 134 ALL PRIVILEGES clause GRANT statement 137 allocating space effect on INSERTs 664 preformatting 664 table 37 allocating storage dictionary 116 table 112 already-verified acceptance option 244 ALTER command of access method services 555 ALTER command, access method services FOR option 38 TO option 38 ALTER DATABASE statement usage 68 ALTER FUNCTION statement usage 96 ALTER privilege description 134 ALTER PROCEDURE statement 96 ALTER STOGROUP statement 67 ALTER TABLE statement AUDIT clause 288 description 71 ALTER TABLESPACE statement description 69 ALTERIN privilege description 136 ambiguous cursor 860, 1012 APPL statement options SECACPT 243 application plan controlling application connections 389 controlling use of DDL 215, 227 inoperative, when privilege is revoked 186 invalidated dropping a table 89 dropping a view 96 dropping an index 94 when privilege is revoked 186 list of dependent objects 90 monitoring 1203 privileges explicit 137 of ownership 146 retrieving catalog information 190 application program coding SQL statements data communication coding 16 error checking in IMS 329 internal integrity reports 298 recovery scenarios CICS 529 IMS 528
application program (continued) running batch 329 CAF (call attachment facility) 330 CICS transactions 329 error recovery scenario 524, 525 IMS 328 RRSAF (Resource Recovery Services attachment facility) 331 TSO online 327 security measures in 153 suspension description 814 timeout periods 838 application programmer description 171 privileges 177 application registration table (ART) See registration tables for DDL archive log BSDS 441 data set changing high-level qualifier for 99 description 433 offloading 429 types 433 deleting 443 description 8 device type 433 dual logging 433 dynamic allocation of data sets 433 multivolume data sets 434 recovery scenario 539 retention period 443 SMS 434 writing 430 ARCHIVE LOG command cancels offloads 437 use 435 ARCHIVE LOG FREQ field of panel DSNTIPL 704 ARCHIVE privilege description 136 archiving to disk volumes 434 ARCHWTOR option of DSNZPxxx module 431 ART (application registration table) See registration tables for DDL ASUTIME column resource limit specification table (RLST) 726 asynchronous data from IFI 1178 attachment facility description 14 attachment request come-from check 248 controlling 243 definition 242 translating IDs 247, 260 using secondary IDs 249 AUDIT clause of ALTER TABLE statement 288 clause of CREATE TABLE statement 288 option of START TRACE command 286 audit trace class descriptions 287 controlling 286 description 285, 1198 records 290
auditing access attempts 285, 292 authorization IDs 290 classes of events 287 data 1198 description 127 in sample security plan attempted access 310 payroll data 304 payroll updates 307 reporting trace records 290 security measures in force 290 table access 288 trace data through IFI 1189 authority administrative 138 controlling access to CICS 329 DB2 catalog and directory 144 DB2 commands 323 DB2 functions 323 IMS application program 329 TSO application program 328 description 130, 133 explicitly granted 138, 145 hierarchy 138 level SYS for z/OS command group 320 levels 323 types DBADM 141 DBCTRL 140 DBMAINT 140 installation SYSADM 142 installation SYSOPR 140 PACKADM 140 SYSADM 142 SYSCTRL 141 SYSOPR 140 authorization data definition statements, to use 215 exit routines. See connection exit routine See sign-on exit routine authorization ID auditing 290 checking during thread creation 736 description 134 dynamic SQL, determining 164 exit routine input parameter 1058 inbound from remote location See remote request initial connection processing 233 sign-on processing 236 package execution 149 primary connection processing 233, 234 description 134 exit routine input 1058 privileges exercised by 161 sign-on processing 236, 237 retrieving catalog information 189 routine, determining 161 secondary attachment requests 249 connection processing 234 description 134
authorization ID (continued) secondary (continued) exit routine output 1060, 1077 identifying RACF groups 272 ownership held by 147 privileges exercised by 161 sign-on processing 237 SQL changing 134 description 134 exit routine output 1060, 1077 privileges exercised by 161 system-directed access 150 translating inbound IDs 247 outbound IDs 260 verifying 243 automatic data management 481 deletion of archive log data sets 443 rebind EXPLAIN processing 941 restart function of z/OS 453 automatic query rewrite definition 885 description of process 895 determining occurrence of 900 enabling query optimization 893 examples 896 exploiting 893 introduction 885, 893 query requirements 895 auxiliary storage 29 auxiliary table LOCK TABLE statement 868 availability recovering data sets 495 page sets 495 recovery planning 475 summary of functions for 12 AVGKEYLEN column SYSINDEXES catalog table 905 SYSINDEXPART catalog table 906 AVGROWLEN column SYSTABLEPART catalog table 908 SYSTABLES catalog table data collected by RUNSTATS utility 909 SYSTABLES_HIST catalog table 915 SYSTABLESPACE catalog table 909 AVGSIZE column SYSLOBSTATS catalog table 907
B
BACKOUT DURATION field of panel DSNTIPL 454 backup data set using DFSMShsm 481 database concepts 475 DSN1COPY 502 image copies 493 planning 475 system procedures 475 BACKUP SYSTEM utility 37 backward log recovery phase recovery scenario 613
backward log recovery phase (continued) restart 452 base table distinctions from temporary tables 48 basic direct access method (BDAM) See BDAM (basic direct access method) basic sequential access method (BSAM) See BSAM (basic sequential access method) batch message processing (BMP) program See BMP (batch message processing) program batch processing TSO 329 BDAM (basic direct access method) 433 BIND PACKAGE subcommand of DSN options DISABLE 153 ENABLE 153 ISOLATION 850 OWNER 148 RELEASE 847 REOPT(ALWAYS) 779 REOPT(NONE) 779 REOPT(ONCE) 779 privileges for remote bind 154 BIND PLAN subcommand of DSN options ACQUIRE 847 DISABLE 153 ENABLE 153 ISOLATION 850 OWNER 148 RELEASE 847 REOPT(ALWAYS) 779 REOPT(NONE) 779 REOPT(ONCE) 779 BIND privilege description 137 BINDADD privilege description 136 BINDAGENT privilege description 136 naming plan or package owner 148 binding privileges needed 164 bit data altering subtype 88 blank column with a field procedure 1095 block fetch description 1009 enabling 1011 LOB data impact 1011 scrollable cursors 1011 BMP (batch message processing) program connecting from dependent regions 398 bootstrap data set (BSDS) See BSDS (bootstrap data set) BSAM (basic sequential access method) reading archive log data sets 433 BSDS (bootstrap data set) archive log information 442 changing high-level qualifier of 99 changing log inventory 442 defining 441 description 9 dual copies 441 dual recovery 543
BSDS (bootstrap data set) (continued) failure symptoms 599 logging considerations 702 managing 441 recovery scenario 542, 616 registers log data 441 restart use 448 restoring from the archive log 543 single recovery 543 stand-alone log services role 1130 BSDS privilege description 136 buffer information area used in IFI 1161 buffer pool advantages of large pools 679 advantages of multiple pools 679 allocating storage 679 altering attributes 681 available pages 672 considerations 713 description 9 displaying current status 681 hit ratio 677 immediate writes 685 in-use pages 672 long-term page fix option 681 monitoring 683, 1202 page-stealing algorithm 680 read operations 672 size 678, 738 statistics 683 thresholds 673, 685 update efficiency 684 updated pages 672 use in logging 429 write efficiency 684 write operations 672 BUFFERPOOL clause ALTER INDEX statement 673 ALTER TABLESPACE statement 673 CREATE DATABASE statement 673 CREATE INDEX statement 673 CREATE TABLESPACE statement 673 BUFFERPOOL privilege description 138 built-in functions for encryption See encryption
C
cache dynamic SQL effect of RELEASE(DEALLOCATE) 848 cache controller 702 cache for authorization IDs 151 CAF (call attachment facility) application program running 330 submitting 331 description 18 DSNALI language interface module 1159 call attachment facility (CAF) See CAF (call attachment facility) CANCEL THREAD command CICS threads 390 disconnecting from TSO 387 use in controlling DB2 connections 415
capturing changed data altering a table for 87 CARDF column SYSCOLDIST catalog table access path selection 904 data collected by RUNSTATS utility 904 SYSCOLDIST_HIST catalog table 913 SYSINDEXPART catalog table data collected by RUNSTATS utility 906 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table data collected by RUNSTATS utility 908 SYSTABLEPART_HIST catalog table 915 SYSTABLES catalog table data collected by RUNSTATS utility 909 SYSTABLES_HIST catalog table 915 SYSTABSTATS catalog table data collected by RUNSTATS utility 910 SYSTABSTATS_HIST catalog table 915 CARDINALITY column of SYSROUTINES catalog table 908 cardinality of user-defined table function improving query performance 798 Cartesian join 962 catalog statistics history 913, 917 influencing access paths 804 catalog tables frequency of image copies 480 historical statistics 913, 917 retrieving information about multiple grants 188 plans and packages 190 privileges 187 routines 190 SYSCOLAUTH 187 SYSCOLDIST data collected by RUNSTATS utility 904 SYSCOLDIST_HIST 913 SYSCOLDISTSTATS data collected by RUNSTATS utility 904 SYSCOLSTATS data collected by RUNSTATS utility 904 SYSCOLUMNS column description of a value 1094 data collected by RUNSTATS utility 905 updated by ALTER TABLE statement 71 updated by DROP TABLE 89 SYSCOLUMNS_HIST 913 SYSCOPY discarding records 514 holds image copy information 484 image copy in log 1117 used by RECOVER utility 478 SYSDBAUTH 187 SYSINDEXES access path selection 920 data collected by RUNSTATS utility 905 dropping a table 90 SYSINDEXES_HIST 913 SYSINDEXPART data collected by RUNSTATS utility 906 space allocation information 41 SYSINDEXPART_HIST 914 SYSINDEXSTATS data collected by RUNSTATS utility 907 SYSINDEXSTATS_HIST 914
catalog tables (continued) SYSLOBSTATS data collected by RUNSTATS utility 907 SYSLOBSTATS_HIST 914 SYSPACKAUTH 187 SYSPLANAUTH checked during thread creation 736 plan authorization 187 SYSPLANDEP 90 SYSRESAUTH 187 SYSROUTINES data collected by RUNSTATS utility 908 using SECURITY column of 277 SYSSTOGROUP storage groups 29 SYSSTRINGS establishing conversion procedure 1090 SYSSYNONYMS 89 SYSTABAUTH authorization information 187 dropping a table 90 SYSTABLEPART data collected by RUNSTATS utility 908 PAGESAVE column 711 table spaces associated with storage group 68 updated by LOAD and REORG utilities for data compression 711 SYSTABLEPART_HIST 915 SYSTABLES data collected by RUNSTATS utility 909 updated by ALTER TABLE statement 71 updated by DROP TABLE 89 updated by LOAD and REORG for data compression 711 SYSTABLES_HIST 915 SYSTABLESPACE data collected by RUNSTATS utility 909 implicitly created table spaces 45 SYSTABSTATS data collected by RUNSTATS utility 910 PCTROWCOMP column 711 SYSTABSTATS_HIST 915 SYSUSERAUTH 187 SYSVIEWDEP view dependencies 90 SYSVOLUMES 29 views of 191 catalog, DB2 authority for access 144 changing high-level qualifier 102 description 7 DSNDB06 database 484 locks 828 point-in-time recovery 498 recovery 498 recovery scenario 551 statistics production system 927 querying the catalog 920 tuning 699 CDB (communications database) backing up 477 changing high-level qualifier 102 description 8 updating tables 247 CHANGE command of IMS purging residual recovery entries 392
change log inventory utility changing BSDS 380, 442 control of data set access 282 change number of sessions (CNOS) See CNOS (change number of sessions) CHANGE SUBSYS command of IMS 397 CHARACTER data type altering 88 CHECK DATA utility checks referential constraints 297 CHECK INDEX utility checks consistency of indexes 297 checkpoint log records 1115, 1119 queue 458 CHECKPOINT FREQ field of panel DSNTIPN 704 CI (control interval) description 429, 433 reading 1130 CICS commands accessing databases 388 DSNC DISCONNECT 390 DSNC DISPLAY PLAN 390 DSNC DISPLAY TRANSACTION 390 DSNC STOP 391 response destination 322 used in DB2 environment 317 connecting to controlling 392 disconnecting applications 390 thread 390 connecting to DB2 authorization IDs 329 connection processing 233 controlling 387 disconnecting applications 425 sample authorization routines 235 security 279 sign-on processing 236 supplying secondary IDs 233 correlating DB2 and CICS accounting records 656 description, attachment facility 15 disconnecting from DB2 391 dynamic plan selection exit routine 1107 facilities 1107 diagnostic trace 424 monitoring facility (CMF) 648, 1191 tools 1193 language interface module (DSNCLI) IFI entry point 1159 running CICS applications 329 operating entering DB2 commands 321 identify outstanding indoubt units 465 recovery from system failure 16 terminates AEY9 535 planning DB2 considerations 15 environment 329 programming applications 329 recovery scenarios application failure 529 attachment facility failure 534
CICS (continued) recovery scenarios (continued) CICS not operational 529 DB2 connection failure 530 indoubt resolution failure 531 starting a connection 388 statistics 1191 system administration 16 two-phase commit 459 XRF (extended recovery facility) 16 CICS transaction invocation stored procedure 1108 claim class 869 definition 869 effect of cursor WITH HOLD 861 Class 1 elapsed time 648 CLOSE clause of CREATE INDEX statement effect on virtual storage use 712 clause of CREATE TABLESPACE statement deferred close 697 effect on virtual storage use 712 closed application controlling access 215, 227 definition 215 cluster ratio description 921 effects low cluster ratio 921 table space scan 951 with list prefetch 975 CLUSTERED column of SYSINDEXES catalog table data collected by RUNSTATS utility 905 CLUSTERING column SYSINDEXES_HIST catalog table 914 CLUSTERING column of SYSINDEXES catalog table access path selection 905 CLUSTERRATIO column SYSINDEXSTATS_HIST catalog table 914 CLUSTERRATIOF column SYSINDEXES catalog table data collected by RUNSTATS utility 906 SYSINDEXES_HIST catalog table 914 SYSINDEXSTATS catalog table access path selection 907 CNOS (change number of sessions) failure 561 coding exit routines general rules 1108 parameters 1110 COLCARD column of SYSCOLSTATS catalog table data collected by RUNSTATS utility 905 updating 918 COLCARDDATA column of SYSCOLSTATS catalog table 905 COLCARDF column SYSCOLUMNS catalog table 905 SYSCOLUMNS_HIST catalog table 913 COLCARDF column of SYSCOLUMNS catalog table statistics not exact 910 updating 918 cold start See also conditional restart bypassing the damaged log 598 recovery operations during 457 special situations 619
COLGROUPCOLNO column SYSCOLDIST catalog table access path selection 904 SYSCOLDIST_HIST catalog table 913 SYSCOLDISTSTATS catalog table data collected by RUNSTATS utility 904 collection, package administrator 171 privileges on 136 column adding to a table 71 description 6 dropping from a table 88 column description of a value 1094 column value descriptor (CVD) 1096 COLVALUE column SYSCOLDIST catalog table access path selection 904 SYSCOLDIST_HIST catalog table 913 SYSCOLDISTSTATS catalog table data collected by RUNSTATS utility 904 come-from check 248 command prefix messages 332 multi-character 320 usage 320 command recognition character (CRC) See CRC (command recognition character) commands concurrency 813, 868 entering 317 issuing DB2 commands from IFI 1160 operator 318, 324 prefixes 333 commit two-phase process 459 common SQL API XML message documents 1390 communications database (CDB) See CDB (communications database) compatibility locks 827 Complete mode common SQL API 1388 compressing data See data compression compression dictionary See data compression, dictionary concurrency commands 813, 868 contention independent of databases 829 control by drains and claims 868 control by locks 814 description 814 effect of ISOLATION options 855, 856 lock escalation 835 lock size 824 LOCKSIZE options 842 row locks 843 uncommitted read 854 recommendations 816 utilities 813, 868 utility compatibility 871 with real-time statistics 1260
conditional restart control record backward log recovery failure 615 current status rebuild failure 607 forward log recovery failure 613 log initialization failure 607 wrap-around queue 458 description 455 excessive loss of active log data, restart procedure 620 total loss of log, restart procedure 619 connection controlling CICS 387 controlling IMS 392 DB2 controlling commands 387 thread 749 displaying IMS activity 398 effect of lost, on restart 463 exit routine See connection exit routine exit routine. See connection exit routine IDs cited in message DSNR007I 450 outstanding unit of recovery 450 used by IMS 329 used to identify a unit of recovery 525 processing See connection processing requests exit point 1057 initial primary authorization ID 233 invoking RACF 233 local 232 VTAM 267 connection exit routine debugging 1063 default 233, 234 description 1055 performance considerations 1062 sample location 1056 provides secondary IDs 234, 1061 secondary authorization ID 234 using 233 writing 1055 connection processing choosing for remote requests 243 initial primary authorization ID 233, 1060 invoking RACF 233 supplying secondary IDs 234 usage 232 using exit routine 233 continuous block fetch See block fetch continuous operation recovering table spaces and data sets 495 recovery planning 12, 475 control interval (CI) See CI (control interval) control interval, sizing 28 control region, IMS 397 conversation acceptance option 243, 244 conversation-level security 243 conversion procedure description 1090
conversion procedure (continued) writing 1090 coordinator in multi-site update 471 in two-phase commit 459 copy pools creating 37 SMS construct 37 COPY privilege description 136 COPY utility backing up 502 copying data from table space 493 DFSMSdss concurrent copy 485, 495 effect on real-time statistics 1256 restoring data 502 COPY-pending status resetting 63 copying a DB2 subsystem 109 a package, privileges for 154, 164 a relational database 109 correlated subqueries 785 correlation ID CICS 532 duplicate 396, 533 identifier for connections from TSO 386 IMS 396 outstanding unit of recovery 450 RECOVER INDOUBT command 389, 395, 403 COST_CATEGORY_B column of RLST 727 CP processing, disabling parallel operations 675 CRC (command recognition character) description 321 CREATE DATABASE statement description 43 privileges required 164 CREATE GLOBAL TEMPORARY TABLE statement distinctions from base tables 48 CREATE IN privilege description 136 CREATE INDEX statement privileges required 164 USING clause 41 CREATE SCHEMA statement 58 CREATE STOGROUP statement description 29 privileges required 164 VOLUMES(*) attribute 30, 35 CREATE TABLE statement AUDIT clause 288 creating a table space implicitly 45 privileges required 164 test table 64 CREATE TABLESPACE statement creating a table space explicitly 44 deferring allocation of data sets 30 DEFINE NO clause 30, 44 DSSIZE option 42 privileges required 164 USING STOGROUP clause 30, 44 CREATE VIEW statement privileges required 164 CREATEALIAS privilege description 136 created temporary table distinctions from base tables 48
created temporary table (continued) table space scan 951 CREATEDBA privilege description 136 CREATEDBC privilege description 136 CREATEIN privilege description 136 CREATESG privilege description 136 CREATETAB privilege description 136 CREATETMTAB privilege description 136 CREATETS privilege description 136 cron format 346 CS (cursor stability) claim class 869 distributed environment 850 drain lock 870 effect on locking 850 optimistic concurrency control 852 page and row locking 852 CURRENDATA option of BIND plan and package options differ 860 CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special register 894 CURRENT REFRESH AGE special register 894 current status rebuild phase of restart 450 recovery scenario 599 CURRENTDATA option BIND PACKAGE subcommand enabling block fetch 1012 BIND PLAN subcommand 1012 cursor ambiguous 860, 1012 defined WITH HOLD, subsystem parameter to release locks 845 WITH HOLD claims 861 locks 861 Customer Information Control System (CICS) See CICS CVD (column value descriptor) 1096, 1098
D
damage, heuristic 468 data See also mixed data access control description 128 field-level 143 using option of START DB2 325 backing up 502 checking consistency of updates 297 coding conversion procedures 1090 date and time exit routines 1087 edit exit routines 1081 field procedures 1093 compression See data compression consistency ensuring 293
data (continued) consistency (continued) verifying 296, 299 definition control support See data definition control support effect of locks on integrity 814 encrypting 1081 improving access 931 loading into tables 61 moving 105 restoring 502 understanding access 931 DATA CAPTURE clause ALTER TABLE statement 87 data class assigning data sets to 42 SMS construct 42 data compression determining effectiveness 710 dictionary description 116, 709 estimating disk storage 116 estimating virtual storage 117 DSN1COMP utility 710 edit routine 1081 effect on log records 1116 Huffman 1081 logging 429 performance considerations 708 data definition control support bypassing 228 controlling by application name 220 application name with exceptions 221 object name 222 object name with exceptions 224 description 215 registration tables See registration tables for DDL restarting 228 stopping 228 Data Facility Product (DFSMSdfp) 107 data management threshold (DMTH) 674 data mirror 582 data set adding 557 adding groups to control 281 allocation and extension 737 backing up using DFSMS 495 changing high-level qualifier 98 closing 696 control over creating 283 controlling access 281 copying 493 DSMAX value 693 extending 555 generic profiles 281, 283 limit 693 managing defining your own 27 migrating to DFSMShsm 35 reasons for defining your own 37 reasons for using DB2 27 using access method services 38 using DB2 storage groups 27 using DFSMShsm 27, 35 with DB2 storage groups 27
data set (continued) managing (continued) your own 37 monitoring I/O activity 699 naming convention 38 number 39 open 693, 737 partition of partitioned partitioning index 38 partitioned secondary index 38 partitioned table space 38 recovering using non-DB2 dump 508 renaming 490 table space, deferring allocation 30 Data Set Services (DFSMSdss) 107 data set, DB2-managed extending avoiding failure 31 conditions for 31 description 31 insufficient space 31 nonpartitioned spaces 31 partitioned spaces 31 primary space allocation 32 example 34 secondary space allocation 32 example 34 data sharing real-time statistics 1259 using IFI 1185 data structure hierarchy 4 types 3 data type altering 88 codes for numeric data 1114 subtypes 88 database access thread differences from allied threads 741 failure 560 security failures in 562 altering definition 68 design 67 backup copying data 493 planning 475 balancing 296 controlling access 404 creating 43 default database See default database (DSNDB04) description 5 dropping 68 DSNDB07 (work file database). See work file database implementing design 43 monitoring 369 operations that lock 43 page set control log records 1120 privileges administrator 171, 176 controller 176 description 136 ownership 146
database (continued) reasons for defining 43 recovery description 495 failure scenarios 546 planning 475 RECOVER TOCOPY 504 RECOVER TOLOGPOINT 504 RECOVER TORBA 505 starting 368 starting and stopping as unit 43 status information 369 stopping 375 TEMP for declared temporary tables 44 users who need their own 44 database controller privileges 176 database descriptor (DBD) See DBD (database descriptor) database exception table, log records exception states 1116 image copies of special table spaces 1116 LPL 1120 WEPR 1120 DataPropagator NonRelational (DPropNR) See DPropNR (DataPropagator NonRelational) DataRefresher 65 DATE FORMAT field of panel DSNTIPF 1088 date routine DATE FORMAT field at installation 1088 description 1087 LOCAL DATE LENGTH field at installation 1088 writing 1087 datetime exit routine for. See date routine See time routine format table 1087 DB2 Buffer Pool Analyzer description 1202 DB2 coded format for numeric data 1114 DB2 commands authority 323 authorized for SYSOPR 324 commands RECOVER INDOUBT 469 RESET INDOUBT 470 START DB2 325 START DDF 405 STOP DDF 421 STOP DDF MODE(SUSPEND) 405 description 318 destination of responses 322 entering from CICS 321 DSN session 328 IMS 320 TSO 321 z/OS 320 issuing from IFI 1160, 1162 users authorized to enter 323 DB2 Connect 18 DB2 data set statistics obtaining through IFCID 0199 1174 DB2 DataPropagator altering a table for 87 DB2 decoded procedure for numeric data 1114
DB2 Interactive (DB2I) See DB2I (DB2 Interactive) DB2 Performance Monitor (DB2 PM) See DB2 Performance Expert DB2 PM (DB2 Performance Monitor) EXPLAIN 930 DB2 private protocol access description 1007 resource limit facility 732 DB2-managed objects, changing data set high-level qualifier 104 DB2I (DB2 Interactive) description 11, 327 panels description 17 used to connect from TSO 385 DBA (database administrator) description 171 sample privileges 176 DBADM authority description 141 DBCTRL authority description 140 DBD (database descriptor) contents 8 EDM pool 685, 687 freeing 739 load in EDM pool 737 using ACQUIRE(ALLOCATE) 736 locks on 829 use count 739 DBD01 directory table space contents 8 placement of data sets 698 quiescing 486 recovery after conditional restart 500 recovery information 484 DBFULTA0 (Fast Path Log Analysis Utility) 1191 DBMAINT authority description 140 DD limit See DSMAX DDCS (data definition control support) database 9 DDF (distributed data facility) block fetch 1009 controlling connections 404 description 18 resuming 405 suspending 405 DDL, controlling usage of See data definition control support deadlock description 815 detection scenarios 881 example 815 recommendation for avoiding 818 row vs. page locks 843 wait time calculation 839 with RELEASE(DEALLOCATE) 819 X00C90088 reason code in SQLCA 816 DEADLOCK TIME field of panel DSNTIPJ 837 DEADLOK option of START irlmproc command 836 decision, heuristic 468 DECLARE GLOBAL TEMPORARY TABLE statement distinctions from base tables 48
declared temporary table distinctions from base tables 48 DECRYPT_BIT 209, 210 DECRYPT_CHAR 209, 210 DECRYPT_DB 209, 210 decryption See encryption dedicated virtual memory pool 971 default database (DSNDB04) changing high-level qualifier 102 defining 5 DEFER ALL field of panel DSNTIPS 454 deferred close 693 deferred write threshold (DWQT) description 675 recommendation for LOBs 677 DEFINE CLUSTER command of access method services 663 DEFINE CLUSTER command, access method services defining extents 40 example of use 40 LINEAR option 40 REUSE option 40 SHAREOPTIONS 40 DEFINE command of access method services recreating table space 619 redefine user work file 497 DEFINE command, access method services FOR option 38 TO option 38 definer, description 155 DELETE command of access method services 619 statement validation routine 1085 DELETE privilege description 134 deleting archive logs 443 department sample table description 1036 dependent regions, disconnecting from 400 DFSLI000 (IMS language interface module) 329, 1159 DFSMS (Data Facility Storage Management Subsystem) archive log data sets 434 backup 495 concurrent copy backup 495 description 20 recovery 495 DFSMSdfp (Data Facility Product) 107 DFSMSdfp partitioned data set extended (PDSE) 20 DFSMSdss (Data Set Services) 107 DFSMShsm assigning indexes to data classes 35 assigning table spaces to data classes 35 migrating data sets 35 recalling archive logs 36 using with BACKUP SYSTEM utility 37 using with RECOVER utility 36 DFSMShsm (Data Facility Hierarchical Storage Manager) advantages 35 backup 481 moving data sets 107 recovery 481 DFSxxxx messages 332
dictionary See data compression direct row access 946 directory authority for access 144 changing high-level qualifier 102 description 8 frequency of image copies 480 order of recovering I/O errors 551 point-in-time recovery 498 recovery 498 SYSLGRNX table discarding records 514 records log RBA ranges 484 table space names 8 DISABLE option limits plan and package use 153 disaster recovery data mirroring 582 preparation 487 scenario 563 using a tracker site 574 disaster, rolling 582 disconnecting applications 390, 392 CICS from DB2, command 388 DB2 from TSO 387 discretionary access checking 193 disk altering storage group assignment 67 data set, allocation and extension 707 improving utilization 707 requirements 111 DISPLAY command of IMS SUBSYS option 392, 398 DISPLAY DATABASE command displaying LPL entries 373 SPACENAM option 372 status checking 298 DISPLAY DDF command displays connections to DDF 406 DISPLAY FUNCTION SPECIFIC command displaying statistics about external user-defined functions 378 DISPLAY LOCATION command controls connections to DDF 407 DISPLAY NET command of VTAM 416 DISPLAY OASN command of IMS displaying RREs 397 produces OASN 528 DISPLAY privilege description 136 DISPLAY PROCEDURE command example 417 DISPLAY THREAD command extensions to control DDF connections DETAIL option 410 LOCATION option 409 LUWID option 413 messages issued 384 options DETAIL 410 LOCATION 409 LUWID 413 TYPE (INDOUBT) 532 shows IMS threads 393, 398
DISPLAY THREAD command (continued) shows parallel tasks 1002 DISPLAY TRACE command AUDIT option 286 DISPLAY UTILITY command data set control log record 1115 DISPLAYDB privilege description 136 displaying buffer pool information 681 indoubt units of recovery 394, 532 information about originating threads 386 parallel threads 386 postponed units of recovery 395 distinct type privileges of ownership 146 DISTINCT TYPE privilege, description 138 distributed data controlling connections 404 DB2 private protocol access See DB2 private protocol DRDA protocol See DRDA access operating displaying status 1174 in an overloaded network 636 performance considerations 1008 programming block fetch 1009 FOR FETCH ONLY 1011 FOR READ ONLY 1011 resource limit facility 731 server-elapsed time monitoring 1021 tuning 1008 distributed data facility (DDF) See DDF (distributed data facility) Distributed Relational Database Architecture (DRDA) 18 distribution statistics 919 DL/I batch features 17 loading data 65 DL/I BATCH TIMEOUT field of installation panel DSNTIPI 838 DMTH (data management threshold) 674 double-hop situation 150 down-level detection controlling 549 LEVEL UPDATE FREQ field of panel DSNTIPN 549 down-level page sets 548 DPropNR (DataPropagator NonRelational) 17 DPSI performance considerations 792 drain definition 869 DRAIN ALL 872 wait calculation 840 drain lock description 813, 870 types 870 wait calculation 840 DRDA access description 1007 resource limit facility 732 security mechanisms 238
DROP statement TABLE 89 TABLESPACE 69 DROP privilege description 136 DROPIN privilege description 136 dropping columns from a table 88 database 68 privileges needed for package 164 table spaces 69 tables 89 views 96 volumes from a storage group 67 DSMAX calculating 694 limit on open data sets 693 DSN command of TSO command processor connecting from TSO 385 description 18 invoked by TSO batch work 330 invoking 17 issues commands 328 running TSO programs 327 subcommands END 387 DSN command processor See DSN command of TSO DSN message prefix 331 DSN_PREDICAT_TABLE 1205 DSN_STATEMNT_TABLE table column descriptions 986 DSN1CHKR utility control of data set access 282 DSN1COMP utility description 710 DSN1COPY utility control of data set access 282 resetting log RBA 627 restoring data 502 DSN1LOGP utility control of data set access 282 example 607 extract log records 1115 JCL sample 604 limitations 624 print log records 1115 shows lost work 597 DSN1PRNT utility description 282 DSN3@ATH connection exit routine. See connection exit routine DSN3@SGN sign-on exit routine. See sign-on exit routine DSN6SPRM macro RELCURHL parameter 845 DSN6SYSP macro PCLOSEN parameter 697 PCLOSET parameter 697 DSN8EAE1 exit routine 1081 DSN8HUFF edit routine 1081 DSNACCOR stored procedure description 1263
DSNACCOR stored procedure (continued) example call 1277 option descriptions 1265 output 1281 syntax diagram 1265 DSNACICS stored procedure debugging 1294 description 1287 invocation example 1292 invocation syntax 1288 output 1294 parameter descriptions 1288 restrictions 1294 DSNACICX user exit routine description 1290 parameter list 1291 rules for writing 1290 DSNAEXP stored procedure description 1306 example call 1308 option descriptions 1307 output 1309 syntax diagram 1307 DSNAIMS option descriptions 1298 syntax diagram 1298 DSNAIMS stored procedure description 1298 examples 1301 DSNAIMS2 option descriptions 1303 syntax diagram 1302 DSNAIMS2 stored procedure description 1302 examples 1305 DSNALI (CAF language interface module) inserting 1159 DSNC command of CICS destination 323 prefix 333 DSNC DISCONNECT command of CICS description 390 terminate DB2 threads 387 DSNC DISPLAY command of CICS description 387 DSNC DISPLAY PLAN 390 DSNC DISPLAY TRANSACTION 390 DSNC STOP command of CICS stop DB2 connection to CICS 387 DSNC STRT command of CICS example 388 processing 390 start DB2 connection to CICS 387 DSNC transaction code entering DB2 commands 321 DSNCLI (CICS language interface module) entry point 1159 running CICS applications 329 DSNCRCT (resource control table) See RCT (resource control table) DSNDAIDL mapping macro 1058 DSNDB01 database authority for access 144 DSNDB04 default database See default database (DSNDB04) DSNDB06 database authority for access 144
DSNDB06 database (continued) changing high-level qualifier 102 DSNDB07 database. See work file database DSNDDTXP mapping macro 1089 DSNDEDIT mapping macro 1082 DSNDEXPL mapping macro 1110 DSNDFPPB mapping macro 1096 DSNDIFCA mapping macro 1181 DSNDQWIW mapping macro 1187 DSNDROW mapping macro 1112 DSNDRVAL mapping macro 1085 DSNDSLRB mapping macro 1130 DSNDSLRF mapping macro 1136 DSNDWBUF mapping macro 1161 DSNDWQAL mapping macro 1165 DSNDXAPL parameter list 1070 DSNELI (TSO language interface module) 327, 1159 DSNJSLR macro capturing log records 1115 stand-alone CLOSE 1136 stand-alone sample program 1137 DSNLEUSR Encrypting inbound IDs 249 Encrypting outbound IDs 262 Encrypting passwords 262 DSNLEUSR stored procedure description 1295 example call 1296 option descriptions 1295 output 1297 syntax diagram 1295 DSNMxxx messages 332 DSNTEJ1S job 59 DSNTESP data set 924 DSNTIJEX job exit routines 1056 DSNTIJIC job improving recovery of inconsistent data 490 DSNTIJSG job installation 723 DSNUM column SYSINDEXPART catalog table data collected by RUNSTATS utility 906 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table data collected by RUNSTATS utility 908 SYSTABLEPART_HIST catalog table 915 DSNX@XAC access control authorization exit routine 1065 DSNZPxxx subsystem parameters module specifying an alternate 325 dual logging active log 430 archive logs 433 description 9 restoring 441 synchronization 431 dump caution about using disk dump and restore 496 duration of locks controlling 847 description 824 DWQT option of ALTER BUFFERPOOL command 675 dynamic plan selection exit routine 1107
dynamic plan selection in CICS exit routine. See plan selection exit routine dynamic prefetch description 974 dynamic SQL authorization 164 caching effect of RELEASE bind option 848 example 168 privileges required 164 skeletons, EDM pool 685 DYNAMICRULES description 164 example 168
E
EA-enabled page sets 42 EAV (Extended Address Volumes) 716 edit procedure, changing 87 edit routine description 294, 1081 ensuring data accuracy 294 row formats 1110 specified by EDITPROC option 1081 writing 1081 EDITPROC clause exit points 1082 specifies edit exit routine 1082 EDM pool DBD freeing 739 description 685 EDPROC column of SYSTABLES catalog table 909 employee photo and resume sample table 1041 employee sample table 1038 employee-to-project-activity sample table 1044 ENABLE option of BIND PLAN subcommand 153 enclave 745 ENCRYPT 209, 210 encrypting data 1081 passwords from workstation 263 passwords on attachment requests 244, 262 encryption 207 built-in functions for 207 column level 208 data 207 defining columns for 208 non-character values 211 password hints 210, 211 performance recommendations 212 predicate evaluation 211 value level 210 with views 209 END subcommand of DSN disconnecting from TSO 387 Enterprise Storage Server backup 495 environment, operating CICS 329 DB2 19 IMS 329 TSO 327 z/OS 19
ERRDEST option DSNC MODIFY 388 unsolicited CICS messages 332 error application program 524 IFI (instrumentation facility interface) 1190 physical RW 373 SQL query 297 escalation, lock 833 escape character example 223 in DDL registration tables 219 EVALUATE UNCOMMITTED field of panel DSNTIP4 846
EXPLAIN tables (continued) DSN_QUERY_TABLE 1231 DSN_SORT_TABLE 1224 DSN_SORTKEY_TABLE 1226 DSN_STRUCT_TABLE 1209 EXPORT command of access method services 107, 487 Extended Address Volumes (EAV) 716 Extended Remote Copy (XRC) 587 EXTENDED SECURITY field of panel DSNTIPR 239 extending a data set, procedure 555 EXTENTS column SYSINDEXPART catalog table data collected by RUNSTATS utility 906 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table data collected by RUNSTATS utility 908 SYSTABLEPART_HIST catalog table 915 external storage See auxiliary storage
F
failure symptoms abend shows log problem during restart 613 restart failed 599, 608 BSDS 599 CICS attachment abends 530 task abends 534 waits 529 IMS loops 526 waits 526 log 599 lost log information 619 message DFH2206 529 DFS555 528 DSNB207I 546 DSNJ 616 DSNJ001I 542 DSNJ004I 537 DSNJ100 616 DSNJ103I 539 DSNJ105I 537 DSNJ106I 537 DSNJ107 616 DSNJ110E 536 DSNJ111E 536 DSNJ114I 540 DSNM002I 526 DSNM004I 526 DSNM005I 527 DSNM3201I 529 DSNP007I 554 DSNP012I 552 DSNU086I 550, 551 no processing is occurring 522 subsystem termination 534 z/OS error recovery program message 540 FARINDREF column SYSTABLEPART_HIST catalog table 915 FARINDREF column of SYSTABLEPART catalog table data collected by RUNSTATS utility 908 FAROFFPOSF column SYSINDEXPART_HIST catalog table 914
FAROFFPOSF column of SYSINDEXPART catalog table data collected by RUNSTATS utility 906 fast copy function Enterprise Storage Server FlashCopy 495 RVA SnapShot 495 fast log apply use during RECOVER processing 492 Fast Path Log Analysis Utility 1191 FETCH FIRST n ROWS ONLY clause effect on OPTIMIZE clause 795 FETCH FIRST n ROW ONLY clause effect on distributed performance 1015 effect on OPTIMIZE clause 1015 field decoding operation definition 1093 input 1103 output 1103 field definition operation definition 1093 input 1099 output 1099 field description of a value 1094 field encoding operation definition 1093 input 1102 output 1102 field procedure changing 87 description 294, 1093 ensuring data accuracy 294 specified by the FIELDPROC clause 1094 writing 1093 field procedure information block (FPIB) 1097 field procedure parameter list (FPPL) 1096 field procedure parameter value list (FPPVL) 1096 field value descriptor (FVD) 1096 field-level access control 143 FIELDPROC clause ALTER TABLE statement 1094 CREATE TABLE statement 1094 filter factor catalog statistics used for determining 910 predicate 766 FIRSTKEYCARD column SYSINDEXSTATS catalog table recommendation for updating 918 FIRSTKEYCARDF column SYSINDEXES catalog table data collected by RUNSTATS utility 906 recommendation for updating 918 SYSINDEXES_HIST catalog table 914 SYSINDEXSTATS catalog table data collected by RUNSTATS utility 907 SYSINDEXSTATS_HIST catalog table 914 fixed-length records, effect on processor resources 667 FORCE option START DATABASE command 369 STOP DB2 command 400, 448 format column 1112 data passed to FPPVL 1098 data set names 38 message 331 recovery log record 1123 row 1112 value descriptors 1091, 1099
forward log recovery phase of restart 451 scenario 608 FPIB (field procedure information block) 1097, 1098 FPPL (field procedure parameter list) 1096 FPPVL (field procedure parameter value list) 1096, 1098 FREE PACKAGE subcommand of DSN privileges needed 164 FREE PLAN subcommand of DSN privileges needed 164 free space description 658 recommendations 659 FREEPAGE clause of ALTER INDEX statement effect on DB2 speed 658 clause of ALTER TABLESPACE statement effect on DB2 speed 658 clause of CREATE INDEX statement effect on DB2 speed 658 clause of CREATE TABLESPACE statement effect on DB2 speed 658 FREESPACE column SYSLOBSTATS catalog table 907 SYSLOBSTATS_HIST catalog table 914 FREQUENCYF column SYSCOLDIST catalog table access path selection 904 SYSCOLDIST_HIST catalog table 913 SYSCOLDISTSTATS catalog table 904 full image copy use after LOAD 705 use after REORG 705 FULLKEYCARDDATA column SYSINDEXSTATS catalog table 907 FULLKEYCARDF column SYSINDEXES catalog table data collected by RUNSTATS utility 906 SYSINDEXES_HIST catalog table 914 SYSINDEXSTATS catalog table 907 SYSINDEXSTATS_HIST catalog table 914 function column when evaluated 950 FUNCTION privilege, description 136 function, user-defined 154 FVD (field value descriptor) 1096, 1098
G
generalized trace facility (GTF). See GTF (generalized trace facility) GET_CONFIG stored procedure 1391 filtering output 1390 GET_MESSAGE stored procedure 1410 filtering output 1390 GET_SYSTEM_INFO stored procedure 1418 filtering output 1390 GETHINT 210 global transaction definition of 465 glossary 1441 governor (resource limit facility) See resource limit facility (governor) GRANT statement examples 174, 180 format 174
GRANT statement (continued) privileges required 164 granting privileges and authorities 174 GROUP BY clause effect on OPTIMIZE clause 796 GROUP DD statement for stand-alone log services OPEN request 1131 GTF (generalized trace facility) event identifiers 1201 format of trace records 1139 interpreting trace records 1144 recording trace records 1201
H
help DB2 UTILITIES panel 11 heuristic damage 468 heuristic decision 468 Hierarchical Storage Manager (DFSMShsm) See DFSMShsm (Hierarchical Storage Manager) HIGH2KEY column SYSCOLSTATS catalog table 905 SYSCOLUMNS catalog table access path selection 905 recommendation for updating 919 SYSCOLUMNS_HIST catalog table 913 HIGHKEY column of SYSCOLSTATS catalog table 905 hints, optimization 806 HMIGRATE command of DFSMShsm (Hierarchical Storage Manager) 107 hop situation 151 host variable example query 779 impact on access path selection 779 in equal predicate 782 tuning queries 779 HRECALL command of DFSMShsm (Hierarchical Storage Manager) 107 Huffman compression. See also data compression, Huffman exit routine 1081 hybrid join description 965 disabling 689
I
I/O activity, monitoring by data set 699 I/O error catalog 551 directory 551 occurrence 441 table spaces 550 I/O processing minimizing contention 663, 716 parallel disabling 675 queries 993 identity column altering attributes 88 loading data into 62 identity columns conditional restart 456 IEFSSNxx member of SYS1.PARMLIB IRLM 380
IFCA (instrumentation facility communication area) command request 1161 description 1180 field descriptions 1181 IFI READS request 1163 READA request of IFI 1177 WRITE request of IFI 1179 IFCID (instrumentation facility component identifier) 0199 683, 699 0330 431, 536 area description 1185 READS request of IFI 1164 WRITE request of IFI 1179 description 1140 identifiers by number 0001 1017, 1173, 1196 0002 1174 0015 736 0021 738 0032 738 0033 738 0038 738 0039 738 0058 737 0070 738 0073 736 0084 738 0088 739 0089 739 0106 1174 0124 1174 0147 1174, 1198 0148 1174 0149 1174 0150 1174 0185 1174 0199 1174 0202 1174 0217 1174 0221 1003 0222 1003 0225 1174 0230 1174 0234 1174 0254 1174 0258 708 0306 1127, 1175 0314 1080 0316 1175 0317 1175 SMF type 1196, 1198 IFI (instrumentation facility interface) asynchronous data 1178 auditing data 1189 authorization 1163 buffer information area 1161 collecting trace data, example 1158 command request, output example 1188 commands READA 1177 READS 1162, 1164 data integrity 1189 data sharing group, in a 1185 decompressing log data 1127 dynamic statement cache information 1175 errors 1190
IFI (instrumentation facility interface) (continued) issuing DB2 commands example 1162 syntax 1160 locking 1189 output area command request 1161 description 1185 example 1162 passing data to DB2, example 1158 qualification area 1165 READS output 1187 READS request 1164 recovery considerations 1190 return area command request 1161 description 1184 READA request 1177 READS request 1163 storage requirements 1164, 1177 summary of functions 1158 synchronous data 1173 using stored procedures 1160 WRITE 1179 writer header 1187 IMAGCOPY privilege description 136 image copy catalog 480 directory 480 frequency vs. recovery speed 480 full use after LOAD 705 incremental frequency 480 making after loading a table 63 recovery speed 480 immediate write threshold (IWTH) 674 implementor, description 155 IMPORT command of access method services 107, 617 IMS commands CHANGE SUBSYS 392, 397 DISPLAY OASN 397 DISPLAY SUBSYS 392, 398 response destination 323 START REGION 400 START SUBSYS 392 STOP REGION 400 STOP SUBSYS 392, 400 TRACE SUBSYS 392 used in DB2 environment 317 connecting to DB2 attachment facility 397 authorization IDs 329 connection ID 329 connection processing 233 controlling 16, 392 dependent region connections 397 disconnecting applications 400 security 279 sign-on processing 236 supplying secondary IDs 233 facilities Fast Path 749 message format 332
IMS (continued) facilities (continued) processing limit 722 regions 748 tools 1193 indoubt units of recovery 464 language interface module (DFSLI000) IFI applications 1159 link-editing 329 LTERM authorization ID for message-driven regions 329 shown by /DISPLAY SUBSYS 398 used with GRANT 324 operating batch work 329 entering DB2 commands 320 recovery from system failure 16 running programs 328 tracing 424 planning design recommendations 748 environment 329 programming application 16 error checking 329 recovery resolution of indoubt units of recovery 464 recovery scenarios 525, 526 system administration 17 thread 393, 394 two-phase commit 459 using with DB2 16 IMS BMP TIMEOUT field of panel DSNTIPI 838 IMS Performance Analyzer (IMS PA) description 1191 IMS transit times 648 IMS transactions stored procedure multiple connections 1301, 1306 option descriptions 1298, 1303 syntax diagram 1298, 1302 IMS.PROCLIB library connecting from dependent regions 397 inactive connections 742 index access methods access path selection 954 by nonmatching index 955 description 952 IN-list index scan 955 matching index columns 945 matching index description 954 multiple 956 one-fetch index scan 957 altering ALTER INDEX statement 92 effects of dropping 94 backward index scan 58 copying 493 costs 952 description 6 evaluating effectiveness 711 implementing 54 locking 828 NOT PADDED advantages of using 57 disadvantages of using 57 index-only access 57
index (continued) NOT PADDED (continued) varying-length column 57 ordered data backward scan 58 forward scan 58 ownership 146 privileges of ownership 146 reasons for using 952 reorganizing 95 space description 6 estimating size 118, 119 recovery scenario 550 storage allocated 41 structure index tree 117 leaf pages 117 overall 118 root page 117 subpages 117 types 54 clustering 55 data-partitioned secondary 56 nonpartitioned secondary 56 partitioning 55 secondary 56 unique 54 versions 94 recycling version numbers 95 INDEX privilege description 134 index structure root page leaf pages 117 index-only access NOT PADDED attribute 57 INDEXSPACESTATS contents 1243 real-time statistics table 1236 indoubt thread displaying information about 468 recovering 469 resetting status 469 resolving 588 information center consultant 171 INITIAL_INSTS column of SYSROUTINES catalog table 908 INITIAL_IOS column of SYSROUTINES catalog table 908 INLISTP 806 INSERT privilege description 134 INSERT processing, effect of MEMBER CLUSTER option of CREATE TABLESPACE 817 INSERT statement example 64 load data 61, 63 installation macros automatic IRLM start 381 installation SYSADM authority privileges 142 use of RACF profiles 283 installation SYSOPR authority privilege 140 use of RACF profiles 283 instrumentation facility communication area (IFCA) See IFCA (instrumentation facility communication area)
instrumentation facility interface (IFI) See IFI (instrumentation facility interface) INSTS_PER_INVOC column of SYSROUTINES catalog table 908 integrated catalog facility changing alias name for DB2 data sets 98 controlling storage 29 integrity IFI data 1189 reports 298 INTENT EXCLUSIVE lock mode 826, 866 INTENT SHARE lock mode 826, 866 Interactive System Productivity Facility (ISPF) See ISPF (Interactive System Productivity Facility) internal resource lock manager (IRLM) See IRLM (internal resource lock manager) invalid LOB, recovering 550 invoker, description 155 invoking DSN command processor 17 IOS_PER_INVOC column of SYSROUTINES catalog table 908 IRLM administering 14 description 14 IRLM (internal resource lock manager) controlling 380 diagnostic trace 424 element name global mode 383 local mode 383 failure 521 IFI trace records 1173 monitoring connection 381, 382 MVS dispatching priority 817 OMEGAMON locking report 878 recovery scenario 521 starting automatically 325, 381 manually 381 startup procedure options 836 stopping 382 trace options, effect on performance 667 workload manager 718 ISOLATION option of BIND PLAN subcommand effects on locks 850 isolation level control by SQL statement example 861 recommendations 819 ISPF (Interactive System Productivity Facility) DB2 considerations 18 requirement 17 system administration 18 tutorial panels 11 IWTH (immediate write threshold) 674
J
JAR 146 privileges of ownership 146 Java class for a routine 146 privileges of ownership 146 Java class privilege description 138 JCL jobs scheduled execution 364
join operation Cartesian 962 description 959 hybrid description 965 disabling 689 join sequence 967 merge scan 963 nested loop 961 star join 967 star schema 967 join sequence definition 758
K
key adding 78 dropping 79 foreign 78, 79 parent 78, 79 unique 78, 79 KEYCARDDATA column SYSCOLDIST catalog table data collected by RUNSTATS utility 904 KEYCOUNTF column SYSINDEXSTATS catalog table 907 SYSINDEXSTATS_HIST catalog table 914
L
language interface modules DFSLI000 1159 DSNALI 1159 DSNCLI description 1159 usage 329 DSNELI 1159 latch 813 LCID (log control interval definition) 1122 LE tokens 713 leaf page description 117 index 117 LEAFDIST column SYSINDEXPART catalog table data collected by RUNSTATS utility 906 SYSINDEXPART_HIST catalog table 914 LEAFFAR column SYSINDEXPART catalog table 906 example 925 SYSINDEXPART_HIST catalog table 914 LEAFNEAR column SYSINDEXPART catalog table 907 SYSINDEXPART_HIST catalog table 914 level of a lock 821 LEVEL UPDATE FREQ field of panel DSNTIPN 549 LIMIT BACKOUT field of panel DSNTIPN 454 limited block fetch See block fetch limited partition scan 948 LIMITKEY column SYSINDEXPART catalog table 907 list prefetch description 974 disabling 689
list prefetch (continued) thresholds 975 LOAD privilege description 135 LOAD utility availability of tables when using 62 CCSID option 62 delimited files 62 effect on real-time statistics 1251 loading DB2 tables 61 making corrections 63 moving data 106 loading data DL/I 65 sequential data sets 61 SQL INSERT statement 63 tables 61 LOB lock concurrency with UR readers 857 description 864 LOB (large object) block fetching 1011 lock duration 866 LOCK TABLE statement 868 locking 864 LOCKSIZE clause of CREATE or ALTER TABLESPACE 868 modes of LOB locks 866 modes of table space locks 866 recommendations for buffer pool DWQT threshold 677 recovering invalid 550 when to reorganize 927 local attachment request 242 LOCAL DATE LENGTH field of panel DSNTIPF 1088 LOCAL TIME LENGTH field of panel DSNTIPF 1088 lock avoidance 846, 859 benefits 814 class drain 813 transaction 813 compatibility 827 DB2 installation options 837 description 813 drain description 870 types 870 wait calculation 840 duration controlling 847 description 824 LOBs 866 page locks 738 effect of cursor WITH HOLD 861 effects deadlock 815 deadlock wait calculation 839 suspension 814 timeout 815 timeout periods 838 escalation description 833 OMEGAMON reports 875
lock (continued) hierarchy description 821 LOB locks 864 LOB table space, LOCKSIZE clause 868 maximum number 841 mode 825 modes for various processes 835 object DB2 catalog 828 DBD 829 description 827 indexes 828 LOCKMAX clause 843 LOCKSIZE clause 842 SKCT (skeleton cursor table) 829 SKPT (skeleton package table) 829 options affecting bind 847 cursor stability 852 IFI (instrumentation facility interface) 1189 IRLM 836 program 847 read stability 855 repeatable read 856 uncommitted read 854 page locks commit duration 738 CS, RS, and RR compared 856 description 821 performance 875 promotion 833 recommendations for concurrency 816 row locks compared to page 843 size controlling 842, 843 page 821 partition 821 table 821 table space 821 storage needed 837 suspension time 878 table of modes acquired 830 trace records 737 LOCK TABLE statement effect on auxiliary tables 868 effect on locks 863 lock/latch suspension time 649 LOCKMAX clause effect of options 843 LOCKPART clause of CREATE and ALTER TABLESPACE effect on locking 822 LOCKS PER TABLE(SPACE) field of panel DSNTIPJ 844 LOCKS PER USER field of panel DSNTIPJ 841 LOCKSIZE clause effect of options 842, 868 recommendations 817 log buffer creating log records 429 retrieving log records 429 size 700 capture exit routine 1115, 1138 changing BSDS inventory 442 checkpoint records 1119 contents 1115
log (continued) deciding how long to keep 443 determining size of active logs 704 dual active copy 430 archive logs 442 synchronization 431 to minimize restart effort 616 effects of data compression 1116 excessive loss 619 failure recovery scenario 535, 539 symptoms 599 total loss 619 hierarchy 429 implementing logging 434 initialization phase failure scenario 599 process 449 operation 298 performance considerations 700 recommendations 701 reading without running RECOVER utility 493 record structure control interval definition (LCID) 1122 database page set control records 1120 format 1123 header (LRH) 1115, 1121 logical 1120 physical 1120 type codes 1124 truncation 607 use backward recovery 452 establishing 429 exit routine 1105 forward recovery 451 managing 427, 480 monitoring 701 record retrieval 429 recovery scenario 616 restarting 448, 453 write threshold 700, 701 log capture exit routine contents of log 1115 reading log records 1138 writing 1105 log capture exit routine. See also DATA CAPTURE clause description 1105 log range directory 8 log record header (LRH) 1121 log record sequence number (LRSN) 1115 log write, forced at commit 701 logical page list (LPL) See LPL (logical page list) LOW2KEY column SYSCOLSTATS catalog table 905 SYSCOLUMNS catalog table access path selection 905 recommendation for updating 919 SYSCOLUMNS_HIST catalog table 913 LOWKEY column of SYSCOLSTATS catalog table 905 LPL option of DISPLAY DATABASE command 373 status in DISPLAY DATABASE output 373
LPL (logical page list) deferred restart 454 description 373 recovering pages methods 374 running utilities on objects 374 LRH (log record header) 1121 LRSN statement of stand-alone log services OPEN request 1133
M
mandatory access checking definition 193 dominance 193 mapping macro DSNDAIDL 1058 DSNDDTXP 1089 DSNDEDIT 1082 DSNDEXPL 1110 DSNDFPPB 1096 DSNDIFCA 1181 DSNDQWIW 1187 DSNDROW 1112 DSNDRVAL 1085 DSNDSLRB 1130 DSNDSLRF 1136 DSNDWBUF 1161 DSNDWQAL 1165 mass delete contends with UR process 857 validation routine 1085 materialization outer join 961 views and nested table expressions 980 materialized query table 888 altering 890 changing 85 changing attributes 86 changing definition of 86 changing to a base table 86 creating 887 defining 887 definition 885 design guidelines 901 enabling for automatic query rewrite 893 examples in automatic query rewrite 896, 902 introduction 885 maintaining 890 populating 890 refresh age 894 refreshing 890 registering 85 registering an existing table as 85 statistics 892 system-maintained 888 use in automatic query rewrite 895 user-maintained 888 MAX BATCH CONNECT field of panel DSNTIPE 749 MAX REMOTE ACTIVE field of panel DSNTIPE 741 MAX REMOTE CONNECTED field of panel DSNTIPE 741 MAX TSO CONNECT field of panel DSNTIPE 749 MAXCSA option of START irlmproc command 836 MEMBER CLUSTER option of CREATE TABLESPACE 817 merge processing views or nested table expressions 979
message DSNJ106I 611 format DB2 331 IMS 332 prefix for DB2 331 receiving subsystem 331 z/OS abend IEC030I 540, 541 IEC031I 540 IEC032I 540 message by identifier $HASP373 325 DFS058 392 DFS058I 400 DFS3602I 528 DFS3613I 393 DFS554I 529 DFS555A 528 DFS555I 529 DSN1150I 614 DSN1157I 607, 614 DSN1160I 607, 615 DSN1162I 607, 614 DSN1213I 621 DSN2001I 531 DSN2025I 534 DSN2034I 531 DSN2035I 531 DSN2036I 531 DSN3100I 324, 326, 534 DSN3104I 326, 534 DSN3201I 530 DSN9032I 405 DSNB204I 546 DSNB207I 546 DSNB232I 548 DSNB440I 1002 DSNC012I 391 DSNC016I 465 DSNC025I 391 DSNI006I 374 DSNI021I 374 DSNI103I 834 DSNJ001I 325, 432, 450, 598, 599 DSNJ002I 432 DSNJ003I 432, 543 DSNJ004I 432, 537 DSNJ005I 432 DSNJ007I 601, 604, 612 DSNJ008E 431 DSNJ012I 601, 602, 610 DSNJ072E 434 DSNJ099I 325 DSNJ100I 542, 598, 616 DSNJ103I 539, 601, 603, 610, 612 DSNJ104I 539, 601 DSNJ105I 537 DSNJ106I 537, 601, 602, 610 DSNJ107I 542, 598, 616 DSNJ108I 542 DSNJ110E 431, 536 DSNJ111E 431, 536 DSNJ113E 601, 603, 610, 611, 616 DSNJ114I 540 DSNJ115I 539 DSNJ119I 598
message by identifier (continued) DSNJ119I 616 DSNJ120I 449, 543 DSNJ123E 542 DSNJ124I 538 DSNJ125I 442, 542 DSNJ126I 542 DSNJ127I 325 DSNJ128I 540 DSNJ130I 449 DSNJ139I 432 DSNJ301I 542 DSNJ302I 542 DSNJ303I 542 DSNJ304I 542 DSNJ305I 542 DSNJ306I 542 DSNJ307I 542 DSNJ311E 436 DSNJ312I 437 DSNJ317I 436 DSNJ318I 437 DSNJ319I 437 DSNL001I 405 DSNL002I 422 DSNL003I 405 DSNL004I 405 DSNL005I 422 DSNL006I 422 DSNL009I 415 DSNL010I 415 DSNL030I 562 DSNL080I 406, 407 DSNL200I 408 DSNL432I 422 DSNL433I 422 DSNL500I 561 DSNL501I 559, 561 DSNL502I 559, 561 DSNL700I 560 DSNL701I 560 DSNL702I 560 DSNL703I 560 DSNL704I 560 DSNL705I 560 DSNM001I 393, 400 DSNM002I 400, 526, 534 DSNM003I 393, 400 DSNM004I 464, 526 DSNM005I 396, 464, 527 DSNP001I 554 DSNP007I 554 DSNP012I 552 DSNR001I 325 DSNR002I 325, 598 DSNR003I 325, 444, 612, 613, 614 DSNR004I 325, 450, 452, 599, 608 DSNR005I 325, 452, 599, 613 DSNR006I 325, 453, 599 DSNR007I 325, 450, 452 DSNR031I 452 DSNT360I 369, 372, 375 DSNT361I 369, 372, 375 DSNT362I 369, 372, 375 DSNT392I 375, 1117 DSNT397I 372, 375 DSNU086I 550, 551
message by identifier (continued) DSNU234I 710 DSNU244I 710 DSNU561I 558 DSNU563I 558 DSNV086E 534 DSNV400I 436 DSNV401I 389, 394, 395, 436, 532 DSNV402I 321, 384, 398, 410, 436 DSNV404I 386, 398 DSNV406I 384, 389, 394, 395, 532 DSNV407I 384 DSNV408I 389, 395, 402, 457, 532 DSNV414I 389, 395, 402, 533 DSNV415I 389, 395, 403, 533 DSNV431I 389 DSNV435I 457 DSNX940I 417 DSNY001I 325 DSNY002I 326 DSNZ002I 325 DXR105E 382 DXR117I 381 DXR121I 383 DXR122E 521 DXR165I 383 EDC3009I 553 IEC161I 546 message processing program (MPP) See MPP (message processing program) MIGRATE command of DFSMShsm (Hierarchical Storage Manager) 107 mixed data altering subtype 88 mode of a lock 825 MODIFY irlmproc,ABEND command of z/OS stopping IRLM 382 MODIFY utility retaining image copies 491 monitor program using IFI 1157 using OMEGAMON 1202 MONITOR1 privilege description 136 MONITOR2 privilege description 136 monitoring application packages 1203 application plans 1203 CAF connections 385 CICS 1193 connections activity 398 databases 369 DB2 1193 DSNC commands for 390 IMS 1193 IRLM 381, 382 server-elapsed time for remote requests 1021 threads 390 tools DB2 trace 1195 monitor trace 1198 performance 1191 TSO connections 385 user-defined functions 378 using IFI 1157
moving DB2 data 105 MPP (message processing program), connection control 398 multi-character command prefix See command prefix, multi-character multi-site update illustration 472 process 471 multilevel security advantages 192 constraints 206 DB2 resource classes 196 definition 192 discretionary access checking 193 edit procedures 206 field procedures 206 global temporary tables 205 hierarchies for objects 196 implementing 196 in a distributed environment 206 mandatory access checking 193 row-level granularity 197 security category 193 security label 193 security level 193 SNA support 207 SQL statements 198 TCP/IP support 206 triggers 206 using utilities with 204 validation procedures 206 views 205 multiple allegiance 716 multivolume archive log data sets 434 MxxACT DD statement for stand-alone log services OPEN request 1132 MxxARCHV DD statement for stand-alone log services OPEN request 1131 MxxBSDS DD statement for stand-alone log services OPEN request 1131
N
NACTIVE column SYSTABSTATS catalog table 910 NACTIVEF column of SYSTABLESPACE catalog table data collected by RUNSTATS utility 910 naming convention implicitly created table spaces 45 VSAM data sets 38 NEARINDREF column SYSTABLEPART catalog table 908 SYSTABLEPART_HIST catalog table 915 NEAROFFPOSF column SYSINDEXPART catalog table data collected by RUNSTATS utility 907 SYSINDEXPART_HIST catalog table 914 nested table expression processing 979 NetView monitoring errors in the network 420 network ID (NID) See NID (network ID) NID (network ID) indoubt threads 527 thread identification 396 unique value assigned by IMS 396 use with CICS 532
NLEAF column SYSINDEXES catalog table data collected by RUNSTATS utility 906 SYSINDEXES_HIST catalog table 914 SYSINDEXSTATS catalog table 907 SYSINDEXSTATS_HIST catalog table 914 NLEVELS column SYSINDEXES catalog table data collected by RUNSTATS utility 906 SYSINDEXES_HIST catalog table 914 SYSINDEXSTATS catalog table 907 SYSINDEXSTATS_HIST catalog table 914 non-DB2 utilities effect on real-time statistics 1257 noncorrelated subqueries 786 nonsegmented table space dropping 705 locking 823 scan 951 normal read 672 NOT NULL clause CREATE TABLE statement requires presence of data 293 notices, legal 1437 NPAGES column SYSTABLES catalog table 909 SYSTABSTATS catalog table 910 SYSTABSTATS_HIST catalog table 915 NPAGESF column SYSTABLES catalog table data collected by RUNSTATS utility 909 SYSTABLES_HIST catalog table 915 null value effect on storage space 1111 NUMBER OF LOGS field of panel DSNTIPL 704 NUMCOLUMNS column SYSCOLDIST catalog table access path selection 904 SYSCOLDIST_HIST catalog table 913 SYSCOLDISTSTATS catalog table 904 numeric data format in storage 1114
O
OASN (originating sequence number) indoubt threads 527 part of the NID 396 object controlling access to 133, 191 ownership 145, 148 object of a lock 827 object registration table (ORT) See registration tables for DDL objects recovering dropped objects 509 offloading active log 430 description 429 messages 432 trigger events 431 OMEGAMON accounting report concurrency scenario 877 overview 646
OMEGAMON (continued) description 1191, 1202 scenario using reports 876 statistics report buffer pools 683 DB2 log 701 EDM pool 687 thread queuing 749 online monitor program using IFI 1157 OPEN statement performance 978 operation continuous 12 description 367 log 298 operator CICS 16 commands 317 not required for IMS start 16 START command 18 optimistic concurrency control 852 optimization hints 806 Optimization tools tables used by 1205, 1233 OPTIMIZE FOR n ROWS clause 795 interaction with FETCH FIRST clause 795 OPTIMIZE FOR n ROWS clause effect on distributed performance 1014, 1015 interaction with FETCH FIRST clause 1015 ORDER BY clause effect on OPTIMIZE clause 796 ORGRATIO column SYSLOBSTATS catalog table 908 SYSLOBSTATS_HIST catalog table 915 originating sequence number (OASN) See OASN (originating sequence number) originating task 994 ORT (object registration table) See registration tables for DDL OS/390 environment 13 outer join EXPLAIN report 961 materialization 961 output area used in IFI command request 1161 description 1185 example 1162 WRITE request 1179 output, unsolicited CICS 332 operational control 332 subsystem messages 332 overflow 1118 OWNER qualifies names in plan or package 145 ownership changing 147 ownership of objects establishing 145, 146 privileges 146
P
PACKADM authority description 140
package accounting trace 1197 administrator 171, 176 authorization to execute SQL in 149 binding EXPLAIN option for remote 941 PLAN_TABLE 933 controlling use of DDL 215, 227 inoperative, when privilege is revoked 186 invalidated dropping a view 96 dropping an index 94 when privilege is revoked 186 when table is dropped 89 list privilege needed to include package 164 privileges needed to bind 154 monitoring 1203 privileges description 130 explicit 137 for copying 154 of ownership 146 remote bind 154 retrieving catalog information 190 RLFPKG column of RLST 727 routine 154 SKPT (skeleton package table) 685 page 16-KB 113 32-KB 113 8-KB 113 buffer pool 672 locks description 821 in OMEGAMON reports 875 number of records description 113 root 117 size of index 117 table space 44 page set control records 1120 copying 493 page size choosing 45 choosing for LOBs 46 PAGE_RANGE column of PLAN_TABLE 948 PAGESAVE column SYSTABLEPART catalog table data collected by RUNSTATS utility 908 updated by LOAD and REORG utilities for data compression 711 SYSTABLEPART_HIST catalog table 915 Parallel Access Volumes (PAV) 716 parallel processing description 991 disabling using resource limit facility 733 enabling 997 monitoring 1001 related PLAN_TABLE columns 949 tuning 1004 parallelism See also parallel processing modes 733 PARM option of START DB2 command 325
partial recovery. See point-in-time recovery participant in multi-site update 471 in two-phase commit 459 partition adding 80, 84 altering 82 boundary changing 82 changing to previous 84 extending 84 compressing data 708 index-controlled, redefining, procedure 557 rotating 83 table-controlled, redefining, procedure 557 partition scan, limited 948 partitioned data set, managing 20 partitioned table space locking 822 partner LU trusting 244 verifying by VTAM 242 PassTicket configuring to send 263 password changing expired ones when using DRDA 239 encrypting, for inbound IDs 244 encrypting, from workstation 263 RACF, encrypted 263 requiring, for inbound IDs 244 sending, with attachment request 262 pattern character examples 223 in DDL registration tables 219 PAV (Parallel Access Volumes) 716 PC option of START irlmproc command 836 PCLOSEN subsystem parameter 697 PCLOSET subsystem parameter 697 PCTFREE effect on DB2 performance 658 PCTPAGES column SYSTABLES catalog table 909 SYSTABLES_HIST catalog table 915 SYSTABSTATS catalog table 910 PCTROWCOMP column SYSTABLES catalog table 711 data collected by RUNSTATS utility 909 SYSTABLES_HIST catalog table 915 SYSTABSTATS catalog table 711, 910 updated by LOAD and REORG for data compression 711 PERCACTIVE column SYSTABLEPART catalog table data collected by RUNSTATS utility 908 SYSTABLEPART_HIST catalog table 915 PERCDROP column SYSTABLEPART catalog table data collected by RUNSTATS utility 909 SYSTABLEPART_HIST catalog table 915 performance affected by cache for authorization IDs 151 CLOSE NO 657 data set distribution 661 EDM and buffer pools 657 extents 665 I/O activity 657
performance (continued) affected by (continued) lock size 824 PCTFREE 658 PRIQTY clause 665 secondary authorization IDs 161 storage group 29 monitoring planning 639 RUNSTATS 657 tools 1191 trace 1198 using OMEGAMON 1202 with EXPLAIN 931 performance considerations DPSI 792 scrollable cursor 790 Performance Reporter for OS/390 1203 phases of execution restart 449 PIECESIZE clause ALTER INDEX statement recommendations 661 relation to PRIQTY 665 CREATE INDEX statement recommendations 661 relation to PRIQTY 665 PLAN option of DSNC DISPLAY command 390 PLAN_TABLE table column descriptions 933 report of outer join 961 plan, application See application plan planning auditing 127 security 127 POE See port of entry point of consistency CICS 459 description 427 IMS 459 recovering data 499 single system 459 point-in-time recovery catalog and directory 498 description 504 pointer, overflow 1118 pool, inactive connections 742 populating tables 61 port of entry 246, 252 RACF APPCPORT class 273 RACF SERVAUTH class 273 postponed abort unit of recovery 462 power failure recovery scenario, z/OS 522 PQTY column SYSINDEXPART catalog table data collected by RUNSTATS utility 907 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table data collected by RUNSTATS utility 909 SYSTABLEPART_HIST catalog table 915 predicate description 755 evaluation rules 759
predicate (continued) filter factor 766 generation 775 impact on access paths 755 indexable 757 join 756 local 756 modification 775 properties 755 stage 1 (sargable) 757 stage 2 evaluated 757 influencing creation 801 subquery 756 PREFORMAT option of LOAD utility 664 option of REORG TABLESPACE utility 664 preformatting space for data sets 664 primary authorization ID See authorization ID, primary PRIMARY_ACCESSTYPE column of PLAN_TABLE 946 PRINT command of access method services 508 print log map utility before fall back 617 control of data set access 282 prints contents of BSDS 380, 445 prioritizing resources 721 privilege description 133 executing an application plan 130 exercised by type of ID 161 exercised through a plan or package 148, 154 explicitly granted 133, 143 granting 130, 173, 181, 187 implicitly held 145, 148 needed for various roles 171 ownership 146 remote bind 154 remote users 174 retrieving catalog information 187, 191 revoking 181 routine plans, packages 154 types 134, 138 used in different jobs 171 privilege selection, sample security plan 302 problem determination using OMEGAMON 1202 PROCEDURE privilege 136 process description 128 processing attachment requests 245, 258 connection requests 233, 235 sign-on requests 236, 238 processing speed processor resources consumed accounting trace 650, 1199 buffer pool 679 fixed-length records 667 thread creation 739 thread reuse 666 traces 666 transaction manager 1195 RMF reports 1194 time needed to perform I/O operations 660 PROCLIM option of IMS TRANSACTION macro 749
production binder description 171 privileges 178 Profile tables 1233 DSN_VIRTUAL_INDEXES 1233 project activity sample table 1043 project sample table 1042 protocols SNA 242 TCP/IP 249 PSB name, IMS 329 PSEUDO_DELETED_ENTRIES column SYSINDEXPART catalog table 907 SYSINDEXPART_HIST catalog table 914 PSRCP (page set recovery pending) status description 63 PSTOP transaction type 398 PUBLIC AT ALL LOCATIONS clause GRANT statement 174 PUBLIC clause GRANT statement 173 PUBLIC identifier 173 PUBLIC* identifier 174
Q
QMF (Query Management Facility) database for each user 44 options 750 performance 750 QSAM (queued sequential access method) 433 qualification area used in IFI description 1128 description of fields 1165 READS request 1164 restricted IFCIDs 1164 restrictions 1172 qualified objects ownership 146 QUALIFIER qualifies names in plan or package 145 Query Management Facility (QMF) See QMF (Query Management Facility) query parallelism 991 QUERYNO clause reasons to use 810 queued sequential access method (QSAM) See QSAM (queued sequential access method) QUIESCE option STOP DB2 command 400, 448
R
RACF (Resource Access Control Facility) authorizing access to data sets 132, 281, 283 access to protected resources 267 access to server resource class 275 group access 271 IMS access profile 271 SYSADM and SYSOPR authorities 271 checking connection processing 233, 235 inbound remote IDs 243 sign-on processing 236, 238
RACF (Resource Access Control Facility) (continued) defining access profiles 266 DB2 resources 265, 278 protection for DB2 264, 278 remote user IDs 271 started procedure table 268 user ID for DB2 started tasks 267 description 131 PassTickets 263 passwords, encrypted 263 typical external security system 231 when supplying secondary authorization ID 235, 237 RACF access control module 1114 RBA (relative byte address) description 1115 range shown in messages 432 RCT (resource control table) DCT entry 388 ERRDEST option 332, 388 re-creating tables 90 read asynchronously (READA) 1177 read synchronously (READS) 1162 READA (read asynchronously) 1177 reading normal read 672 sequential prefetch 672 READS (read synchronously) 1162, 1164 real storage 714 REAL TIME STATS field of panel DSNTIPO 1237 real-time statistics accuracy 1260 for DEFINE NO objects 1258 for read-only objects 1258 for TEMP table spaces 1258 for work file table spaces 1258 improving concurrency 1260 in data sharing 1259 when DB2 externalizes 1250 real-time statistics tables altering 1236 contents 1237 creating 1236 description 1235 effect of dropping objects 1258 effect of mass delete operations 1259 effect of SQL operations 1258 effect of updating partitioning keys 1259 establishing base values 1237 INDEXSPACESTATS 1236 recovering 1260 setting up 1235 setting update interval 1237 starting 1237 TABLESPACESTATS 1236 reason code X00C90088 816 X00C9008E 815 REBIND PACKAGE subcommand of DSN options ISOLATION 850 OWNER 148 RELEASE 847
REBIND PLAN subcommand of DSN options ACQUIRE 847 ISOLATION 850 OWNER 148 RELEASE 847 rebinding after creating an index 94 after dropping a view 96 automatically EXPLAIN processing 941 REBUILD INDEX utility effect on real-time statistics 1255 REBUILD-pending status description for indexes 477 record performance considerations 113 size 113 RECORDING MAX field of panel DSNTIPA preventing frequent BSDS wrapping 615 RECOVER BSDS command copying good BSDS 441 RECOVER INDOUBT command free locked resources 532 recover indoubt thread 469 RECOVER privilege description 136 RECOVER TABLESPACE utility DFSMSdss concurrent copy 495 recovers data modified after shutdown 618 RECOVER utility cannot use with work file table space 497 catalog and directory tables 498 data inconsistency problems 490 deferred objects during restart 455 functions 495 kinds of objects 495 messages issued 495 options TOCOPY 504 TOLOGPOINT 504 TOLOGPOINT in application program error 524 TORBA 505 problem on DSNDB07 497 recovers pages in error 374 running in parallel 492 use of fast log apply during processing 492 RECOVER utility, DFSMSdss RESTORE command 36 RECOVERDB privilege description 135 recovery BSDS 543 catalog and directory 498 data set using DFSMS 495 using DFSMShsm 481 using non-DB2 dump and restore 508 database active log 1115 using a backup copy 477 using RECOVER TOCOPY 504 using RECOVER TOLOGPOINT 504 using RECOVER TORBA 505 down-level page sets 548 dropped objects 509 dropped table 509 dropped table space 511
recovery (continued) IFI calls 1190 indexes 477 indoubt threads 588 indoubt units of recovery CICS 389, 531 IMS 395 media 496 minimizing outages 481 multiple systems environment 462 operation 478 point-in-time 504 prior point of consistency 499 real-time statistics tables 1260 reducing time 480 reporting information 484 restart 486, 617 scenarios See recovery scenarios subsystem 1115 system procedures 475 table space COPY 508 dropped 511 DSN1COPY 508 point-in-time 486 QUIESCE 486 RECOVER TOCOPY 504 RECOVER TOLOGPOINT 504 RECOVER TORBA 505 scenario 550 work file table space 497 recovery log description 8 record formats 1123 RECOVERY option of REPORT utility 524 recovery scenarios application program error 524 CICS-related failures application failure 529 attachment facility failure 534 manually recovering indoubt units of recovery 531 not operational 529 DB2-related failures active log failure 535 archive log failure 539 BSDS 542 catalog or directory I/O errors 551 database failures 546 subsystem termination 534 system resource failures 535 table space I/O errors 550 disk failure 522 failure during log initialization or current status rebuild 599 IMS-related failures 525 application failure 528 control region failure 526 fails during indoubt resolution 526 indoubt threads 588 integrated catalog facility catalog VVDS failure 552 invalid LOB 550 IRLM failure 521 out of space 554 restart 597 starting 324 z/OS failure 522
RECP (RECOVERY-pending) status description 63 redefining an index-based partition 557 redefining a table-based partition 557 redo log records 1116 REFERENCES privilege description 134 referential constraint adding to existing table 77 data consistency 294 recovering from violating 558 referential structure, maintaining consistency for recovery 491 refresh age 894 REFRESH TABLE statement 891 registering a base table as 888 registration tables for DDL adding columns 216, 228 CREATE statements 226 creating 216 escape character 219 examples 220, 225 function 215, 227 indexes 216 managing 216 pattern characters 219 preparing for recovery 477 updating 228 relative byte address (RBA) See RBA (relative byte address) RELCURHL subsystem parameter recommendation 845 RELEASE option of BIND PLAN subcommand combining with other options 847 RELEASE LOCKS field of panel DSNTIP4 effect on page and row locks 861 recommendation 845 remote logical unit, failure 561 remote request 242, 252 reoptimizing access path 779 REORG privilege description 135 REORG UNLOAD EXTERNAL 106 REORG utility effect on real-time statistics 1253 examples 87 moving data 106 REPAIR privilege description 135 REPAIR utility resolving inconsistent data 624 REPORT utility options RECOVERY 524 TABLESPACESET 524 table space recovery 484 REPRO command of access method services 508, 543 RESET INDOUBT command reset indoubt thread 470 residual recovery entry (RRE) See RRE (residual recovery entry) Resource Access Control Facility (RACF) See RACF (Resource Access Control Facility) resource allocation 737 performance factors 737
resource control table (RCT) See RCT (resource control table) resource limit facility (governor) calculating service units 732 database 10 description 722 distributed environment 722 governing by plan or package 729 preparing for recovery 477 specification table (RLST) See RLST (resource limit specification table) stopping and starting 724 resource manager resolution of indoubt units of recovery 465 Resource Measurement Facility (RMF) 1191, 1194 resource objectives 720 Resource Recovery Services (RRS), controlling connections 401 Resource Recovery Services attachment facility (RRSAF) RACF profile 274 stored procedures and RACF authorization 274 RESOURCE TIMEOUT field of panel DSNTIPI 837 resource translation table (RTT) See RTT (resource translation table) resources defining to RACF 265 limiting 721 response time 667 restart See also conditional restart See also restarting automatic 453 backward log recovery failure during 613 phase 452 cold start situations 619 conditional control record governs 455 excessive loss of active log data 620 total loss of log 619 current status rebuild failure during 599 phase 450 data object availability 454 DB2 447 deferring processing objects 454 effect of lost connections 463 forward log recovery failure during 608 phase 451 log initialization failure during 599 phase 449 multiple systems environment 462 normal 448 overriding automatic 454 preparing for recovery 486 recovery operations for 457 resolving inconsistencies after 622 unresolvable BSDS problems during 616 log data set problems during 616 RESTART ALL field of panel DSNTIPS 454 RESTORE phase of RECOVER utility 496 RESTORE SYSTEM utility 37 restoring data to a prior level 499
RETAINED LOCK TIMEOUT field of installation panel DSNTIPI 838 RETLWAIT subsystem parameter 838 REVOKE statement cascading effect 180 delete a view 185 examples 180, 187 format 180 invalidates a plan or package 186 privileges required 164 revoking SYSADM authority 186 RID (record identifier) pool size 689 storage allocation 689 estimation 689 use in list prefetch 974 RLFASUERR column of RLST 727 RLFASUWARN column of RLST 727 RLST (resource limit specification table) columns 725 creating 723 distributed processing 731 precedence of entries 728 RMF (Resource Measurement Facility) 1191, 1194 RO SWITCH CHKPTS field of installation panel DSNTIPL 697 RO SWITCH TIME field of installation panel DSNTIPL 697 rollback effect on performance 703 maintaining consistency 461 unit of recovery 428 root page description 117 index 117 route codes for messages 323 routine example, authorization 157 plans, packages 154 retrieving information about authorization IDs 190 routine privileges 136 row formats for exit routines 1110 validating 1084 row-level security security label column 198 using SQL statements 198 ROWID index-only access 946 ROWID column inserting 65 loading data into 62 RR (repeatable read) claim class 869 drain lock 870 effect on locking 852 how locks are held (figure) 856 page and row locking 856 RRDF (Remote Recovery Data Facility) altering a table for 87 RRE (residual recovery entry) detect 396 logged at IMS checkpoint 464 not resolved 464 purge 396
RRSAF (Recoverable Resource Manager Services attachment facility) application program authorization 151 transactions using global transactions 821 RRSAF (Resource Recovery Services attachment facility) application program running 331 RS (read stability) claim class 869 effect on locking 851 page and row locking (figure) 855 RTT (resource translation table) transaction type 398 RUN subcommand of DSN example 327 RUNSTATS utility aggregate statistics 916 effect on real-time statistics 1255 timestamp 919 use tuning DB2 657 tuning queries 916 RVA (RAMAC Virtual Array) backup 495
S
sample application databases, for 1052 structure of 1051 sample exit routine connection location 1056 processing 1061 supplies secondary IDs 234 edit 1081 sign-on location 1056 processing 1061 supplies secondary IDs 237 sample library See SDSNSAMP library sample security plan new application 174, 180 sample table 1035 DSN8810.ACT (activity) 1035 DSN8810.DEMO_UNICODE (Unicode sample) 1045 DSN8810.DEPT (department) 1036 DSN8810.EMP (employee) 1038 DSN8810.EMP_PHOTO_RESUME (employee photo and resume) 1041 DSN8810.EMPPROJACT (employee-to-project activity) 1044 DSN8810.PROJ (project) 1042 DSN8810.PROJACT (project activity) 1043 views on 1046 SBCS data altering subtype 88 scheduled tasks checking status 347 defining 337 listing 347 removing 335, 348
schema privileges 136 schema definition authorization to process 59 description 58 example of processor input 59 processing 59 processor 58 scope of a lock 821 SCOPE option START irlmproc command 836 scrollable cursor block fetching 1011 optimistic concurrency control 852 performance considerations 790 SCT02 table space description 8 placement of data sets 698 SDSNLOAD library loading 397 SDSNSAMP library processing schema definitions 59 SECACPT option of APPL statement 243 secondary authorization ID See authorization ID, secondary SECQTYI column SYSINDEXPART catalog table 907 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table 909 SYSTABLEPART_HIST catalog table 915 SecureWay Security Server for OS/390 19 security acceptance options 243 access to data 127 DB2 data sets 281 administrator privileges 171 authorizations for stored procedures 156 CICS 279 closed application 215, 227 DDL control registration tables 215 description 127 IMS 279 measures in application program 153 measures in force 290 mechanisms 238 planning 127 sample security plan 301 system, external 231 security administrator 171 SECURITY column of SYSIBM.SYSROUTINES catalog table, RACF access to non-DB2 resources 277 security label definition 193 security label column 198 segment of log record 1120 segmented table space locking 822 scan 952 SEGSIZE clause of CREATE TABLESPACE recommendations 952 SELECT privilege description 134 SELECT statement example SYSIBM.SYSPLANDEP 90
SELECT statement (continued) example (continued) SYSIBM.SYSTABLEPART 68 SYSIBM.SYSVIEWDEP 90 sequence privileges of ownership 146 sequences improving concurrency 820 sequential detection 976, 977 sequential prefetch bind time 974 description 973 sequential prefetch threshold (SPTH) 674 SET ARCHIVE command description 320 SET CURRENT DEGREE statement 997 SET CURRENT SQLID statement 134 SET ENCRYPTION PASSWORD 208 SHARE INTENT EXCLUSIVE lock mode 826, 866 lock mode LOB 866 page 825 row 825 table, partition, and table space 825 SHDDEST option of DSNCRCT macro 332 sign-on exit point 1057 exit routine. See sign-on exit routine processing See sign-on processing requests 1057 sign-on exit routine debugging 1063 default 237 description 1055 performance considerations 1062 sample 237 location 1056 provides secondary IDs 1061 secondary authorization ID 237 using 237 writing 1055 sign-on processing choosing for remote requests 243 initial primary authorization ID 236 invoking RACF 236 requests 232 supplying secondary IDs 237 usage 232 using exit routine 237 SIGNON-ID option of IMS 329 simple table space locking 822 single logging 9 SKCT (skeleton cursor table) description 8 EDM pool 685 EDM pool efficiency 687 locks on 829 skeleton cursor table (SKCT) See SKCT (skeleton cursor table) skeleton package table (SKPT) See SKPT (skeleton package table) SKPT (skeleton package table) description 8
SKPT (skeleton package table) (continued) EDM pool 685 locks on 829 SMF (System Management Facility) buffers 1200 measured usage pricing 667 record types 1196, 1198 trace record accounting 1198 auditing 289 format 1139 lost records 1200 recording 1200 statistics 1196 type 89 records 667 SMS (Storage Management Subsystem) See DFSMS (Data Facility Storage Management Subsystem) SMS archive log data sets 434 SNA mechanisms 238 protocols 242 software protection 292 sort description 690 pool 690 program reducing unnecessary use 713 RIDs (record identifiers) 978 when performed 978 removing duplicates 978 shown in PLAN_TABLE 977 SORT POOL SIZE field of panel DSNTIPC 690 sorting sequence, altering by a field procedure 1093 space attributes 69 specifying 82 SPACE column SYSTABLEPART catalog table 909 SPACE column of SYSTABLESPACE catalog table data collected by RUNSTATS utility 910 space reservation options 658 SPACEF column SYSINDEXES catalog table 906 SYSINDEXPART catalog table 907 SYSINDEXPART_HIST catalog table 914 SYSTABLEPART catalog table 909 SYSTABLEPART_HIST catalog table 915 SYSTABLES catalog table 909 SPACEF column of SYSTABLESPACE catalog table data collected by RUNSTATS utility 910 SPACENAM option DISPLAY DATABASE command 372 START DATABASE command 369 speed, tuning DB2 657 SPT01 table space 8 SPTH (sequential prefetch threshold) 674 SPUFI disconnecting 387 resource limit facility (governor) 729 SQL (Structured Query Language) performance trace 737 statement cost 738 statements See SQL statements performance factors 738 transaction unit of recovery 427 SQL authorization ID See authorization ID, SQL
SQL statements DECLARE CURSOR to ensure block fetching 1011 EXPLAIN monitor access paths 931 RELEASE 1009 SET CURRENT DEGREE 997 SQLCA (SQL communication area) reason code for deadlock 816 reason code for timeout 815 SQLCODE -30082 240 -510 860 -905 728 SQLSTATE '08001' 240 '57014' 728 SQTY column SYSINDEXPART catalog table 907 SYSTABLEPART catalog table 909 SSM (subsystem member) error options 398 specified on EXEC parameter 397 thread reuse 748 SSR command of IMS entering 320 prefix 333 stand-alone utilities recommendation 380 standard, SQL (ANSI/ISO) schemas 58 star join 967 dedicated virtual memory pool 971 star schema defining indexes for 801 START DATABASE command example 368 problem on DSNDB07 497 SPACENAM option 369 START DB2 command description 325 entered from z/OS console 324 mode identified by reason code 400 PARM option 325 restart 455 START FUNCTION SPECIFIC command starting user-defined functions 377 START REGION command of IMS 400 START SUBSYS command of IMS 392 START TRACE command AUDIT option 286 controlling data 423 STARTDB privilege description 135 started procedures table in RACF 270 started-task address space 267 starting audit trace 286 databases 368 DB2 after an abend 326 process 324 IRLM process 381 table space or index space having restrictions 369 user-defined functions 377
state of a lock 825 statement table column descriptions 986 static SQL privileges required 164 statistics aggregate 916 created temporary tables 912 distribution 919 filter factor 910 history catalog tables 913, 917 materialized query tables 892 partitioned table spaces 912 trace class 4 1017 description 1196 STATS privilege description 135 STATSTIME column use by RUNSTATS 904 status CHECK-pending resetting 63 COPY-pending, resetting 63 STATUS column of DISPLAY DATABASE report 370 STDDEV function when evaluation occurs 950 STOGROUP privilege description 138 STOP DATABASE command example 376 problem on DSNDB07 497 SPACENAM option 369 timeout 815 STOP DDF command description 421 STOP FUNCTION SPECIFIC command stopping user-defined functions 378 STOP REGION command of IMS 400 STOP SUBSYS command of IMS 392, 400 STOP TRACE command AUDIT option 286 description 423 STOP transaction type 398 STOPALL privilege description 136 STOPDB privilege description 135 stopping audit trace 286 data definition control 228 databases 375 DB2 327 IRLM 382 user-defined functions 378 storage auxiliary 29 calculating locks 837 controller cache 715 external See auxiliary storage hierarchy 714 IFI requirements READA 1177
storage (continued) IFI requirements (continued) READS 1164 real 714 space of dropped table, reclaiming 89 using DFSMShsm to manage 35 storage controller cache 715 storage group, DB2 adding volumes 67 altering 67 assigning objects to 29 changing to SMS-managed 67 changing to use a new high-level qualifier 104 creating 29 default group 29 definition 27 description 5, 29 managing control interval sizing 28 deferring allocation 28 defining data sets 27 deleting data sets 28 extending data sets 28 moving data sets 28 reusing data sets 28 using SMS 30 moving data 108 order of use 29 privileges of ownership 146 SYSDEFLT 29 storage group, for sample application data 1052 storage management subsystem See DFSMS (Data Facility Storage Management Subsystem) stored procedure address space 268 altering 96 authority to access non-DB2 resources 277 authorizations 154, 156 commands 417 common SQL API 1386 DSNLEUSR 1295 DSNACCOR 1263 DSNACICS 1287 DSNAEXP 1306 example, authorization 157 IMS transactions 1298, 1302 limiting resources 722 monitoring using accounting trace 1026 privileges of ownership 146 RACF protection for 274 running concurrently 1023 WLM_REFRESH 1285 stored procedures ADMIN_COMMAND_DB2 1309 ADMIN_COMMAND_DSN 1321 ADMIN_COMMAND_UNIX 1324 ADMIN_DB_BROWSE 1327 ADMIN_DB_DELETE 1331 ADMIN_DS_LIST 1333 ADMIN_DS_RENAME 1339 ADMIN_DS_SEARCH 1342 ADMIN_DS_WRITE 1344 ADMIN_INFO_HOST 1349 ADMIN_INFO_SSID 1352 ADMIN_INFO_SYSPARM 1354
stored procedures (continued) ADMIN_JOB_CANCEL 1357 ADMIN_JOB_FETCH 1360 ADMIN_JOB_QUERY 1363 ADMIN_JOB_SUBMIT 1366 ADMIN_TASK_ADD 339 ADMIN_TASK_REMOVE 349 ADMIN_UTL_SCHEDULE 1369 ADMIN_UTL_SORT 1379 common SQL API Complete mode 1388 XML input document 1388 XML output document 1389 XML parameter documents 1387 GET_CONFIG 1391 filtering output 1390 GET_MESSAGE 1410 filtering output 1390 GET_SYSTEM_INFO 1418 filtering output 1390 scheduled execution 364 STOSPACE privilege description 136 string conversion exit routine. See conversion procedure subquery correlated tuning 785 join transformation 788 noncorrelated 786 tuning 785 tuning examples 790 subsystem controlling access 132, 231 recovery 1115 termination scenario 534 subsystem command prefix 11 subsystem member (SSM) See SSM (subsystem member) subtypes 88 synchronous data from IFI 1173 synchronous write analyzing accounting report 650 immediate 674, 685 synonym privileges of ownership 146 syntax diagram how to read xxvi SYS1.LOGREC data set 535 SYS1.PARMLIB library specifying IRLM in IEFSSNxx member 380 SYSADM authority description 142 revoking 186 SYSCOPY catalog table, retaining records in 514 SYSCTRL authority description 141 SYSIBM.ADMIN_TASKS 357 SYSIBM.IPNAMES table of CDB remote request processing 254 translating outbound IDs 254 SYSIBM.LUNAMES table of CDB accepting inbound remote IDs 240, 253 dummy row 243 remote request processing 240, 253 sample entries 247
SYSIBM.LUNAMES table of CDB (continued) translating inbound IDs 247 translating outbound IDs 240, 253 verifying attachment requests 243 SYSIBM.USERNAMES table of CDB managing inbound remote IDs 243 remote request processing 240, 253 sample entries for inbound translation 247 sample entries for outbound translation 261 translating inbound and outbound IDs 240, 253 SYSLGRNX directory table information from the REPORT utility 484 table space description 8 retaining records 514 SYSOPR authority description 140 usage 324 Sysplex query parallelism disabling Sysplex query parallelism 1005 disabling using buffer pool threshold 675 processing across a data sharing group 995 splitting large queries across DB2 members 991 system management functions, controlling 422 privileges 136 structures 7 system administrator description 171 privileges 175 System Management Facility (SMF) See SMF (System Management Facility) system monitoring monitoring tools DB2 trace 1195 system operator See SYSOPR authority system programmer 172 system-directed access authorization at second server 150 SYSUTILX directory table space 8
T
table altering adding a column 71 auditing 288 creating description 48 description 6 dropping implications 89 estimating storage 112 expression, nested processing 979 locks 821 ownership 146 populating loading data into 61 privileges 134, 146 qualified name 146 re-creating 90 recovery of dropped 509 registration, for DDL 215, 227
table (continued) retrieving IDs allowed to access 189 plans and packages that can access 190 types 48 table check constraints adding 80 dropping 80 table expressions, nested materialization 980 table space compressing data 708 copying 493 creating default database 45 default name 45 default space allocation 45 default storage group 45 description 44 explicitly 44 implicitly 45 deferring allocation of data sets 30 description 5 dropping 69 EA-enabled 42 for sample application 1052 loading data into 61 locks control structures 737 description 821 maximum addressable range 44 privileges of ownership 146 quiescing 486 re-creating 69 recovery See recovery, table space recovery of dropped 511 reorganizing 76 scans access path 951 determined by EXPLAIN 932 versions 76 recycling version numbers 77 table-controlled partitioning automatic conversion to 52 contrasted with index-controlled partitioning 52 implementing 51 using nullable partitioning columns 53 tables 1205 tables used in examples 1035 TABLESPACE privilege description 138 TABLESPACESET option of REPORT utility 524 TABLESPACESTATS contents 1237 real-time statistics table 1236 TCP/IP authorizing DDF to connect 278 keep_alive interval 744 protocols 249 temporary table monitoring 699 thread reuse 739 temporary work file See work file TERM UTILITY command when not to use 491
terminal monitor program (TMP) See TMP (terminal monitor program) terminating See also stopping DB2 abend 448 concepts 447 normal 447 normal restart 448 scenario 534 thread allied 404 attachment in IMS 393 CICS access to DB2 390 creation CICS 748 connections 749 description 736 IMS 748 performance factors 736 database access description 404 displaying CICS 390 IMS 398 distributed active 743 inactive vs. active 742 maximum number 741 pooling of inactive threads 742 monitoring in 390 queuing 749 reuse CICS 748 description 736 effect on processor resources 666 IMS 748 remote connections 745 TSO 739 when to use 739 steps in creation and termination 736 termination CICS 388, 748 description 739 IMS 394, 400, 748 time out for idle distributed threads 743 type 2, storage usage 713 TIME FORMAT field of panel DSNTIPF 1088 time routine description 1087 writing 1087 timeout changing multiplier IMS BMP and DL/I batch 838 utilities 840 description 815 idle thread 743 multiplier values 838 row vs. page locks 843 X00C9008E reason code in SQLCA 815 Tivoli Decision Support for OS/390 1203 TMP (terminal monitor program) DSN command processor 385 sample job 330 TSO batch work 330 TOCOPY option of RECOVER utility 504
TOKENE option of DSNCRCT macro 656 TOLOGPOINT option of RECOVER utility 504 TORBA option of RECOVER utility 505 trace accounting 1196 audit 1198 controlling DB2 423 IMS 424 description 1191, 1195 diagnostic CICS 424 IRLM 424 distributed data 1017 effect on processor resources 666 interpreting output 1139 monitor 1198 performance 1198 recommendation 1017 record descriptions 1139 record processing 1139 statistics description 1196 TRACE privilege description 136 TRACE SUBSYS command of IMS 392 tracker site 574 transaction CICS accessing 390 DSNC codes 321 entering 329 IMS connecting to DB2 392 entering 328 thread attachment 393 thread termination 394 using global transactions 821 SQL unit of recovery 427 transaction lock description 813 transaction manager coordinating recovery of distributed transactions 465 TRANSACTION option DSNC DISPLAY command 390 transaction types 398 translating inbound authorization IDs 247 outbound authorization IDs 260 truncation active log 431, 607 TSO application programs batch 17 conditions 327 foreground 17 running 327 background execution 330 commands issued from DSN session 328 connections controlling 384 DB2 385 disconnecting from DB2 387 monitoring 385 tuning 749 DB2 considerations 17
TSO (continued) DSNELI language interface module IFI 1159 link editing 327 entering DB2 commands 321 environment 327 foreground 739 requirement 17 resource limit facility (governor) 721 running SQL 739 tuning DB2 active log size 704 catalog location 699 catalog size 699 disk utilization 707 queries containing host variables 779 speed 657 virtual storage utilization 712 two-phase commit illustration 459 process 459 TYPE column SYSCOLDIST catalog table access path selection 904 SYSCOLDIST_HIST catalog table 913 SYSCOLDISTSTATS catalog table 904
U
undo log records 1116 Unicode sample table 1045 UNION clause effect on OPTIMIZE clause 796 removing duplicates with sort 978 unit of recovery description 427 ID 1123 illustration 427 in-abort backward log recovery 452 description 462 excluded in forward log recovery 451 in-commit description 461 included in forward log recovery 451 indoubt causes inconsistent state 448 definition 326 description 461 displaying 394, 532 included in forward log recovery 451 recovering CICS 389 recovering IMS 395 recovery in CICS 531 recovery scenario 526 resolving 464, 465, 470 inflight backward log recovery 452 description 461 excluded in forward log recovery 451 log records 1116 postponed displaying 395 postponed abort 462 rollback 428, 461
unit of recovery (continued) SQL transaction 427 unit of recovery ID (URID) 1123 UNLOAD utility delimited files 62 unqualified objects, ownership 145 unsolicited output CICS 323, 332 IMS 323 operational control 332 subsystem messages 332 UPDATE lock mode page 825 row 825 table, partition, and table space 825 update efficiency 684 UPDATE privilege description 134 updating registration tables for DDL 228 UR (uncommitted read) claim class 869 concurrent access restrictions 857 effect on locking 851 effect on reading LOBs 865 page and row locking 854 recommendation 820 URID (unit of recovery ID). See unit of recovery USAGE privilege distinct type 138 Java class 138 sequence 138 USE AND KEEP EXCLUSIVE LOCKS option of WITH clause 862 USE AND KEEP SHARE LOCKS option of WITH clause 862 USE AND KEEP UPDATE LOCKS option of WITH clause 862 USE OF privileges 138 user analyst 171 user-defined function controlling 377 START FUNCTION SPECIFIC command 377 example, authorization 157 monitoring 378 privileges of ownership 146 providing access cost 1025 starting 377 stopping 378 user-defined functions altering 96 controlling 377 user-defined table function improving query performance 798 user-managed data sets changing high-level qualifier 104 extending 41 name format 38 requirements 38 specifying data class 42 utilities access status needed 379 compatibility 871 concurrency 813, 868 controlling 379 description 11
utilities (continued) effect on real-time statistics 1250 executing running on objects with pages in LPL 374 internal integrity reports 299 timeout multiplier 840 types RUNSTATS 916 UTILITY TIMEOUT field of panel DSNTIPI 840 UTSERIAL lock 871
V
validating connections from remote application 238 existing rows with a new VALIDPROC 87 rows of a table 1084 validation routine altering assignment 86 checking existing table rows 87 description 294 ensuring data accuracy 294 row formats 1110 writing 1084 validation routine. See also VALIDPROC clause description 1084 VALIDPROC clause ALTER TABLE statement 86 exit points 1085 value descriptors in field procedures 1098 VARCHAR data type subtypes 88 VARIANCE function when evaluation occurs 950 VARY NET command of VTAM TERM option 416 varying-length records effect on processor resources 667 VDWQT option of ALTER BUFFERPOOL command 675 verifying VTAM partner LU 242 vertical deferred write threshold (VDWQT) 675 view altering 96 creating on catalog tables 191 description 7 dropping deleted by REVOKE 185 invalidates plan or package 96 EXPLAIN 982, 983 list of dependent objects 90 name qualified name 146 privileges effect of revoking table privileges 185 ownership 146 table privileges for 143 processing view materialization description 980 view materialization in PLAN_TABLE 948 view merge 979 reasons for using 7 virtual buffer pool assisting parallel sequential threshold (VPXPSEQT) 675
virtual buffer pool parallel sequential threshold (VPPSEQT) 675 virtual buffer pool sequential steal threshold (VPSEQT) 675 virtual storage buffer pools 712 improving utilization 712 IRLM 712 virtual storage access method (VSAM) See VSAM (virtual storage access method) Virtual Telecommunications Access Method (VTAM) See VTAM (Virtual Telecommunications Access Method) Visual Explain 794, 930, 931 volatile table 798 volume serial number 442 VPPSEQT option of ALTER BUFFERPOOL command 675 VPSEQT option of ALTER BUFFERPOOL command 675 VPXPSEQT option of ALTER BUFFERPOOL command 675 VSAM (virtual storage access method) control interval block size 433 log records 429 processing 508 volume data set (VVDS) recovery scenario 552 VTAM (Virtual Telecommunications Access Method) APPL statement See APPL statement commands VARY NET,TERM 416 controlling connections 242, 267 conversation-level security 243 partner LU verification 242 password choosing 242 VVDS recovery scenario 552
W
wait state at start 325 WBUFxxx field of buffer information area 1161 WebSphere description, attachment facility 15 WebSphere Application Server identify outstanding indoubt units of recovery 465 WITH clause specifies isolation level 862 WITH HOLD cursor effect on locks and claims 861 WLM_REFRESH stored procedure description 1285 option descriptions 1286 sample JCL 1287 syntax diagram 1286 work file table space minimize I/O contention 663 used by sort 691 work file database changing high-level qualifier 103 description 10 enlarging 557 error range recovery 497 minimizing I/O contention 663 problems 497 starting 368 used by sort 712 Workload Manager 745 WQAxxx fields of qualification area 1128, 1165
write claim class 869 write drain lock 870 write efficiency 684 write error page range (WEPR) 373 WRITE function of IFI 1179 WRITE TO OPER field of panel DSNTIPA 431 write-down control 195
X
XLKUPDLT subsystem parameter 845 XML input document common SQL API 1388 XML input documents versioning 1387 XML message documents versioning 1387 XML output document common SQL API 1389 XML output documents versioning 1387 XML parameter documents versioning 1387 XRC (Extended Remote Copy) 587 XRF (extended recovery facility) CICS toleration 476 IMS toleration 476
Z
z/OS command group authorization level (SYS) 320, 323 commands MODIFY irlmproc 382 STOP irlmproc 382 DB2 considerations 13 entering DB2 commands 320, 323 environment 13 IRLM commands control 318 performance options 717 power failure recovery scenario 522 workload manager 745